For anything where you would ever expect a predictable, useful outcome from an arbitrary input. There is no possible path to LLMs ever doing anything close to that.
LLMs aren't driving cars. LLMs aren't doing financial modeling. Those are entirely different tools, with models heavily hand-crafted for specific applications.
Anyone using an LLM to provide therapy should get multiple life sentences in prison, regardless of outcomes. There is no possible way for LLMs to ever be genuinely useful for therapy. It's just a random text generator that's tuned well enough to sound good. It has no substance, and the underlying tech cannot possibly develop substance.
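To make "random text generator that's tuned" concrete, here's a toy sketch of the core loop: score candidate tokens, softmax them into probabilities, and draw one at random. The vocabulary and scores are completely made up for illustration, not from any real model:

```python
# Toy illustration: an LLM, at bottom, repeatedly samples the next token
# from a probability distribution. Vocabulary, scores, and temperature
# below are invented for demonstration only.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Softmax over token scores, then a weighted random draw."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores a model might assign after "I understand how you":
fake_logits = {"feel": 2.1, "think": 1.3, "hurt": 0.9, "are": 0.4}
print(sample_next_token(fake_logits))  # different runs emit different tokens
```

Run it a few times: the output varies, because variation is the mechanism. Nothing in that loop knows or cares whether the chosen word is true, safe, or appropriate; it only has to be probable.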
I can't tell if you're suggesting that foundation models (the technology underpinning LLMs) aren't being used for the things I said they're being used for, but I can assure you they are, either in commercial R&D or in live commercial products.
We can certainly agree that they shouldn't be used for these things, but the fact remains that they are.
Sources:
- Wayve is using foundation models for driving, and my impression is that their neural net extends all the way from sensor input to motor control: https://wayve.ai/thinking/introducing-gaia1/
- Research recommending the use of LLMs for giving financial advice: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4850039
- LLMs for therapy (a rough sketch of the state-machine pattern follows this list): https://blog.langchain.dev/mental-health-therapy-as-an-llm-state-machine/
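For a feel of what that therapy post means by "LLM as a state machine", here's a rough sketch of the general pattern: explicit states and hard-coded transitions wrapping the model. The states, crisis keywords, and the `fake_llm` stand-in are my own placeholders, not the blog's actual code:

```python
# Sketch of the "LLM as state machine" pattern from the linked post.
# States, transition rules, and fake_llm() are placeholders of my own.
from enum import Enum, auto

class State(Enum):
    INTAKE = auto()
    LISTEN = auto()
    ESCALATE = auto()  # hand off to a human, not the model

CRISIS_WORDS = {"hurt myself", "suicide"}  # toy heuristic, nowhere near sufficient

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text."""
    return f"[model reply constrained by prompt: {prompt!r}]"

def step(state: State, user_msg: str) -> tuple[State, str]:
    # The hard-coded guardrail runs *before* the model sees anything.
    if any(w in user_msg.lower() for w in CRISIS_WORDS):
        return State.ESCALATE, "Connecting you with a human counselor."
    if state is State.INTAKE:
        return State.LISTEN, fake_llm("Ask one open-ended intake question.")
    return State.LISTEN, fake_llm("Reflect the user's last message back.")

state, reply = step(State.INTAKE, "I've been feeling low lately")
print(state, reply)
```

Note that all of the actual safety lives in the hand-written state machine and keyword checks, not in the model; the LLM only fills in text inside whatever state it's been put in. Which rather underlines the accountability point below.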
So this all goes back to my point that some form of accountability is needed for how these tools get used. I haven't examined the specific legislative proposal closely enough to give a firm opinion on it, but I think it's a good thing that the conversation is happening in a serious way.