this post was submitted on 04 Sep 2024
Technology
Why are you so confident that the things you are learning from AI are correct? Are you just using it to gather other sources to review by hand or are you trying to have conversations with the AI?
We've all seen AI get the correct answer while the "show your work" part is nonsense, or vice versa. How do you verify what AI outputs to you?
I mean, why are you confident the work in textbooks is correct? Both have been proven unreliable, though I will admit LLMs are much more so.
The way you verify in this instance is actually going through the work yourself after you’ve been shown sources. They are explicitly not saying they take 1+1=3 as law, but instead asking how that was reached and working off that explanation to see if it makes sense and learn more.
Math is likely the best subject for this, too. Math has undeniable truths: a statement is either true or false. There are no (meaningful) opinions on how addition works other than the correct one.
The problem with this style of verification is that there is no authoritative source. Neither the AI nor you can verify for accuracy, and the AI carries no expectation of being accurate or of being revised.
I don't see how this is any better than running Google searches on Reddit or other message boards, looking for relevant discussions and basing your knowledge on those.
If AI were enabling something new, it might be worth it, but letting someone find slightly less (or more) shitty message board posts 10% more efficiently isn't worth what's happening. Other countries are capable of regulating a field as it fills out, so why can't America? We banned TikTok in under a month, didn't we?
I use it for explaining stuff when studying for uni and I do it like this: If I don't understand e.g. a definition, I ask an LLM to explain it, read the original definition again and see if it makes sense.
This is an informal approach, but if the definition is sufficiently complex, false answers are unlikely to lead to an understanding. Not impossible ofc, so always be wary.
For context: I'm studying computer science, so lots of math and theoretical computer science.
I, like the OP, was also studying math from a textbook and using GPT4 to help clear things up. GPT4 caught an error in the textbook.
The LLM doesn't have a theory of mind; it won't start over and try to explain a concept from a completely new angle, it mostly just repeats the same stuff over and over. Still, once I have figured something out, I can ask the LLM if my ideas are correct, and it sometimes makes small corrections.
Overall, most of my learning came from the textbook, and talking with the LLM about the concepts I had learned helped cement them in my brain. I didn't learn a whole lot from the LLM directly, but it was good enough to confirm what I learned from the textbook and sometimes correct mistakes.
I personally use its answers as a jumping-off point to do my own research, or I ask it for sources directly and check those out. I frequently use LLMs for learning about topics, but I definitely don't take anything they say at face value.
For a personal example, I use ChatGPT as my personal Japanese tutor. I use it to discuss and break down the nuances of various words or sayings, the names of certain conjugation forms, etc., and it is absolutely not 100% correct. But I can now take the names of things it gives me in native Japanese that I never would have known and look them up using other resources. Either it's correct and I find confirming information, or it's wrong and I can research further independently or ask it follow-up questions. It's certainly not as good as a human native speaker, but for $20 a month, and as someone who enjoys doing their own research, I fucking love it.
Hey, that's a cool thing to do! I'll try it. Learning a new language through LLMs sounds cool.
You check its work. I used it to calculate efficiency in a factory game and went through and made corrections to the inconsistencies I spotted. Always check its work.
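For what that checking can look like in practice, here's a made-up sketch: the recipe numbers and the `machines_needed` helper below are hypothetical, not taken from any actual game, but the idea is to re-derive a claimed ratio independently instead of trusting it.

```python
import math

# Hypothetical sanity check for an LLM-computed factory ratio.
# The recipe numbers here are invented for illustration only.

def machines_needed(target_rate, craft_time, output_per_craft):
    """Machines required to sustain target_rate items/sec, given that
    one machine yields output_per_craft items every craft_time seconds."""
    # Equivalent to target_rate / (output_per_craft / craft_time),
    # rounded up because a fractional machine can't exist.
    return math.ceil(target_rate * craft_time / output_per_craft)

# Suppose the LLM claimed 4 machines cover 2 items/sec when each
# machine crafts 1 item every 3 seconds. Re-deriving it:
checked = machines_needed(target_rate=2.0, craft_time=3.0, output_per_craft=1.0)
print(checked)  # 6 -- so the claimed "4" would be an inconsistency to correct
```

The point isn't this particular formula; it's that anything the model outputs with arithmetic in it is cheap to recompute yourself.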
Exactly. It's a helpful tool but it needs to be used responsibly. Writing it off completely is as bad a take as blindly accepting everything it spits out.
I'm not at all confident in the answers directly. I've gotten plenty of wrong answers from AI, and I've gotten plenty of correct ones. If anything, it's just more practice for critical thinking skills: separating what is true from what isn't.
When it comes to math, though, it's pretty straightforward. I'm just looking for context on some steps in the problems, maybe reminders of things I learned years ago and have forgotten, that sort of thing. As I said, I'm interested in actually understanding the material, because I'm using it for the things I'm working on, so I'm mainly reading through textbooks and using AI, as well as other online sources, to round out my understanding of the concepts. If I'm getting the right answers and the things I'm doing are working, it's a good indicator that I'm on the right path.
It's not like I'm doing cutting-edge physics or medical research where mistakes could cost lives.
It's sort of like saying poppy production overall is pretty negative, but if smart, critical people use it sparingly and apprehensively, opiates could be of great benefit to them.
That's all well and good, but AI is not being developed to help critical thinkers research slightly more easily; it's being created to reduce the amount of money companies spend on humans.
Until regulations are in place to guide the development of the technology in useful ways, I don't see why any of it should be permitted. What's the rush, anyway?
Well, I'm definitely not pushing for more AI, and I try to stay nuanced on the topic. Like I mentioned in my first comment, I have found it to be a very helpful tool, but used in other ways it could do more harm than good. I'm not involved in making or pushing AI, but as long as it's an available tool, I'm going to use it in the most responsible way I can and talk about how I use it. I can't control what other people do, but maybe I can help some people who are only using it to get answer hints, like in the article, find more useful ways of using it.
When it comes to regulation, yeah, I'm all for that. It's a sad reality that regulation always lags behind and generally doesn't get implemented until there's some sort of problem that scares the people in power, who are mostly too old to understand what's happening anyway.
And as to what's the rush: I'd say a combination of curiosity and good intentions mixed with the worst of capitalism, the carrot of financial gain for success and the stick of financial ruin for failure, and I don't have a clue what percentage of the pie each part makes up. I'm not saying it's a good situation, but it's the way things go, and I don't think anyone alive could stop it. Once something is out of the bag, there ain't any putting it back.
Basically, I'm with you that it will be used for things that make life worse for people, and that sucks. It would be great if that weren't the case, but it doesn't change the fact that I can't do anything about it. Meanwhile, it can still be a useful tool, so I'm going to use it as best I can, regardless of how others use it, because there's really nothing I can do except keep pushing forward the best I can, just like anyone else.