this post was submitted on 04 Sep 2024

Technology



Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way — on their own — matched their test scores.
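To make those percentages concrete, here is the arithmetic spelled out. The baseline numbers below are invented for illustration only; the article gives relative changes, not raw scores:

```python
# Hypothetical baselines, NOT figures from the study, chosen only to
# show what "48 percent more" and "17 percent worse" mean numerically.
baseline_practice = 100  # practice problems solved by the no-AI group
baseline_test = 50.0     # no-AI group's test score

gpt_practice = baseline_practice * 1.48    # "48 percent more" practice problems
gpt_test = baseline_test * (1 - 0.17)      # "17 percent worse" on the test
tutor_practice = baseline_practice * 2.27  # "127 percent more" with the tutor bot

print(round(gpt_practice), round(gpt_test, 1), round(tutor_practice))
```

The point of the contrast: the plain-ChatGPT group looked far better during practice yet landed below baseline on the test, while the tutor group looked better still during practice and only matched baseline.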

top 50 comments
[–] 2ugly2live@lemmy.world 1 points 2 months ago (5 children)

I don't even know if this is ChatGPT's fault. This would be the same outcome if someone just gave them the answers to a study packet. Yes, they'll have the answers because someone (or something) gave them to them, but they won't know how to get those answers on their own without being taught. Surprise: for kids to learn, they need to be taught. Shocker.

[–] N0body@lemmy.dbzer0.com 1 points 2 months ago (8 children)

Traditional instruction gave the same result as a bleeding edge ChatGPT tutorial bot. Imagine what would happen if a tiny fraction of the billions spent to develop this technology went into funding improved traditional instruction.

Better paid teachers, better resources, studies geared at optimizing traditional instruction, etc.

Move fast and break things was always a stupid goal. Turbocharging it with all this money is killing the tried and true options that actually produce results, while straining the power grid and worsening global warming.

[–] TallonMetroid@lemmy.world 1 points 2 months ago (1 children)

Investing in actual education infrastructure won't get VC techbros their yachts, though.

[–] elvith@feddit.org 0 points 2 months ago

It’s the other way round: education makes for less gullible people and for workers who demand more rights more freely and easily - and then those people come for their yachts…

[–] Petter1@lemm.ee 0 points 2 months ago

Imagine all the money spent on war would be invested into education 🫣what a beautiful world we would live in.

[–] otp@sh.itjust.works 0 points 2 months ago (2 children)

Traditional instruction gave the same result as a bleeding edge ChatGPT tutorial bot.

Interesting way of looking at it. I disagree with your conclusion about the study, though.

It seems like the AI tool would be helpful for things like assignments rather than tests. I think it's intellectually dishonest to ignore the gains in some environments because it doesn't have gains in others.

You're also comparing a young technology to methods that have been refined over thousands of years. Was the first automobile entirely superior to every horse?

I get that some people just hate AI because it's AI. For the people interested in nuance, I think this study is interesting. I think other studies will seek to build on it.

[–] Ilandar@aussie.zone 0 points 2 months ago (3 children)

What do the results of the third group suggest? AI doesn't appear to have hindered their ability to manage by themselves under test conditions, but it did help them significantly with their practice results. You could argue the positive reinforcement an AI tutor can provide during test preparations might help some students with their confidence and pre-exam nerves, which will allow them to perform closer to their best under exam conditions.

[–] cheddar@programming.dev 0 points 2 months ago

Yep. But the post title suggests that all students who used ChatGPT did worse. Fuck this clickbait shit.

[–] nyan@lemmy.cafe 0 points 2 months ago* (last edited 2 months ago)

It suggests that the best the chatbot can do, after being carefully tailored for its job, is no better than the old methods (because the goal is for the students to be able to handle the subject matter without having to check every common operation with a third party, regardless of whether that's a chatbot or a textbook, and the test is the best indicator of that). Therefore, spending the electricity to run an educational chatbot for high schoolers isn't justified at this time, but it's probably worth rechecking in a few years to see if its results have improved. It may also be worth doing extended testing to determine whether there are specific subsets of the student body that benefit more from the chatbot than others. And allowing the students to seek out an untailored chatbot on their own is strongly contraindicated.

[–] Vanth@reddthat.com 0 points 2 months ago (5 children)

I'm not entirely sold on the argument I lay out here, but this is where I would start were I to defend using chatGPT in school as they laid out in their experiment.

It's a tool. Just like a calculator. If a kid learns and does all their homework with a calculator, then suddenly it's taken away for a test, of course they will do poorly. Contrary to what we were warned about as kids though, each of us does carry a calculator around in our pocket at nearly all times.

We're not far off from a world where having an AI assistant with us 24/7 is feasible. Why not teach kids to use the tools they will have in their pocket for the rest of their lives?

[–] Schal330@lemmy.world 0 points 2 months ago (1 children)

As adults we are dubious of the results that AI gives us. We take the answers with a handful of salt and I feel like over the years we have built up a skillset for using search engines for answers and sifting through the results. Kids haven't got years of experience of this and so they may take what is said to be true and not question the results.

As you say, the kids should be taught to use the tool properly, and verify the answers. AI is going to be forced onto us whether we like it or not, people should be empowered to use it and not accept what it puts out as gospel.

[–] Petter1@lemm.ee 0 points 2 months ago

This is true for the whole internet, not only AI chatbots. Kids need to be taught that there is BS around. In fact, kids had to learn that even pre-internet. Every human has to learn that you can't blindly trust anything, that one has to think critically. This is nothing new. AI chatbots just show how flawed human education is these days.

[–] filister@lemmy.world 0 points 2 months ago (1 children)

I think here you also need to teach your kid not to trust this tool unconditionally and to question the quality of its output, as well as teaching them how to write better prompts. This is the same as with Google: if you put in shitty queries, you will get subpar results.

And believe me I have seen plenty of tech people asking the most lame prompts.

[–] otp@sh.itjust.works 0 points 2 months ago (3 children)

I remember teachers telling us not to trust the calculators. What if we hit the wrong key? Lol

Some things never change.

[–] Deceptichum@quokk.au 0 points 2 months ago (1 children)

I remember the teachers telling us not to trust Wikipedia, but they had utmost faith in the shitty old books that were probably never verified by another human before being published.

[–] isolatedscotch@discuss.tchncs.de 0 points 2 months ago (1 children)

i mean, usually wikipedia's references ARE from those old books

[–] Deceptichum@quokk.au 0 points 2 months ago (3 children)

Eh I find they’re usually from a more direct source. The schoolbooks are just information sourced from who knows where else.

[–] LifeInMultipleChoice@lemmy.world 0 points 2 months ago (13 children)

"tests designed for use by people who don't use chatgpt is performed by people who don't"

This is the same fn calculator argument we had 20 years ago.

A tool is a tool. It will come in handy, and if it will be there in real life, then it's a dumb test that takes it away.

[–] conciselyverbose@sh.itjust.works 0 points 2 months ago

The point of learning isn't just access to that information later. That basic understanding gets built on all the way up through the end of your education, and is the base to all sorts of real world application.

There's no overlap at all between people who can't pass a test without an LLM and people who understand the material.

[–] brey1013@lemmy.world 0 points 2 months ago

Shocked, I tell you!

[–] maegul@lemmy.ml 0 points 2 months ago (1 children)

Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

Tech is hyper focused on removing the "doing" and reproducing the result. Now that it's trying to put itself into the "thinking" part of human work, this tension is making itself unavoidable.

I think we can all take it as a given that we don't want to hand total control to machines, simply because of accountability issues. Which means we want a human "in the loop" to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don't mean that much. In a way the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways I suspect much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education and I wouldn't be surprised if banning them outright in what may become a harshly fought battle isn't too far away.

All that being said, I also think LLMs raise questions about what it is we're doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn't worth assessing. Of course, as I've said above, that's likely manifestly rubbish ... building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.

[–] Passerby6497@lemmy.world 0 points 2 months ago

LLMs likely have no good role to play in education and I wouldn't be surprised if banning them outright in what may become a harshly fought battle isn't too far away.

While I agree that LLMs have no place in education, you're not going to be able to do more than just ban them in class unfortunately. Students will be able to use them at home, and the alleged "LLM detection" applications are no better than throwing a dart at the wall. You may catch a couple students, but you're going to falsely accuse many more. The only surefire way to catch them is them being stupid and not bothering to edit what they turn in.

[–] thecheddarcheese@lemmy.blahaj.zone 0 points 2 months ago (3 children)

I mean, is it really that surprising? You're not analyzing anything, an algorithm just spits text at you. You're not gonna learn much from that.

[–] Ledivin@lemmy.world 0 points 2 months ago

You could always try reading the article

[–] daniskarma@lemmy.dbzer0.com 0 points 2 months ago (2 children)

In the study they said they used a modified version that acted as a tutor, that refused to give direct answers and gave hints to the solution instead.
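For anyone curious how that kind of behavior is set up: this is usually done with a system prompt rather than a different model. The study's actual prompt isn't quoted here, so everything below (the prompt wording, the helper name) is a hypothetical sketch of the general technique, assuming an OpenAI-style chat API message format:

```python
# Hypothetical system prompt for a hint-only math tutor. The study's real
# prompt is not public; this wording is illustrative only.
TUTOR_SYSTEM_PROMPT = (
    "You are a math tutor. Never state the final answer. "
    "Give one small hint at a time: point to the relevant concept, "
    "suggest the next step, or ask a guiding question."
)

def build_tutor_request(student_question: str) -> list[dict]:
    """Assemble the message list an OpenAI-style chat API expects."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

messages = build_tutor_request("Solve 3x + 5 = 20 for x.")
```

Nothing stops the model from leaking an answer anyway, which is why prompt-level "tutoring" is a soft constraint, not a guarantee.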

[–] Lobreeze@lemmy.world 0 points 2 months ago

That's like cheating with extra steps.

Ain't getting hints on your in class exam.

[–] Insig@lemmy.world 0 points 2 months ago (6 children)

At work we gave a 16/17-year-old work experience over the summer. He was using ChatGPT and not understanding the code it was outputting.

In his last week he asked why he was using a print statement, something like

print(f"message {thing}")
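For context, that line is a Python f-string: the expression inside the braces is evaluated and interpolated into the string at runtime. A quick illustration (the variable name `thing` comes from the comment above; the value is invented):

```python
thing = "hello"
message = f"message {thing}"  # the {thing} placeholder is replaced with the variable's value
print(message)  # prints: message hello
```

Not understanding even that after a summer of ChatGPT-assisted coding is the commenter's point.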

[–] vin@lemmynsfw.com 0 points 2 months ago (1 children)

Did those using tutor AI spend less time on learning? That would have been worth measuring

[–] EatATaco@lemm.ee 0 points 2 months ago

Interesting thought, I would be curious about this too.

[–] trougnouf@lemmy.world 0 points 2 months ago (1 children)

The title is pretty misleading. Kids who used ChatGPT to get hints/explanations rather than outright getting the answers did as well as those who had no access to ChatGPT. They probably had a much easier time studying/understanding with it so it's a win for LLMs as a teaching tool imo.

[–] Apollo42@lemmy.world 0 points 2 months ago* (last edited 2 months ago) (4 children)

Is it really a win for LLMs if the study found no significant difference between those using it as a tutor and those not?

[–] 8andage@sh.itjust.works 0 points 2 months ago* (last edited 2 months ago)

Maybe using LLM assistance was less stressful or quicker than self-study. The tutoring-focused LLM is definitely better than allowing full access to GPT itself, which is what is currently happening.

[–] arin@lemmy.world 0 points 2 months ago (2 children)

Would kids do better if the AI doesn't hallucinate?

[–] finitebanjo@lemmy.world 0 points 2 months ago

Would snails be happier if it kept raining? What can we do to make it rain forever and all time?
