this post was submitted on 20 Jul 2023
Technology
Correct me if I'm wrong, but I thought the big risk with AI is its use as a disinfo tool. Skynet is nowhere near close, but a complete post-truth world is possible. It's already bad now... Could you imagine AI-generated recordings of crimes being used as evidence against people? There are already scam callers using voice recordings to make people think a relative has been kidnapped.
I really feel like most people aren't afraid of the right things when it comes to AI.
That's largely what these specialists are talking about: people emphasize the existential apocalypse scenarios when there are more pressing matters. I also think the intended purpose of a tool should be more of a concern than its training data in many cases. People keep freaking out about LLMs and art models while ignoring the plague of models built specifically to manipulate and predict the subconscious habits and activities of individuals. Models built specifically to recreate the concept of a unique individual and their likeness for financial reasons should also be regulated in new, unique ways. People shouldn't be able to be bought wholesale; instead they should be able to sell their likeness as a subscription, with the right to withdraw from future use, etc.
I think the way we think about a lot of things has to change based on the type of society we want. I vote for getting away from a system that lets a few own everything until everyone else no longer has the right to live.
Indeed, because the AI just makes shit up.
That was the problem with the lawyer who brought bullshit ChatGPT cases into court.
Hell, last week I did a search for last year's Super Bowl and learned that Patrick Mahomes apparently won it by kicking a game-winning field goal.
Disinfo is a huge, huge problem with these half-baked AI tools.
Isn't this already possible, though? Granted, AI can do it exponentially faster: write the article, generate the deepfakes, and then publish or whatever. But again, can't regular people already do this if they want? I mean, obvious exceptions aside, it's not AI that's generating deepfakes of politicians and celebrities; it's people using the tool.
It's been said already, but AI as a tool can be abused just like anything else. It's not AI that is unethical (necessarily), it is humans that use it unethically.
I dunno. I guess I just think about the early internet, all the shareware and forwards-from-grandma ("if you read this letter you have 5 seconds to share it", early 2000s type stuff), and how it's evolved into text-to-speech DIY crafts. AI is just the next step down a path we were already on. We were doing all of this without AI already; it's just so much more accessible now (which, IMO, is the only way for AI to truly be used for good: either it's 100% accessible to all, or it's hoarded away).
This also means there are going to be people who use it for shitty reasons. These are the same types of people who are the reason we have warning signs and laws in the first place.
It seems to come down to this: do we let something that can do harm be used despite it? I think there are levels to it, but the potential for good is just as high as the potential for disaster. It seems wrong to stop AI from possibly finding cures for cancer or sequencing genes for other ailments just because some creeps can use it for deepfakes. Besides, deepfakes would still have existed without AI, and we would be without any of the benefits AI could give us.
Note: for as interested and hopeful as I am about AI as a tool, I also try to stay very aware of how harmful it could be. But most ways I look at it, people like you and me using AI for personal projects, good or bad, just seem inconsequential compared to the sheer speed with which AI can create. Be it code, assets, images, text, puzzles, or patterns, we have one of our first major technological advancements in a while, and half of us are arguing over who gets to use it and why they shouldn't.
Last little tidbit: think about the AI art and concepts you've seen in the last year. Gollum as playing cards, teenage mutant ninja turtles as celebs, deepfakes, whathaveyou. Think about the last time you saw AI art. Do you feel as awed/impressed/annoyed by the AI art of yesterday as you did by the AI art of last year? Probably not; you probably saw it, thought "AI," and moved on.
I've got a social hypothesis that this is what deepfake generations are going to be like: it doesn't matter what images get created, because a thousand other creeps had the same idea and posted the same thing. At a certain point desensitization sets in and it all becomes redundant. So just because this can happen slightly more easily, are we going to sideline all the rest of the good the tool can do?
Don't get me wrong, I don't disagree by any means. It's an interesting place to be stuck; I'm just personally hoping the solution is pro-consumer. I really think a version of current AI could be a massive gain to people's daily lives, even if it's just for hobbies and mild productivity. But it only works if everyone gets it.