“We believe that users should have a say in how their attention is directed, and developers should be free to experiment with new ways of presenting information,” Bluesky’s chief executive, Jay Graber, told me in an email message.
Of course, there are also challenges to algorithmic choice. When the Stanford political science professor Francis Fukuyama led a working group that in 2020 proposed that outside entities offer algorithmic choice, critics raised many concerns.
Robert Faris and Joan Donovan, then of Harvard’s Shorenstein Center, wrote that they were worried that Fukuyama’s proposal could let platforms off the hook for their failures to remove harmful content. Nathalie Maréchal, Ramesh Srinivasan and Dipayan Ghosh argued that his approach would do nothing to change the tech platforms’ underlying business model, which incentivizes the creation of toxic and manipulative content.
Mr. Fukuyama agreed that his solution might not help reduce toxic content and polarization. “I deplore the toxicity of political discourse in the United States and other democracies today, but I am not willing to try solving the problem by discarding the right to free expression,” he wrote in response to the critics.
When she ran the ethics team at Twitter, Rumman Chowdhury developed prototypes for offering users algorithmic choice. But her research revealed that many users found it difficult to envision having control of their feed. “The paradigm of social media that we have is not one in which people understand having agency,” said Ms. Chowdhury, whose Twitter team was let go when Mr. Musk took over. She went on to found the nonprofit Humane Intelligence.
But just because people don’t know they want it doesn’t mean that algorithmic choice is not important. I didn’t know I wanted an iPhone until I saw one.
And with another national election looming and disinformation circulating wildly, I believe that asking people to choose disinformation, rather than to accept it passively, would make a difference. If users had to actively pick an antivaccine news feed, and could see that other feeds were available, the existence of that choice would itself be educational.
Algorithms make our choices invisible. Making those choices visible is an important step in building a healthy information ecosystem.
Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.
But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to narrow in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, it may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.
There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our pre-existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.
Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.
Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.
Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define, but that largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”
The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand.
In my ideal world, I would like to be able to choose my feed from a list of providers. I would love to have a feed put together by librarians, who are already expert at curating information, or from my favorite news outlet. And I’d like to be able to compare what a feed curated by the American Civil Liberties Union looks like compared with one curated by the Heritage Foundation. Or maybe I just want to use my friend Susie’s curation, because she has great taste.
There is a growing worldwide movement to provide us with some algorithmic choice — from a Belgrade group demanding that recommender algorithms should be a “public good” to European regulators who are demanding that platforms give users at least one algorithm option that is not based on tracking user behavior.
One of the first places to start making this vision a reality is a social network called Bluesky, which recently opened up its data to allow developers to build custom algorithms. The company, which is financially supported by the Twitter founder Jack Dorsey, said that 20 percent of its 265,000 users are using custom feeds.
On my Bluesky feed, I often toggle between feeds called Tech News, Cute Animal Pics, PositiviFeed and my favorite, Home+, which includes “interesting content from your extended social circles.” Some of them were built by Bluesky developers, and others were created by outside developers. All I have to do is go to My Feeds and select from a wide menu of choices, ranging from MLB+, a feed about baseball, to #Disability, which picks up keywords related to disability, to UA fundraising, a feed of Ukrainian fund-raising posts.
Choosing from this wide selection of feeds frees me from having to decide whom to follow. Switching social networks is less exhausting — I don’t have to rebuild my Twitter network. Instead, I can just dip my toes into already curated feeds that introduce me to new people and topics.
I forgot to say this in my original comment: the extremely frustrating part is that they did protest. They shut down the sub on Reddit. But they won’t do the one thing that would actually hurt Reddit: moving to its alternatives, and/or helping those who already moved.
I get it, it’s just Reddit; going to the website isn’t like crossing a real picket line that workers formed to get better wages. But it’s still frustrating.
It’s kind of frustrating that the Reddit and Discord communities don’t want anything to do with us. It would help numbers-wise, but no, they stay on Discord, which is bad for searching, and on Reddit, which we tried to leave in an exodus. Now all the content is on Reddit and all we have is reposts, which encourages people to go crawling back to Reddit if they want content and aren’t hardcore principled about staying off.
Which includes me: I keep trying to be active on the Fediverse and to help it grow, but there are lots of niche communities there that don’t exist here. I’d make them myself but I’m very much not cut out for a moderation role.
I know I should make more content myself. However, I’m not seeing much drama in my hobbies right now despite heavily immersing myself in them. I could actively look for drama in hobbies I don’t participate in to write up here, but honestly I don’t feel like it.
I’m not into Lolita fashion myself, but this seems similar to a discussion I saw about Dungeons & Dragons. Why spend all that time homebrewing a bunch of new systems and rules when there’s another TTRPG that already has systems and rules for those situations? Because some people’s enjoyment comes from tinkering with an existing product and adding onto it. They also get a starting point instead of having to build a whole new TTRPG, or design and sew an entire new dress, themselves. For some, it’s easier to see a pretty dress and alter it to fit than it is to think of a design for a new dress you’ll love as much. It’s easier to start a Dungeons & Dragons campaign and tack extra rules onto it than to come up with a whole new campaign or rule set yourself, or to rebuild the same campaign idea with the other TTRPG’s world as a basis.
Thank you for crediting the original author. Here is a link to the original post that will not give Reddit any hits.
Remember the eleven separate times (and two HobbyDrama posts) where people leaked classified military details because they didn’t like the inaccuracy of the military vehicles in the video game War Thunder? It happened again.
Probably going to do a writeup on this, including both this new leak and the old ones that the two HobbyDrama posts didn’t cover. Right now it’s breaking drama so I need to wait.