Thursday, January 9, 2025

Is the algorithm the terrorist?

[Image: Cybertruck exploding]
An echo in the echo tunnel
After the horrific event in New Orleans in the early hours of New Year's Day, along with its bizarre echo in front of the Las Vegas Trump International, and the theories floated around the radicalization of Shamsud-Din Jabbar (a native Texan, army veteran, realtor--one of us, not one of "them"), a question troubled my mind: how is it that social media helps madmen find their brethren while isolating the sane?

In other words:

Is the algorithm the real terrorist?

Is the answer to my first question as simple as this: because the sane do not engage in the constant fruitless and exhausting disputes that social media encourages and promotes? Designed to be fruitless, mind you, because consensus in debate would clear both combatants and rubberneckers from the field. Because the sane, or at least the untroubled, get on with their lives and are therefore less profitable to the tech overseers and their uber-capitalist plantation owners who want to enslave our eyeballs?

And if so, should coders be considered in loco parentis and held legally responsible for the algorithm's crimes of inciting violence? Are there grounds for a class-action lawsuit against these bad actors that could bring them to heel? Would such an action be wise?

Be aware that there is already precedent. Twenty-five survivors of the racist mass shooting in Buffalo committed by Payton Gendron have filed suit against Reddit and YouTube. Gendron anticipated this question in his manifesto:

Where did you get your current beliefs? 

Mostly from the internet. There was little to no influence on my personal beliefs by people I met in person. I read multiple sources of information from all ideologies and decided that my current one is most correct.

If social media's main product is its users' eyeballs, can it be prosecuted in civil or criminal court for producing and marketing a corrupted and dangerous product, i.e., terrorists? The lawsuit against Reddit and YouTube characterizes their algorithms as "defective products." Terrorism as a bug, not a feature. But is it? You can read the article here on NPR.

I don't normally use this space as a political forum, and I don't intend to now, but I will take the soapbox in a heartbeat against my favorite nemesis of late: artificial intelligence, in this case the recommendation algorithms that govern the net, because I've learned a new term today:

Algorithmic radicalization.

We've all been warned about walling ourselves off in online echo chambers, opinion bubbles. The warnings are always worded as if constructing these hives were a conscious decision that we make. After all, whether online or off, we tend to bond with the like-minded, don't we? 

But "echo chamber" is a misnomer, and a dangerous one. It brings to mind a cave where we're cozy and comfy, with our favorite pillow and some snacks: a static situation, unfortunate, perhaps, but understandable. That's not where we're living. Instead, recommendation algorithms have created echo tunnels, where we are walled in on every side, where we need to keep burrowing down to find the safety which social media promises but never actually affords us.

Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, until they develop radicalized extremist political views. It doesn't happen to everyone, of course. Not every kid on the playground takes the dope offered by their friendly neighborhood dealer. But we have to consider those who do, and become addicted. Like all addicts, they need more dope to satisfy their cravings. And the dope in this case is likes.
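
A toy sketch of that ratchet, in Python, with every number invented (this illustrates the feedback loop, not any platform's actual code): a recommender that always serves content a notch more extreme than what the user already consumes, and a user whose baseline drifts toward whatever gets served.

import random

random.seed(42)

def predicted_engagement(extremity, user_baseline):
    # In this toy model, engagement peaks slightly *beyond* what the user
    # already consumes, so the "best" pick is always a bit more extreme.
    return -abs(extremity - (user_baseline + 0.05)) + random.uniform(0, 0.001)

catalog = [i / 100 for i in range(101)]   # content "extremity" from 0.00 to 1.00
user_baseline = 0.10                      # the user starts near the mainstream

for step in range(10):
    # The recommender serves the item with the highest predicted engagement...
    pick = max(catalog, key=lambda e: predicted_engagement(e, user_baseline))
    # ...and consuming it pulls the user's baseline toward what they just saw.
    user_baseline = 0.7 * user_baseline + 0.3 * pick
    print(f"step {step}: recommended {pick:.2f}, user baseline {user_baseline:.2f}")

Run it and the recommended extremity climbs step after step. Nobody at the controls ever decides to radicalize anyone; the escalation is emergent.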

A series of Wall Street Journal articles dubbed the Facebook Files, based on documents fed to the paper by corporate whistleblowers, exposed the internal impetus behind Facebook's tinkering with the algorithm:

Engagement Uber Alles.

Keach Hagey outlined the soma in a WSJ podcast titled The Outrage Algorithm:

 "So that's like a really different way of filtering what you see. It's not based on what you would actually most like to see or what's most relevant to you or what's highest quality. It's what will get the most comments. And the result of that, it turns out that what gets the most comments is really divisive, outrageous stuff, especially stuff that provokes political anger."

Facebook unironically calls this "meaningful social interaction."
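
Here's a minimal sketch of the filtering change Hagey describes, with invented posts and weights: rank the feed by predicted comment count instead of quality or relevance, and watch what floats to the top.

posts = [
    {"title": "Local bake sale raises $400",   "quality": 0.9, "divisiveness": 0.10},
    {"title": "Cute dog learns to skateboard", "quality": 0.8, "divisiveness": 0.05},
    {"title": "THEY are coming for YOUR town", "quality": 0.2, "divisiveness": 0.95},
]

def predicted_comments(post):
    # Invented weights: in this sketch, outrage drives comments
    # far harder than quality does.
    return 1.0 * post["quality"] + 8.0 * post["divisiveness"]

feed = sorted(posts, key=predicted_comments, reverse=True)
for post in feed:
    print(f'{predicted_comments(post):5.2f}  {post["title"]}')
# The divisive post tops the feed despite being the lowest-quality item.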

And what engages users, gets them hooked, even better than sex? Say it with me: 

Rage.

And what's the perfect fuel for rage? Misinformation. It engages those who spread it, those who believe it, and those who fight it. Three in one.

Oh, FB's Civic Team came up with ways to stop misinformation, stop it cold. They tested them. They worked. Zuckerberg vetoed them. In 2020 the Civic Team was disbanded. The team's former head, Samidh Chakrabarti, tweeted this:

"When you treat all engagement equally (irrespective of content), increasing feed engagement will invariably amplify misinfo, sensationalism, hate, and other societal harms. I wish this weren't the case, but it is so predictable that it is perhaps a natural law of social networks."

If you'd like to hear the entire episode, go here.

And why a natural law?

Because social media is zeroed in on one target. The algorithmic gold mine is the limbic brain, the tweenbrain, the amygdala, the same area that lights up when we keep jabbing that joystick in a video game, dopamine beating against the brain like rain, lulling it to sleep. The drumbeat of fight-or-flight dulling our ability to respond, desensitizing us to empathy and pain.

According to Anna Lembke's book, Dopamine Nation: Finding Balance in the Age of Indulgence:

"Then there's novelty. Dopamine is triggered by our brain's search-and-explore functions, telling us, "Hey, pay attention to this, something new has come along." Add to that the artificial intelligence algorithms that learn what we've liked before and suggest new things that are similar but not exactly the same, and we're off and running."

Whenever a user receives a like, share, or a comment on a post, they get a hit of dopamine. And by strange coincidence those three things are the metrics for engagement on Facebook. Engagement equals ad dollars. So social media does everything it can to keep those dopamine hits coming.
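
On the back of an envelope, a score like that is just a weighted sum. The weights below are placeholders, though the Facebook Files reported that comments and reshares counted for far more than a simple like:

from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    comments: int
    shares: int

def engagement_score(p: Post, w_like=1, w_comment=30, w_share=30) -> int:
    # Placeholder weights, not Facebook's actual numbers.
    return w_like * p.likes + w_comment * p.comments + w_share * p.shares

calm_post    = Post(likes=200, comments=5,  shares=2)    # pleasant, agreeable
outrage_post = Post(likes=50,  comments=80, shares=40)   # divisive, argument-bait

print(engagement_score(calm_post))     # 200 + 150 + 60   = 410
print(engagement_score(outrage_post))  # 50 + 2400 + 1200 = 3650

Under weights like these, the post that starts a fight is worth nearly nine times the post that makes people smile. That's the machine the dopamine feeds.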

Excessive levels of dopamine can lead to stress, anxiety, poor judgment, insomnia, aggression, even hallucinations. Which may lead to driving down Bourbon St.--on the sidewalk, on a busy night. If you want a picture of the terrorist's accomplices, I'm able today to release one:

Stay vigilant.

Of course terror existed long before social media. But its embrace throughout most of history was voluntary, more or less. It is now being forced upon its practitioners all unaware, all in pursuit of the monetization of attention. Brainwashing techniques suitable for fashioning suicide bombers have been adapted by recommendation algorithms to drive up advertising dollars. Musk and Zuck are cashing in on terrorism.

(And now Zuck has fired his fact-checkers in the name of "free speech": the Musk Model. Bring on the misinformation. Ka-ching!)

(And now they've found evidence that Matthew Livelsberger, the soldier who blew up the Tesla Cybertruck outside the Trump Hotel in Las Vegas, had an accomplice--ChatGPT, which helped him in the planning. So, yes--the algorithm is the terrorist.)



So leave a comment already

Thanks a million!