They don’t have to, algorithms do whatever they are designed to do. Long division is an algorithm.
Profit motives are the issue here.
Algorithms [based on engagement]
Yeah, the narrowing of the word “algorithm” to only mean “social media recommendation algorithms” is getting on my nerves.
It’s the only time normies encounter the word.
How do you feel about crypto?
Cryptography is pretty useful
At this point, “crypto” is its own word short for “cryptocurrency”, and not for “cryptography” in the broader sense. It’s unfortunate, but that’s how people use it.
That isn’t even generally true. Try mentioning crypto on the LKML and see what they think you mean.
Scam from start to finish
I don’t think Superman needs a dog.
I’ve always thought it was so funny when people say tHe aLgOrItHM like it’s a bad word or something. I know they mean social media & marketing, but it’s funny to think that they’re very concerned about something like bubble sort.
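Since bubble sort got name-checked: here’s the kind of harmless “algorithm” the word also covers, as a toy Python sketch (not anyone’s production code, obviously):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order elements until sorted."""
    items = list(items)  # work on a copy
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):  # the tail is already sorted after pass i
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is sorted; stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Nothing to be concerned about, unless you care deeply about O(n²) runtime.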
Wasn’t this literally the shady research that Facebook got caught doing with Cambridge Analytica? Specifically, tweaking a user’s feed to be more negative resulted in that user posting more negative things themselves and engaging more overall.
I wonder exactly how much of Hawaii Zuckerberg has to own before people start to question what they’re getting from Facebook.
Yep!
Facebook figured out how to monetize trolling.
Over 10 years later, it’s destroyed society, but made them a lot of money.
The old thread I posted this in was deleted, but I wrote this:
Okay so hear me out. I have this pet theory that might explain some of the divide between genders, but also political parties, causing paralysis which ultimately might lead to humanity’s extinction. Forgive me if I’m stating the obvious.
I’m going to set up two axioms to arrive at an extrapolated conclusion.
One: Human psychology tends to ascribe more weight to negative things than positive things in the short term. In the long term this generally balances out, but in the short term it’s more prudent in a biological sense to pay attention to the rustling in the bushes than the berries you might pick from them. This is known as the negativity bias.
Two: The modern gatekeepers of social interaction, Big Tech, employ blind algorithms that attempt to steer your attention toward spending more time on their platforms. These companies are the arbiters of the content we experience daily, and what you do and don’t see is mostly at their discretion. The techniques they employ are, in simple terms, designed to provoke what they call ‘engagement’. They do this because, at the end of the day, FAANG have not only a financial interest but a fiduciary duty to their shareholders to sell advertisements. The more they can engage you, the more ads they can sell. They run live A/B tests, divide people into cohorts, and poke and prod them with psychological techniques to glue their eyeballs to ads.
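The cohort-splitting described above is standard A/B-testing machinery. A minimal sketch of how deterministic hash-based assignment typically works; the experiment and cohort names here are made up for illustration, not taken from any platform:

```python
import hashlib

def assign_cohort(user_id: str, experiment: str, cohorts: list) -> str:
    """Deterministically bucket a user into one cohort of an experiment.

    Hashing (experiment, user_id) keeps the assignment stable across
    sessions without storing any state, and different experiments get
    independent splits of the same user base.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(cohorts)
    return cohorts[bucket]

# Hypothetical experiment comparing two feed-ranking variants.
print(assign_cohort("user-42", "feed_ranking_v3", ["control", "variant"]))
```

Each cohort then gets a different treatment, and whichever variant drives more engagement wins, regardless of what it does to the people in it.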
Extrapolated conclusion: These companies have a financial and legally binding interest to divide the population against itself, obstructing politics and social interaction to the point where we might not be able to achieve any of the goals that we need to reach to prevent oblivion.
Thank you for coming to my TED Talk.
I don’t even think this is controversial in any way, in fact I used to assume this was just common knowledge after Cambridge Analytica…
I deleted, as in permanently, totally deleted my FB presence when that came out… but everyone else I explained … basically what you’ve just explained … to, thought I was insane or overreacting and paranoid.
…
It’s simple.
Engagement, meaning usage and time on platform, is what’s being optimized for.
What drives these things most effectively?
Hatred, outrage, extremely offensive and divisive things.
…
… And they know that they can, through exposing people to such things, make said people more extreme and hateful and anxious and depressed.
So… from an ‘optimize for platform usage’ standpoint… perfect! It’s a reinforcing loop!
…
Zuckerberg stated at one point that his goal with Facebook was to be able to profile (and manipulate, but he didn’t say that part) users so well that he’d be able to predict what they’d post next.
He really did/does just view all social interaction as a very complex problem that can be ‘solved’, the way a physics question can be solved, by building a predictive model.
They literally know that their business model is to ruin social discourse, ruin people’s mental health and lives, and polarize society.
It should not be surprising in any way that, well, society is now extremely polarized and mentally ill.
For a long time Facebook counted an angry react as equal to five likes for measuring engagement. It’s very much intentional.
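That weighting works out to a simple weighted sum over reaction counts. A hypothetical sketch: only the 5× angry figure comes from the comment above, the other weights are illustrative:

```python
# Hypothetical reaction weights. The 5x "angry" multiplier is the figure
# from the comment above; the rest are illustrative defaults.
WEIGHTS = {"like": 1, "love": 1, "haha": 1, "sad": 1, "angry": 5}

def engagement_score(reactions: dict) -> int:
    """Weighted engagement: sum of count * weight per reaction type."""
    return sum(WEIGHTS.get(kind, 1) * count for kind, count in reactions.items())

calm_post = {"like": 100, "love": 20}
angry_post = {"like": 10, "angry": 25}
print(engagement_score(calm_post))   # 120
print(engagement_score(angry_post))  # 135: far fewer reactions, higher score
```

Under a scheme like this, the ranking system doesn’t need to “want” outrage; content that angers 35 people simply outscores content that pleases 120.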
In this context, “algorithm” is just a convenient word hiding the intentional right-wing radicalization of users to push them towards pro-business policies, so can we please call this out more often?
I’m quite tired of “algorithm” standing in for the intentions behind the owners who write and maintain it.
It was also an “algorithm” that inflated rent around the country, right?
An algorithm, yes. Written with the intention of inflating rent.
It’s not an accident. Algorithm my hair-hole
deleted by creator
What I don’t get about this is why, in this day and age, with all the analytics tools we have, companies continue to just happily pay for simple eyeball exposure.
The only time they seem to have any pause at all on this model is if people post screenshots of ads for their products next to posts literally praising Nazis.
These so-called AIs (LLMs) can learn to tell the difference between positive/happy/uplifting posts, neutral posts, and angry/sad/disturbing posts. The advertisers should be asking for their products to be featured next to the first and second groups of posts.
People engage based on anger, sure. They click posts and reply and whatnot. But do they click the ad next to a post that pisses them off and then buy the product?
Or is this purely a subconscious intrusion effort? Do the advertisers just want their products in front of eyeballs regardless of what’s around the ad? It seems like the answer is “no” when they’re called out. But maybe it’s “yes” if they can get away with it?
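What the commenter is asking for is essentially a brand-safety gate before ad placement. A toy sketch of the control flow, with a keyword lookup standing in for a real sentiment model (the cue words are invented, not from any ad platform):

```python
# Toy brand-safety gate. A real system would use a trained classifier,
# but the flow is the same: only place ads next to posts that are not
# classified as negative/disturbing.
NEGATIVE_CUES = {"hate", "outrage", "disgusting", "scam"}

def classify(post: str) -> str:
    """Crude stand-in for sentiment classification via keyword lookup."""
    words = set(post.lower().split())
    return "negative" if words & NEGATIVE_CUES else "positive_or_neutral"

def ad_safe(post: str) -> bool:
    """Only allow ad placement next to positive or neutral posts."""
    return classify(post) != "negative"

print(ad_safe("What a lovely day at the beach"))  # True
print(ad_safe("This is a disgusting scam"))       # False
```

The technology to do this filtering clearly exists; the open question in the thread is whether advertisers actually demand it when nobody is posting screenshots.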
I disagree with OP’s editorialized title.
As an avid video gamer, I find myself constantly encountering subtle and overt bigotry in most online games I play. I will always call them out for it, no matter how much whooping it incites from kids just eating their popcorn and enjoying the fight.
Ignoring them is how you let the Andrew Tates of the world win, because they’re certainly not taking the high road by remaining silent about their beliefs.
They did a study around the 2020 elections and found the following to work with trolls:
Respond once with the facts (if you must), and then walk away. I’ve found that Lemmy doesn’t need that most of the time; just downvoting seems to work. But if you’re on the place that shall not be named, this works.
I wish lemmy had the feature where you can mute all replies to a comment.
Also, it would be very much appreciated if you could share the study; it sounds interesting.
Yes, that and the tagging of users.
Also you can tag users.
Thanks, I didn’t realize that worked here. But I meant tagging a user as awesome or a troll or whatever. That way, when you kind of remember seeing the name and they seem like they’re trolling, I can tell right away if I’ve had previous interactions with them. RES was awesome for that.
Ah, that makes sense. That would be nifty, because my block list these days is based on “this person said fucked up shit and I vaguely remember them saying more fucked up shit a while ago but can’t fully remember if it’s them”.
Exactly that, I just don’t have the bandwidth.
With Boost for Lemmy you can tag people, at least.
Sorry, I see you replied before I edited my comment. Don’t feel obligated, as it isn’t that important, but if you have the original study handy I’d appreciate a link, because it sounds interesting.
That was years ago. I’m sure I saved it on Reddit, but I haven’t been back there since I switched. Sorry about that. It was a real study though; they were trying to figure out all of the social media trolling from Cambridge Analytica and all that. It might have been even earlier.
Based and reposted
I think the word is ragebait
Every social media company’s content algorithm should be open source, or at the very least a government agency should have the power to audit and enforce against the code.
…as opposed to platforms like Lemmy, where the only political ideologies you’ll find are “leftists” who, when asked what they even believe, respond with “what are you, a cop?”
are you? 👮‍♂️ 👀
If you’re a cop, you have to tell me, man! Like legally, you can’t arrest me without telling me you’re a cop!
I’ve been participating in Threads (yeah, I know, should be ashamed) and I’m unfortunately a sucker for some of the ragebait, especially political.
Guess what Threads pushes at me. A lot of the dumbest ragebait. Not people that actually want to have a conversation. My fault for being a sucker, but the algorithms work.
Doesn’t really matter, I’m shadowbanned. Pissed off too many republican propagandists by refuting them, so as usual, the “report” button is their remedy.
Nobody likes Muhammad ibn Musa al-Khwarizmi anymore, I guess he had a good run since around 820 CE.