12 results tagged Enshitification

Here lies the internet, murdered by generative AI
https://www.theintrinsicperspective.com/p/here-lies-the-internet-murdered-by

  • Artificial intelligence
  • Enshitification

Here lies the internet, murdered by generative AI

Corruption everywhere, even in YouTube's kids content

Erik Hoel, Feb 27, 2024


Art for The Intrinsic Perspective is by Alexander Naughton

The amount of AI-generated content is beginning to overwhelm the internet. Or maybe a better term is pollute. Pollute its searches, its pages, its feeds, everywhere you look. I’ve been predicting that generative AI would have pernicious effects on our culture since 2019, but now everyone can feel it. Back then I called it the coming “semantic apocalypse.” Well, the semantic apocalypse is here, and you’re being affected by it, even if you don’t know it. A minor personal example: last year I published a nonfiction book, The World Behind the World, and now on Amazon I find this.


What, exactly, are these “workbooks” for my book? AI pollution. Synthetic trash heaps floating in the online ocean. The authors aren’t real people, some asshole just fed the manuscript into an AI and didn’t check when it spit out nonsensical summaries. But it doesn’t matter, does it? A poor sod will click on the $9.99 purchase one day, and that’s all that’s needed for this scam to be profitable since the process is now entirely automatable and costs only a few cents. Pretty much all published authors are affected by similar scams, or will be soon.

Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump. Google search? Its results often lead with fake AI-generated images amid the real thing. Post on Twitter? Get replies from bots selling porn. But that’s just the obvious stuff. Look closely at the replies to any trending tweet and you’ll find dozens of AI-written summaries in response, cheery Wikipedia-style repeats of the original post, all just to farm engagement. AI models on Instagram accumulate hundreds of thousands of subscribers and people openly shill their services for creating them. AI musicians fill up YouTube and Spotify. Scientific papers are being AI-generated. AI images mix into historical research. And that’s not even mentioning the personal impact: from now on, every single woman who is a public figure will have to deal with the fact that deepfake porn of her is likely to be made. That’s insane.

And rather than this being pure skullduggery, people and institutions are willing to embrace low-quality AI-generated content, trying to shift the Overton window to make things like this acceptable:


That’s not hardball capitalism. That’s polluting our culture for your own minor profit. It’s not morally legitimate for the exact same reasons that polluting a river for a competitive edge is not legitimate. Yet name-brand media outlets are embracing generative AI just like SEO-spammers are, for the same reasons.

E.g., investigative work at Futurism caught Sports Illustrated red-handed using AI-generated articles written by fake writers. Meet Drew Ortiz.


He doesn’t exist. That face is an AI-generated portrait, which was previously listed for sale on a website. As Futurism describes:

Ortiz isn't the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content…

"At the bottom [of the page] there would be a photo of a person and some fake description of them like, 'oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.' Stuff like that," they continued. "It's just crazy."

This isn’t what everyone feared, which was AI replacing humans by being better—it’s replacing them because AI is so much cheaper. Sports Illustrated was not producing human-quality content with these methods, but it was still profitable.

The AI authors' writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball "can be a little tricky to get into, especially without an actual ball to practice with."

Sports Illustrated, in a classy move, deleted all the evidence. Drew was replaced by Sora Tanaka, bearing a face also listed for sale on the same website with the description of a “joyful asian young-adult female with long brown hair and brown eyes.”


Given that even prestigious outlets like The Guardian refuse to put any clear limits on their use of AI, if you notice odd turns of phrase or low-quality articles, the likelihood that they’re written by an AI, or with AI-assistance, is now high.

Sadly, the people affected the most by generative AI are the ones who can’t defend themselves. Because they don’t even know what AI is. Yet we’ve abandoned them to swim in polluted information currents. I’m talking, unfortunately, about toddlers. Because let me introduce you to…

the hell that is AI-generated children’s YouTube content.

YouTube for kids is quickly becoming a stream of synthetic content. Much of it now consists of wooden digital characters interacting in short nonsensical clips without continuity or purpose. Toddlers are forced to sit and watch this runoff because no one is paying attention. And the toddlers themselves can’t discern that characters come and go, that the plots don’t make sense, and that it’s all just incoherent dream-slop. The titles don’t match the actual content, and the titles are all that parents likely check, because they grew up in a culture where if a YouTube video said BABY LEARNING VIDEOS and had a million views it was likely okay. Now, some of the nonsense AI-generated videos aimed at toddlers have tens of millions of views.

Here’s a behind-the-scenes video on a single channel that made 1.2 million dollars via AI-generated “educational content” aimed at toddlers.

As the video says:

These kids, when they watch these kind of videos, they watch them over and over and over again.

They aren’t confessing. They’re bragging. And the particular channel they focus on isn’t even the worst offender—at least that channel’s content mostly matches the subheadings and titles, even if the videos are jerky, strange, off-putting, repetitious, clearly inhuman. Other channels, which are also obviously AI-generated, get worse and worse. Here’s a “kids’ education” channel that is AI-generated (took about one minute to find) with 11.7 million subscribers.

They don’t use proper English, and after quickly going through some shapes as the initial video title promises (albeit doing it in a way that makes you feel like you’re going insane), the rest of the video devolves into randomly-generated rote tasks, eerie interactions, more incorrect grammar, and uncanny musical interludes of songs that serve no purpose but to pad the time. It is the creation of an alien mind.

Here’s an example of the next frontier: completely start-to-finish AI-generated music videos for toddlers. Below is a how-to video for these new techniques. The result? Nightmarish parrots with twisted double-beaks and four mutated eyes singing artificial howls from beyond. Click and behold (or don’t, if you want to sleep tonight).

All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume. There’s no other word but dystopian. Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc.) that the human mind actually needs? We’re conducting this experiment live. For the first time in history, developing brains are being fed choppy, low-grade, cheaply-produced synthetic data created en masse by generative AI, instead of real human culture. No one knows the effects, and no one appears to care. Especially not the companies, because…

OpenAI has happily allowed pollution.

Why blame them, specifically? Well, first of all, their massive impact—e.g., most of the kids videos are built from scripts generated by ChatGPT. And more generally, what AI capabilities are considered okay to deploy has long been a standard set by OpenAI. Despite their supposed safety focus, OpenAI failed to foresee that its creations would thoroughly pollute the internet across all platforms and services. You can see this failure in how they assessed potential negative outcomes in the announcement of GPT-2 on their blog, back in 2019. While they did warn that these models could have serious long-term consequences for the information ecosystem, the specifics they were concerned with were things like:

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

This may sound kind of in line with what’s happened, but if you read further, it becomes clear that what they meant by “faked content” was mainly malicious actors promoting misinformation, or the same shadowy malicious actors using AI to phish for passwords, etc.

These turned out to be only minor concerns compared to AI’s cultural pollution. OpenAI kept talking about “actors” when they should have been talking about “users.” Because it turns out, all AI-generated content is fake! Or it’s all kind of fake. AI-written websites, now sprouting up like an unstoppable invasive species, don’t necessarily have an intent to mislead; it’s just that AI content consists of low-effort banalities generated for pennies, so you can SEO-spam and play all sorts of manipulative games around search to attract eyeballs and ad revenue.

That is, the OpenAI team didn’t stop to think that regular users generating mounds of AI content on the internet would have negative effects very similar to those of deliberate misuse by malicious actors. Because there’s no clear distinction! The fact that OpenAI was both honestly worried about negative effects, and at the same time didn’t predict the enshittification of the internet they spearheaded, should make us extremely worried that they will continue to miss the negative downstream effects of their increasingly intelligent models. They failed to foresee the floating mounds of clickbait garbage, the synthetic info-trash cities, all built to collect clicks and eyeballs—even from innocent children who don’t know any better. And they won’t do anything to stop it, because…

AI pollution is a tragedy of the commons.

This term, "tragedy of the commons,” originated in the rising environmentalism of the 20th century, and would lead to many of the regulations that keep our cities free of smog and our rivers clean. Garrett Hardin, an ecologist and biologist, coined it in an article in [Science](https://math.uchicago.edu/~shmuel/Modeling/Hardin, Tragedy of the Commons.pdf) in 1968. The article is still instructively relevant. Hardin wrote:

An implicit and almost universal assumption of discussions published in professional and semipopular scientific journals is that the problem under discussion has a technical solution…

He goes on to discuss several problems for which there are no technical solutions, since rational actors will drive the system toward destruction via competition:

The tragedy of the commons develops in this way. Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

One central example of Hardin’s became instrumental to the environmental movement.

… the tragedy of the commons reappears in problems of pollution. Here it is not a question of taking something out of the commons, but of putting something in—sewage, or chemical, radioactive, and heat wastes into water; noxious and dangerous fumes into the air; and distracting and unpleasant advertising signs into the line of sight. The calculations of utility are much the same as before. The rational man finds that his share of the cost of the wastes he discharges into the commons is less than the cost of purifying his wastes before releasing them. Since this is true for everyone, we are locked into a system of "fouling our own nest," so long as we behave only as independent, rational, free-enterprisers.

We are currently fouling our own nests. Since the internet economy runs on eyeballs and clicks, the new ability of anyone, anywhere, to easily generate infinite low-quality content via AI is now remorselessly generating tragedy.
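Hardin’s cost calculation can be made concrete with a toy model. Below is a minimal sketch (the numbers are invented for illustration, not from Hardin or this essay) of why polluting dominates for each individual producer even though universal pollution leaves everyone worse off:

# Toy model of Hardin's pollution commons: each of N producers either
# "purifies" (pays the full treatment cost privately) or "pollutes"
# (dumps waste whose damage is shared equally by all N producers).
# All numbers are illustrative assumptions.

N = 100             # producers sharing the commons
REVENUE = 10.0      # income per producer per period
PURIFY_COST = 3.0   # private cost of treating your own waste
DAMAGE = 5.0        # total damage done by one producer's untreated waste

def payoff(i_pollute: bool, others_polluting: int) -> float:
    """One producer's payoff, given how many of the other N-1 pollute."""
    my_damage = DAMAGE if i_pollute else 0.0
    shared_damage = (others_polluting * DAMAGE + my_damage) / N
    my_cost = 0.0 if i_pollute else PURIFY_COST
    return REVENUE - my_cost - shared_damage

for others in (0, 50, 99):
    gain = payoff(True, others) - payoff(False, others)
    print(f"others polluting={others:3d}: polluting pays {gain:+.2f} extra")

# Polluting always pays PURIFY_COST - DAMAGE/N = 3.00 - 0.05 = +2.95 extra,
# no matter what anyone else does. Yet if all 100 pollute, each nets
# 10 - 5 = 5, versus 10 - 3 = 7 if all purify: individually rational,
# collectively ruinous.

Swap “waste” for AI-generated content and “the commons” for the internet’s information ecosystem, and the essay’s argument is the same calculation.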

The solution, as Hardin noted, isn’t technical. You can’t detect AI outputs reliably anyway (another initial promise that OpenAI abandoned). The companies won’t self-regulate, given their massive financial incentives. We need the equivalent of a Clean Air Act: a Clean Internet Act. We can’t just sit by and let human culture end up buried.

Luckily we’re on the cusp of all that incredibly futuristic technology promised by AI. Any day now, our GDP will start to rocket forward. In fact, soon we’ll cure all disease, even aging itself, and have robot butlers and Universal Basic Income and high-definition personalized entertainment. Who cares if toddlers had to watch inhuman runoff for a few billion years of viewing-time to make the future happen? It was all worth it. Right? Let’s wait a little bit longer. If we wait just a little longer utopia will surely come.

Permalink
June 20, 2024 at 11:26:04 PM GMT+2

Why the Internet Isn’t Fun Anymore | The New Yorker
https://www.newyorker.com/culture/infinite-scroll/why-the-internet-isnt-fun-anymore

  • Enshitification
  • Social Network

Why the Internet Isn’t Fun Anymore

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over.

By Kyle Chayka, October 9, 2023


Lately on X, the platform formerly known as Twitter, my timeline is filled with vapid posts orbiting the same few topics like water whirlpooling down a drain. Last week, for instance, the chatter was dominated by talk of Taylor Swift’s romance with the football player Travis Kelce. If you tried to talk about anything else, the platform’s algorithmic feed seemed to sweep you into irrelevance. Users who pay for Elon Musk’s blue-check verification system now dominate the platform, often with far-right-wing commentary and outright disinformation; Musk rewards these users monetarily based on the engagement that their posts drive, regardless of their veracity. The decay of the system is apparent in the spread of fake news and mislabelled videos related to Hamas’s attack on Israel.

Elsewhere online, things are similarly bleak. Instagram’s feed pushes months-old posts and product ads instead of photos from friends. Google search is cluttered with junky results, and S.E.O. hackers have ruined the trick of adding “Reddit” to searches to find human-generated answers. Meanwhile, Facebook’s parent company, Meta, in its latest bid for relevance, is reportedly developing artificial-intelligence chatbots with various “sassy” personalities that will be added to its apps, including a role-playing D. & D. Dungeon Master based on Snoop Dogg. The prospect of interacting with such a character sounds about as appealing as texting with one of those spam bots that asks you if they have the right number.

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over. The precipitous decline of X is the bellwether for a new era of the Internet that simply feels less fun than it used to be. Remember having fun online? It meant stumbling onto a Web site you’d never imagined existed, receiving a meme you hadn’t already seen regurgitated a dozen times, and maybe even playing a little video game in your browser. These experiences don’t seem as readily available now as they were a decade ago. In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems. When those platforms decay, as Twitter has under Elon Musk, there is no other comparable platform in the ecosystem to replace them. A few alternative sites, including Bluesky and Discord, have sought to absorb disaffected Twitter users. But like sproutlings on the rain-forest floor, blocked by the canopy, online spaces that offer fresh experiences lack much room to grow.

One Twitter friend told me, of the platform’s current condition, “I’ve actually experienced quite a lot of grief over it.” It may seem strange to feel such wistfulness about a site that users habitually referred to as a “hellsite.” But I’ve heard the same from many others who once considered Twitter, for all its shortcomings, a vital social landscape. Some of them still tweet regularly, but their messages are less likely to surface in my Swift-heavy feed. Musk recently tweeted that the company’s algorithm “tries to optimize time spent on X” by, say, boosting reply chains and downplaying links that might send people away from the platform. The new paradigm benefits tech-industry “thread guys,” prompt posts in the “what’s your favorite Marvel movie” vein, and single-topic commentators like Derek Guy, who tweets endlessly about menswear. Algorithmic recommendations make already popular accounts and subjects even more so, shutting out the smaller, more magpie-ish voices that made the old version of Twitter such a lively destination. (Guy, meanwhile, has received so much algorithmic promotion under Musk that he accumulated more than half a million followers.)

The Internet today feels emptier, like an echoing hallway, even as it is filled with more content than ever. It also feels less casually informative. Twitter in its heyday was a source of real-time information, the first place to catch wind of developments that only later were reported in the press. Blog posts and TV news channels aggregated tweets to demonstrate prevailing cultural trends or debates. Today, they do the same with TikTok posts—see the many local-news reports of dangerous and possibly fake “TikTok trends”—but the TikTok feed actively dampens news and political content, in part because its parent company is beholden to the Chinese government’s censorship policies. Instead, the app pushes us to scroll through another dozen videos of cooking demonstrations or funny animals. In the guise of fostering social community and user-generated creativity, it impedes direct interaction and discovery.

According to Eleanor Stern, a TikTok video essayist with nearly a hundred thousand followers, part of the problem is that social media is more hierarchical than it used to be. “There’s this divide that wasn’t there before, between audiences and creators,” Stern said. The platforms that have the most traction with young users today—YouTube, TikTok, and Twitch—function like broadcast stations, with one creator posting a video for her millions of followers; what the followers have to say to one another doesn’t matter the way it did on the old Facebook or Twitter. Social media “used to be more of a place for conversation and reciprocity,” Stern said. Now conversation isn’t strictly necessary, only watching and listening.

Posting on social media might be a less casual act these days, as well, because we’ve seen the ramifications of blurring the border between physical and digital lives. Instagram ushered in the age of self-commodification online—it was the platform of the selfie—but TikTok and Twitch have turbocharged it. Selfies are no longer enough; video-based platforms showcase your body, your speech and mannerisms, and the room you’re in, perhaps even in real time. Everyone is forced to perform the role of an influencer. The barrier to entry is higher and the pressure to conform stronger. It’s no surprise, in this environment, that fewer people take the risk of posting and more settle into roles as passive consumers.

The patterns of life offscreen affect the makeup of the digital world, too. Having fun online was something that we used to do while idling in office jobs: stuck in front of computers all day, we had to find something on our screens to fill the down time. An earlier generation of blogs such as the Awl and Gawker seemed designed for aimless Internet surfing, delivering intermittent gossip, amusing videos, and personal essays curated by editors with quirky and individuated tastes. (When the Awl closed, in 2017, Jia Tolentino lamented the demise of “online freedom and fun.”) Now, in the aftermath of the pandemic, amid ongoing work-from-home policies, office workers are less tethered to their computers, and perhaps thus less inclined to chase likes on social media. They can walk away from their desks and take care of their children, walk their dog, or put their laundry in. This might have a salutary effect on individuals, but it means that fewer Internet-obsessed people are furiously creating posts for the rest of us to consume. The user growth rate of social platforms over all has slowed over the past several years; according to one estimate, it is down to 2.4 per cent in 2023.

That earlier generation of blogs once performed the task of aggregating news and stories from across the Internet. For a while, it seemed as though social-media feeds could fulfill that same function. Now it’s clear that the tech companies have little interest in directing users to material outside of their feeds. According to Axios, the top news and media sites have seen “organic referrals” from social media drop by more than half over the past three years. As of last week, X no longer displays the headlines for articles that users link to. The decline in referral traffic disrupts media business models, further degrading the quality of original content online. The proliferation of cheap, instant A.I.-generated content promises to make the problem worse.

Choire Sicha, the co-founder of the Awl and now an editor at New York, told me that he traces the seeds of social media’s degradation back a decade. “If I had a time machine I’d go back and assassinate 2014,” he said. That was the year of viral phenomena such as Gamergate, when a digital mob of disaffected video-game fans targeted journalists and game developers on social media; Ellen DeGeneres’s selfie with a gaggle of celebrities at the Oscars, which got retweeted millions of times; and the brief, wondrous fame of Alex, a random teen retail worker from Texas who won attention for his boy-next-door appearance. In those events, we can see some of the nascent forces that would solidify in subsequent years: the tyranny of the loudest voices; the entrenchment of traditional fame on new platforms; the looming emptiness of the content that gets most furiously shared and promoted. But at that point they still seemed like exceptions rather than the rule.

I have been trying to recall the times I’ve had fun online unencumbered by anonymous trolling, automated recommendations, or runaway monetization schemes. It was a long time ago, before social networks became the dominant highways of the Internet. What comes to mind is a Web site called Orisinal that hosted games made with Flash, the late interactive animation software that formed a significant part of the kitschy Internet of the two-thousands, before everyone began posting into the same platform content holes. The games on the site were cartoonish, cute, and pastel-colored, involving activities like controlling a rabbit jumping on stars into the sky or helping mice make a cup of tea. Orisinal was there for anyone to stumble upon, without the distraction of follower counts or sponsored content. You could e-mail the site to a friend, but otherwise there was nothing to share. That old version of the Internet is still there, but it’s been eclipsed by the modes of engagement that the social networks have incentivized. Through Reddit, I recently dug up an emulator of all the Orisinal games and quickly got absorbed into one involving assisting deer leaping across a woodland gap. My only reward was a personal high score. But it was more satisfying, and less lonely, than the experience these days on X. ♦

Permalink
June 20, 2024 at 11:16:09 PM GMT+2

L’effondrement de l’information ? (The collapse of news?) | Hubert Guillaud
https://hubertguillaud.wordpress.com/2024/01/11/leffondrement-de-linformation/

  • Social Network
  • Digital Society
  • Press
  • Enshitification

L’effondrement de l’information ? (The collapse of news?)


Since Cambridge Analytica, Trump, Brexit, and Covid, news has become a problem for social networks… Ordered by the authorities to arbitrate the truth, most of them now seem to be taking refuge away from news altogether, becoming venues for self-fulfillment that are resistant to politics. That is surely what explains the decline of news in users’ feeds, as Charlie Warzel astutely analyzes for The Atlantic. As the New York Times recently declared: “The major online platforms are breaking up with news.”

Social media platforms have long influenced the distribution of news, for example by pushing media outlets toward video, as Facebook did in 2015, deliberately overstating the average time users spent watching videos in order to push outlets into video production. Today, the platforms are turning away from news toward entertainment and advertising. And it isn’t just them: readers themselves seem to be hitting an information ceiling that is pushing them away from news, the Pew Research Center reports. News consumption, especially anxiety-inducing news, has plunged since 2020. Many have turned to easier content, like that produced by influencers. “Consumer trust doesn’t necessarily rest on the quality of the reporting or on the prestige and history of the brand, but on strong parasocial relationships,” Warzel observes. In 2014, the heyday of social news, 75% of American adults surveyed by Pew said the internet and social media helped them feel more informed. That is no longer the case.

With the algorithmic acceleration of news on social networks, news cycles became faster: Twitter became the editor-in-chief of the hot topics the media had to cover, in a reinforcement loop of popular subjects, as with Donald Trump’s tweets, which every outlet commented on. From 2013 to 2017, news was the fuel that kept social networks running, gradually turning information into a battlefield… Many users then turned away. New social networks exploded, TikTok among them, and the older networks adapted, Facebook in particular… A recent Morning Consult survey showed that “people liked Facebook more now that there was less news on it.”

Commentary on the news, like news itself, will not disappear entirely, Warzel argues, but the media have just lost much of their cultural influence. For John Herrman in the New Yorker, the 2024 US presidential campaign risks being the first without media to shape the big political narratives. “Social media brought out the worst in the news business, and the news, in turn, brought out the worst in many social networks.” The alliance between social networks and news has run its course. It remains to be seen what the world of influence will produce… in a world where the power of the written word and the structuring of information seem to be fading, under recommendation machines that are no longer built for them.

The end of a common world

In a second article, Warzel returns to this disappearance of news… For him, the internet is now fragmented by social recommendations, which mean we share very little of what others consume. “The very notion of popularity is up for debate”: no one really knows anymore whether a given trend is as viral as it appears. Metrics that resist comparison, opaque recommendations, news sites sealed off behind paywalls, the collapsing relevance of news on social media, and an invasion of advertising… We no longer understand what is happening online. You have probably never seen TikTok’s most popular videos of the year, any more than Facebook’s most viewed content! And hardly anyone talked about Netflix’s most popular show, The Night Agent! On one hand, popular content is more viral than ever; on the other, these popularities are more siloed than ever! Audience comparisons across content and platforms are becoming particularly hard to decode. For example, the recent controversy over the popularity of a Bin Laden speech among young Americans was not as viral as many claimed, as the Washington Post and Ryan Broderick have shown. It is as if we have entered a moment of great confusion about virality, comparing view counts from one platform to another even though their audiences and self-reinforcing mechanisms are very different. The fact that the platforms are closing off access to their metrics and to researchers doesn’t help, of course. Without a scale of comparison, without the means to see what circulates and how, we become blind to every phenomenon. Including one in particular: the manipulation of information by foreign powers…


These transformations are not yet finished or digested, and another is already looming, James Vincent argued for The Verge: “the old web is dying and the new web struggles to be born.” The production of synthetic text, images, video, and sound is parasitizing this recomposing ecosystem. Accessible directly from search engines, AI output is replacing the traffic that used to lead to the news itself. “AI aims to produce cheap content from other people’s work.” Bing AI or Google’s Bard could end up killing the very ecosystem that gave search engines their value, by offering their own “artificial abundance.” Granted, it wouldn’t be the first time the information ecosystem has changed: Wikipedia did kill the Encyclopaedia Britannica. But for James Vincent, while the web has structured the great battle over information from the start by changing who produces it, how it is accessed, and its economic models… nothing about the configuration now taking shape guarantees that the system that arrives will be better than the one we had.

“The internet isn’t fun anymore,” lamented Kyle Chayka in the New Yorker. By dint of algorithmic adjustments, social networks have become thoroughly boring!, explained Marie Turcan of Numerama, denouncing the web of boredom! The suppression of external links, and even more so of writing in favor of video, seems to be finishing off what quality remained, as David-Julien Rahmil reports for L’ADN. In another article, Rahmil notes that direct exchanges have overtaken public ones: “Omnipresent advertising, the exacerbation of political tensions, the culture of the perpetual clash, and the feeling of informational burnout have probably hastened the fall of the big social platforms.” From now on, each platform works only for itself. In an internet more fragmented than ever, each platform will produce its own professionals, its own influencers, and they will quite probably no longer overlap from one platform to the next.


As for the social networks, they have devalued themselves, as shown by Twitter, which long embodied the real-time news feed, the central venue of an influential and somewhat elitist conversation, Nilay Patel explains for The Verge. It was “the context collapse that made Twitter so dangerous and so reductive, but it was also what made it exciting.” The platform made its users faster and more agile, but also too reactive. Brands drifted away from the media to manage their own social presences. “Stepping back now, you can see exactly how destructive this situation has been for journalism: journalists around the world supplied Twitter with real-time news and commentary for free, learning more and more to shape stories for the algorithm rather than for their actual readers. Meanwhile, the media companies they worked for faced an exodus of their biggest advertising clients toward social platforms offering better, more integrated ad products, a direct connection to the audience, and no constraining editorial ethics. The news got smaller and smaller, even as the stories got bigger.” Everyone there was a journalist, even as the news industry itself was drying up. “Twitter was founded in 2006. Since that year, newspaper employment has fallen by 70% and the residents of more than half of US counties have little or no local news.” With the pandemic, Trump, and Black Lives Matter, Twitter reached a tipping point, collapsing under its own power. The audience began to ebb under its toxicity. For Patel, Musk’s takeover of the platform is a reaction to the waning power of celebrities and tech people. By amplifying its virality and its toxicity, the platform keeps declining. The challengers (Bluesky, Threads, Mastodon…) are to Twitter “what methadone is to heroin.” The audience is more fragmented than ever. Witness the users who still run from one platform to another to message their contacts… or the readers disoriented at no longer finding anything to read.

Generational change or enjunkification?

The age of conversation that opened the web of the 21st century is over! And what remains of our conversations will be taken over by conversational agents… which will be far more effective political and ideological agents than our fellow humans, as Olivier Ertzscheid explains! In time, chatbots point toward an even more personal relationship to information, each of us talking with our own, without any real link to shared content anymore.


For Max Read in the New York Times, perhaps these ongoing changes should be read differently. The transformations also have economic origins, he notes, if too briefly. “The end of the low-interest-rate era has upended startup economics, putting an end to fast-growth practices like blitzscaling and reducing the number of new internet companies vying for our attention; companies like Alphabet and Facebook are now mature, dominant firms rather than disruptive newcomers”… Yet rather than digging into this economic explanation, Max Read settles on another one. If the internet is dying, it is first of all because we are getting older. The form and culture of the internet were shaped by the preferences of the generations that took part in it. Today’s internet is no longer that of social media (2000-2010), nor that of social networks (2010-2020). “According to the consumer research firm GWI, millennials’ screen time has been declining steadily for years. Only 42% of 30-to-49-year-olds report being online ‘almost constantly,’ versus 49% of 18-to-29-year-olds. We aren’t even the early adopters anymore: 18-to-29-year-olds are more likely to have used ChatGPT than 30-to-49-year-olds (though perhaps only because we no longer have homework to do).”

“The most engaged American internet audience is no longer the millennials but our successors in Generation Z. If the internet is no longer fun for millennials, maybe that is simply because it is no longer our internet. It belongs to the zoomers now.”

This generation’s formats, celebrities, and very language are totally different, Read explains. “The zoomers, and the Gen Alpha teens nipping at their generational heels, still seem to be having fun online. Even if I find it all impenetrable and a little irritating, the creative expression and exuberant sociality that made the internet so much fun for me ten years ago are thriving among twentysomethings on TikTok, Instagram, Discord, Twitch, and even X. Skibidi Toilet, the Fanum tax, the rizzler: I won’t demean myself by pretending to know what these memes are, or what their appeal is, but I know the zoomers seem to love them. Or, at any rate, I can confirm that they love using them to confuse and alienate middle-aged millennials like me.”

Granted, they are being captured and exploited by a small handful of powerful platforms, but others before them sought to arbitrate and commodify our online activity… “Engagement-driven platforms have always cultivated influencers, abuse, and disinformation. When you dig deeper, what seems to have changed on the web over the past few years is not the structural dynamics but the cultural signifiers.”

“In other words, enjunkification has always been at work on the commercial web, whose largely ad-based business model seems to impose an ever-shifting race to the bottom. Maybe what frustrated, alienated, aging internet users like me are experiencing here is not only the fruit of an enjunkified internet, but also the loss of the cognitive elasticity, the sense of humor, and the abundance of free time required to navigate all this bewildering junk with agility and good cheer.”

But that is a very pessimistic view of the current transformations. In Rolling Stone, Anil Dash is enthusiastic: with its fragmentation, the internet is getting weird again, the way it was at the beginning! The disappearance of central apps (even if that isn’t quite what is happening) promises a return of strange services and unexpected propositions, like Neta Bomani’s school of poetic computation… or those of bot builder Stephan Bohacek, or of designer Elan Kiderman Ullendorff, who amuses himself making proposals for “escaping the algorithms”… or the small subversions of artist and programmer Darius Kazemi, who invited people to create their own autonomous micro social networks on Mastodon…

It isn’t clear these subversions ever stopped; above all, they were made invisible by the big social platforms. Nor is it clear that the influence audience and the synthetic audience now on the horizon will give them any more room than they had yesterday. Still, Anil Dash is right about one thing: the only certainty is that the strangest content will keep trying to reach us. Like the videos that colonized young children’s feeds off a few keywords, which James Bridle denounced in his excellent book Un nouvel âge des ténèbres (New Dark Age). Elan Kiderman Ullendorff amused himself creating a TikTok account of the most repellent videos he was offered, skipping everything that interested him and keeping only the worst: videos that seem to compose a portrait of Dorian Gray of each of us. The addictive web is the mirror of the repulsive web, the web we hate the mirror of the web of our dreams. The only certainty, yes: tomorrow’s web is likely to be far stranger and more disturbing than today’s! With algorithmic adjustments having slashed away the most interesting material, we will probably be confronted with the worst more than ever!

Hubert Guillaud

Permalink
June 20, 2024 at 10:36:54 PM GMT+2

Pivot to AI: Pay no attention to the man behind the curtain – Amy Castor
https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

  • Artificial intelligence
  • Crypto Currency
  • Enshitification

Pivot to AI: Pay no attention to the man behind the curtain


By Amy Castor and David Gerard, September 12, 2023, for amycastor.com

“all this talk of AI xrisk has the stink of marketing too. Ronald McDonald telling people that he has a bunker in New Zealand because the new burger they’re developing in R&D might be so delicious society will crumble.”

— Chris Martin

Crypto’s being dull again — but thankfully, AI has been dull too. The shine is coming off. So we’re back on the AI beat.

The AI winter will be privatized

Since the buzzword “artificial intelligence” was coined in the 1950s, AI has gone through several boom and bust cycles.

A new technological approach looks interesting and gets a few results. It gets ridiculously hyped up and lands funding. The tech turns out to be not so great, so the funding gets cut. The down cycles are called AI winters.

Past AI booms were funded mainly by the US Department of Defense. But the current AI boom has been almost completely funded by venture capital.

The VCs who spent 2021 and 2022 pouring money into crypto startups are pivoting to AI startups, because people buy the idea that AI will change the world. In the first half of 2023, VCs invested more than $40 billion into AI startups, and $11 billion just in May 2023. This is even as overall VC funding for startups dropped by half in the same period from the year before. [Reuters; Washington Post]

The entire NASDAQ is being propped up by AI. It’s one of the only fields that is still hiring.

In contrast, the DOD only requested $1.8 billion for AI funding in its 2024 budget. [DefenseScoop]

So why are VCs pouring money into AI?

Venture capital is professional gambling. VCs are looking for a liquidity event. One big winner can pay for a lot of failures.

Finding someone to buy a startup you’ve funded takes marketing and hype. The company doing anything useful, or anything that even works, is optional.

What’s the exit plan for AI VCs? Where’s the liquidity event? Do they just hope the startups they fund will do an initial public offering or just get acquired by a tech giant before the market realizes AI is running out of steam?

We’re largely talking about startups whose business model is sending queries to OpenAI.

At least with “Web3,” the VCs would just dump altcoins on retail investors via their very good friends at Coinbase. But with AI, we can’t see an obvious exit strategy beyond finding a greater fool.

Pay no attention to the man behind the curtain

The magical claim of machine learning is that if you give the computer data, the computer will work out the relations in the data all by itself. Amazing!

In practice, everything in machine learning is incredibly hand-tweaked. Before AI can find patterns in data, all that data has to be tagged, and output that might embarrass the company needs to be filtered.

Commercial AI runs on underpaid workers in English-speaking countries in Africa creating new training data and better responses to queries. It’s a painstaking and laborious process that doesn’t get talked about nearly enough.

The workers do individual disconnected actions all day, every day — so-called “tasks” — working for companies like Remotasks, a subsidiary of Scale AI, and doing a huge amount of the work behind OpenAI.

AI doesn’t remove human effort. It just makes it much more alienated.

There’s an obvious hack here. If you are an AI task worker, your goal is to get paid as much as possible without too much effort. So why not use some of the well-known tools for this sort of job? [New York]

Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.

Remember, the important AI use case is getting venture capital funding. Why buy or rent expensive computing when you can just pay people in poor countries to fake it? Many “AI” systems are just a fancier version of the original Mechanical Turk.

Facebook’s M from 2017 was an imitation of Apple’s Siri virtual assistant. The trick was that hard queries would be punted to a human. Over 70% of queries ended up being answered by a human pretending to be the bot. M was shut down a year after launch.

Kaedim is a startup that claims to turn two-dimensional sketches into 3-D models using “machine learning.” The work is actually done entirely by human modelers getting paid $1-$4 per 15-minute job. But then, the founder, Konstantina Psoma, was a Forbes 30 Under 30. [404 Media; Forbes]

The LLM is for spam

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

Supertoys last all summer long

End users don’t actually want AI-based products. Machine learning systems can generate funny text and pictures to show your friends on social media. But even that’s wearing thin — users mostly see LLM output in the form of spam.

LLM writing style and image generator drawing style are now seen as signs of low quality work. You can certainly achieve artistic quality with AI manipulation, as in this music video — but even this just works on its novelty value. [YouTube]

For commercial purposes, the only use case for AI is still to replace quality work with cheap ersatz bot output — in the hope of beating down labor costs.

Even then, the AI just isn’t up to the task.

Microsoft put $10 billion into OpenAI. The Bing search engine added AI chat — and it had almost no effect on user numbers. It turns out that search engine users don’t want weird bot responses full of errors. [ZDNet]

The ChatGPT website’s visitor numbers went down 10% in June 2023. LLM text generators don’t deliver commercial results, and novelty only goes so far. [Washington Post]

After GPT-3 came out, OpenAI took three years to make an updated version. GPT-3.5 was released as a stop-gap in October 2022. Then GPT-4 finally came out in March 2023! But GPT-4 turns out to be eight instances of GPT-3 in a trenchcoat. The technology is running out of steam. [blog post; Twitter, archive]

Working at all will be in the next version

The deeper problem is that many AI systems simply don’t work. The 2022 paper “The fallacy of AI functionality” notes that AI systems are often “constructed haphazardly, deployed indiscriminately, and promoted deceptively.”

Still, machine learning systems do some interesting things, a few of which are even genuinely useful. We asked GitHub and they told us that they encourage their own employees to use the GitHub Copilot AI-based autocomplete system for their own internal coding — with due care and attention. We know of other coders who find Copilot to be far less work than doing the boilerplate by hand.

(Though Google has forbidden its coders from using its AI chatbot, Bard, to generate internal code.) [The Register]

Policy-makers and scholars — not just the media — tend to propagate AI hype. Even if they try to be cautious, they may work in terms of ethics of deployment, and presume that the systems do what they’re claimed to do — when they often just don’t.

Ethical considerations come after you’ve checked basic functionality. Always put functionality first. Does the system work? Way too often, it just doesn’t. Test and measure. [arXiv, PDF, 2022]
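As a toy illustration of “test and measure” (the data and the do-nothing model below are invented for the example, not taken from the paper), the cheapest functionality check is whether a system beats the dumbest possible baseline:

# "Does the system work?" -- the cheapest functionality check there is:
# compare the vendor's model against always-predict-the-majority-class.
# The labels and predictions below are invented for illustration.
from collections import Counter

def majority_baseline(labels: list) -> float:
    """Accuracy of always predicting the most common label."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

def accuracy(preds: list, labels: list) -> float:
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Pretend these came from the system under evaluation.
labels      = [0, 0, 0, 1, 0, 1, 0, 0]
model_preds = [0, 0, 0, 0, 0, 0, 0, 0]   # a "model" that learned nothing

baseline = majority_baseline(labels)
model = accuracy(model_preds, labels)
print(f"baseline={baseline:.2f}, model={model:.2f}")
if model <= baseline:
    print("Fails the functionality check: no better than a constant guess.")

A system that can’t clear this bar has no functionality to be ethical about.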

AI is the new crypto mining

In 2017, the hot buzzword was “blockchain” — because the price of bitcoin was going up. Struggling businesses would add the word “blockchain” to their name or their mission statement, in the hope their stock price would go up. Long Island Iced Tea became Long Blockchain and saw its shares surge 394%. Shares in biotech company Bioptix doubled in price when it changed its name to Riot Blockchain and pivoted to bitcoin mining. [Bloomberg, 2017, archive; Bloomberg, 2017, archive]

The same is now happening with AI. Only it’s not just the venture capitalists — even the crypto miners are pivoting to AI.

Bitcoin crashed last year and crypto mining is screwed. As far as we can work out, the only business plan was to get foolish investors’ money during the bubble, then go bankrupt.

In mid-2024, the bitcoin mining reward will halve again. So the mining companies are desperate to find other sources of income.

Ethereum moved to proof of stake in September 2022 and told its miners to just bugger off. Ethereum was mined on general-purpose video cards — so miners have a glut of slightly-charred number crunching machinery.

Hive Blockchain in Vancouver is pivoting to AI to repurpose its pile of video cards. It’s also changed its name to Hive Digital Technologies. [Bloomberg, archive; press release]

Marathon Digital claims that “over time you’re going to see that blockchain technologies and AI have a very tight coupling.” No, us neither. Marathon is doubling and tripling down on bitcoin mining — but, buzzwords! [Decrypt]

Nvidia makes the highest-performance video cards. The GPU processors on these cards turn out to be useful for massively parallel computations in general — such as running the calculations needed to train machine learning models. Nvidia is having an excellent year and its market cap is over $1 trillion.

So AI can take over from crypto in yet another way — carbon emissions from running all those video cards.

AI’s massive compute load doesn’t just generate carbon — it uses huge amounts of fresh water for cooling. Microsoft’s water usage went up 34% between 2021 and 2022, and they blame AI computation. ChatGPT uses about 500 mL of water every time you have a conversation with it. [AP]

We don’t yet have a Digiconomist of AI carbon emissions. Go start one.

Permalink
March 28, 2024 at 10:35:34 PM GMT+1

Cybersecurity is broken
https://crankysec.com/blog/broken/

  • Cybersecurity
  • Enshitification

Cybersecurity is broken

27 March 2024

It is a well-known fact that we dish out a whole lot of shit talk around these parts. And by "we" I mean me, but that's beside the point. Talking smack about 10-ply LinkedIn vCISOs is, quite honestly, pretty easy and kind of satisfying because some 8 out of 10 times they are stupid as fuck and deserve the heckling. The remaining 2 out of 10 are maybe trying to fight the good fight, and do right by their teams. Maybe. Don't you quote me on that figure. Actually, best you don't quote me at all because there are peeps out there saying things that are much more clever. Take this quote(?) from one Bob Metcalfe (tks, snowcrasher!):

"The Stockings Were Hung by the Chimney with Care"

The ARPA Computer Network is susceptible to security violations for at least
the three following reasons:

(1) Individual sites, used to physical limitations on machine access, have
not yet taken sufficient precautions toward securing their systems
against unauthorized remote use. For example, many people still use
passwords which are easy to guess: their first names, their initials,
their host name spelled backwards, a string of characters which are
easy to type in sequence (e.g. ZXCVBNM).

(2) The TIP allows access to the ARPANET to a much wider audience than
is thought or intended. TIP phone numbers are posted, like those
scribbled hastily on the walls of phone booths and men's rooms. The
TIP requires no user identification before giving service. Thus,
many people, including those who used to spend their time ripping off
Ma Bell, get access to our stockings in a most anonymous way.

(3) There is lingering affection for the challenge of breaking
someone's system. This affection lingers despite the fact that
everyone knows that it's easy to break systems, even easier to
crash them.

All of this would be quite humorous and cause for raucous eye
winking and elbow nudging, if it weren't for the fact that in
recent weeks at least two major serving hosts were crashed
under suspicious circumstances by people who knew what they
were risking; on yet a third system, the system wheel password
was compromised -- by two high school students in Los Angeles
no less.

We suspect that the number of dangerous security violations is
larger than any of us know and is growing. You are advised
not to sit "in hope that Saint Nicholas would soon be there".

That's from 1973. The dude who invented Ethernet was worried about what we now call cybersecurity fifty fucking years ago. Several wake-up calls have happened since then: phreaking peeps exploding the phones, hacker supergroups testifying in front of the US Senate on the topic of cybersecurity, hacker supergroups releasing super easy to use RATs, a cornucopia of malware, including shit made by nation-states, and ransomware attacks that are only profitable because some people just decided that an inefficient distributed database was worth some money. A lot of those issues were only made possible by people's insistence on using a programming language from half a century ago when better options are available. And that's just the technical side of things.
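Item (1) of that memo is mechanical enough to check automatically. Here is a minimal sketch of a checker for exactly the weak patterns Metcalfe lists; the example user, host name, and length threshold are invented for illustration:

# A checker for the weak-password patterns in Metcalfe's 1973 memo:
# first names, initials, host name spelled backwards, easy keyboard runs.
# The example inputs below are invented for illustration.

KEYBOARD_ROWS = ("qwertyuiop", "asdfghjkl", "zxcvbnm")

def is_keyboard_run(pw: str) -> bool:
    """True if pw is a substring of one keyboard row, forwards or backwards."""
    p = pw.lower()
    return any(p in row or p in row[::-1] for row in KEYBOARD_ROWS)

def is_weak(pw: str, first_name: str, initials: str, hostname: str) -> bool:
    p = pw.lower()
    return (
        p == first_name.lower()
        or p == initials.lower()
        or p == hostname.lower()[::-1]       # host name spelled backwards
        or (len(p) >= 4 and is_keyboard_run(p))
    )

# Hypothetical 1973 ARPANET user "Bob Metcalfe" on host "maxc":
for candidate in ("bob", "bm", "cxam", "zxcvbnm", "correct horse"):
    verdict = "WEAK" if is_weak(candidate, "Bob", "BM", "maxc") else "ok"
    print(f"{candidate!r}: {verdict}")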

Take, for example, the Pen Test Partners' research on Peloton's API security. The researchers were able to grab a whole bunch of data that was supposed to be private, disclosed the issue to Peloton who, in turn, basically ghosted the researchers until a TechCrunch reporter got involved. Classic case of "we're not sorry we suck at protecting our customers' data, we're just sorry we got caught." I mean, if you need to get fucking TechCrunch involved to be taken seriously, the situation is hopeless.

Absolutely no amount of gentle pleas disguised as executive orders from the White House urging people to use memory-safe languages will solve the problem. CISA, despite all the phenomenal work they do, can't charge people who mishandle data with negligence; critical infrastructure involved or not. And maybe they should.

You see, cybersecurity is broken because of the lack of consequences. It's really that simple. When literally nothing happens when some stupid service gets popped and loses your data they had no business collecting in the first place, this kind of thing will happen over and over and over again. Why the fuck do you need my home address just so I can copy and paste some GIFs? Because you want to sell this data to data brokers, and you know there will be absolutely no negative consequences if you mishandle this data, fucking over the people who keep your business afloat. So, companies big and small fuck things up and we need to clean up the mess and face the consequences. Sounds about right.

Cybersecurity is even more broken when these companies that face zero consequences look at their payroll and think "Wait a fucking minute! Why the hell are we spending six full dollars a year on cybersecurity when we can, I dunno, do nothing at all for free because cybersecurity incidents will not negatively impact our bottom line whatsoever?" That's why you, my cybersecurity and infosec brethren, are getting laid off. That's why you don't have the tools you need. That's why you don't get the training you should. That's why you're overworked. That's why you're stuck as an underpaid individual contributor doing the work of 5 people for $75k/year while your CISO who makes $500k is on LinkedIn all day writing stupid shit about AI.

Cybersecurity is broken because fixing it benefits no one but the regular, unremarkable, salt of the earth folks. And, according to the powers that be, fuck them folks. Fixing it requires strong data protection laws, but passing laws is just not something the overwhelming majority of legislative bodies in the world do. Passing laws that slightly inconvenience multi-billion dollar corporations while improving the lives of the plebes is even more of a tall order. And that's true for a whole lot of things that have nothing to do with cybersecurity, but this is a blog about cybersecurity, so please bear with me.

That's the answer: data protection laws. You get my data for rEaSoNs, and you fuck it up? You should pay a hefty price that cannot be written off as the cost of doing business. You make data brokers illegal, or, at the very least, way less profitable. You do what the payment card industry has been doing for decades: you tell everyone handling your data that they have to follow a very comprehensive set of data protection rules, lest they get fined or cut off entirely. A handful of credit card companies can do that, so I'm sure mighty governments can, too.

But how do we push things in the right direction? Well, that's one of the many topics we discuss in our Discord server (Hey you guys!). Not only are my fellow Crankies inspiring the shit out of me every day, we have bigger plans than just shitposting and commiserating. Turns out that buying a congressperson, er, lobbying is not that expensive, really. We are working on something that we hope will help lift everyone in this industry up. As I once wrote on that very Discord: "When we abstain from using our collective power of influence, we lose by default." Or "you miss 100% of the shots you don't take" or whatever the fuck Gretzky said. We're about 700-strong and planning on doing great things. Come join us because the movement cannot be stopped.


Permalink
March 28, 2024 at 10:32:11 PM GMT+1

Pluralistic: “If buying isn’t owning, piracy isn’t stealing” (08 Dec 2023) – Pluralistic: Daily links from Cory Doctorowhttps://pluralistic.net/2023/12/08/playstationed/#tyler-james-hill

  • Enshitification
  • Big Corpo
  • Planned Obsolescence

Pluralistic: "If buying isn't owning, piracy isn't stealing" (08 Dec 2023)

20 years ago, I got in a (friendly) public spat with Chris Anderson, who was then the editor in chief of Wired. I'd publicly noted my disappointment with glowing Wired reviews of DRM-encumbered digital devices, prompting Anderson to call me unrealistic for expecting the magazine to condemn gadgets for their DRM:

https://longtail.typepad.com/the_long_tail/2004/12/is_drm_evil.html

I replied in public, telling him that he'd misunderstood. This wasn't an issue of ideological purity – it was about good reviewing practice. Wired was telling readers to buy a product because it had features x, y and z, but at any time in the future, without warning, without recourse, the vendor could switch off any of those features:

https://memex.craphound.com/2004/12/29/cory-responds-to-wired-editor-on-drm/

I proposed that all Wired endorsements for DRM-encumbered products should come with this disclaimer:

WARNING: THIS DEVICE’S FEATURES ARE SUBJECT TO REVOCATION WITHOUT NOTICE, ACCORDING TO TERMS SET OUT IN SECRET NEGOTIATIONS. YOUR INVESTMENT IS CONTINGENT ON THE GOODWILL OF THE WORLD’S MOST PARANOID, TECHNOPHOBIC ENTERTAINMENT EXECS. THIS DEVICE AND DEVICES LIKE IT ARE TYPICALLY USED TO CHARGE YOU FOR THINGS YOU USED TO GET FOR FREE — BE SURE TO FACTOR IN THE PRICE OF BUYING ALL YOUR MEDIA OVER AND OVER AGAIN. AT NO TIME IN HISTORY HAS ANY ENTERTAINMENT COMPANY GOTTEN A SWEET DEAL LIKE THIS FROM THE ELECTRONICS PEOPLE, BUT THIS TIME THEY’RE GETTING A TOTAL WALK. HERE, PUT THIS IN YOUR MOUTH, IT’LL MUFFLE YOUR WHIMPERS.

Wired didn't take me up on this suggestion.

But I was right. The ability to change features, prices, and availability of things you've already paid for is a powerful temptation to corporations. Inkjet printers were always a sleazy business, but once these printers got directly connected to the internet, companies like HP started pushing out "security updates" that modified your printer to make it reject the third-party ink you'd paid for:

See also : Ink-Stained Wretches: The Battle for the Soul of Digital Freedom Taking Place Inside Your Printer

Now, this scam wouldn't work if you could just put things back the way they were before the "update," which is where the DRM comes in. A thicket of IP laws make reverse-engineering DRM-encumbered products into a felony. Combine always-on network access with indiscriminate criminalization of user modification, and the enshittification will follow, as surely as night follows day.

This is the root of all the right to repair shenanigans. Sure, companies withhold access to diagnostic codes and parts, but codes can be extracted and parts can be cloned. The real teeth in blocking repair come from the law, not the tech. The company that makes McDonald's wildly unreliable McFlurry machines makes a fortune charging franchisees to fix these eternally broken appliances. When a third party threatened this racket by reverse-engineering the DRM that blocked independent repair, they got buried in legal threats:

See also : https://pluralistic.net/2021/04/20/euthanize-rentier-enablers/#cold-war

Everybody loves this racket. In Poland, a team of security researchers at the OhMyHack conference just presented their teardown of the anti-repair features in NEWAG Impuls locomotives. NEWAG boobytrapped their trains to try and detect if they've been independently serviced, and to respond to any unauthorized repairs by bricking themselves:

See also : https://mamot.fr/@q3k@hackerspace.pl/111528162905209453

Poland is part of the EU, meaning that they are required to uphold the provisions of the 2001 EU Copyright Directive, including Article 6, which bans this kind of reverse-engineering. The researchers are planning to present their work again at the Chaos Communications Congress in Hamburg this month – Germany is also a party to the EUCD. The threat to researchers from presenting this work is real – but so is the threat to conferences that host them:

See also : https://www.cnet.com/tech/services-and-software/researchers-face-legal-threats-over-sdmi-hack/

20 years ago, Chris Anderson told me that it was unrealistic to expect tech companies to refuse demands for DRM from the entertainment companies whose media they hoped to play. My argument – then and now – was that any tech company that sells you a gadget that can have its features revoked is defrauding you. You're paying for x, y and z – and if they are contractually required to remove x and y on demand, they are selling you something that you can't rely on, without making that clear to you.

But it's worse than that. When a tech company designs a device for remote, irreversible, nonconsensual downgrades, they invite both external and internal parties to demand those downgrades. Like Pavel Chekov says, a phaser on the bridge in Act I is going to go off by Act III. Selling a product that can be remotely, irreversibly, nonconsensually downgraded inevitably results in the worst person at the product-planning meeting proposing to do so. The fact that there are no penalties for doing so makes it impossible for the better people in that meeting to win the ensuing argument, leading to the moral injury of seeing a product you care about reduced to a pile of shit:

See also : https://pluralistic.net/2023/11/25/moral-injury/#enshittification

But even if everyone at that table is a swell egg who wouldn't dream of enshittifying the product, the existence of a remote, irreversible, nonconsensual downgrade feature makes the product vulnerable to external actors who will demand that it be used. Back in 2022, Adobe informed its customers that it had lost its deal to include Pantone colors in Photoshop, Illustrator and other "software as a service" packages. As a result, users would now have to start paying a monthly fee to see their own, completed images. Fail to pay the fee and all the Pantone-coded pixels in your artwork would just show up as black:

See also : https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process

Adobe blamed this on Pantone, and there was lots of speculation about what had happened. Had Pantone jacked up its price to Adobe, so Adobe passed the price on to its users in the hopes of embarrassing Pantone? Who knows? Who can know? That's the point: you invested in Photoshop, you spent money and time creating images with it, but you have no way to know whether or how you'll be able to access those images in the future. Those terms can change at any time, and if you don't like it, you can go fuck yourself.

These companies are all run by CEOs who got their MBAs at Darth Vader University, where the first lesson is "I have altered the deal, pray I don't alter it further." Adobe chose to design its software so it would be vulnerable to this kind of demand, and then its customers paid for that choice. Sure, Pantone are dicks, but this is Adobe's fault. They stuck a KICK ME sign to your back, and Pantone obliged.

This keeps happening and it's gonna keep happening. Last week, Playstation owners who'd bought (or "bought") Warner TV shows got messages telling them that Warner had walked away from its deal to sell videos through the Playstation store, and so all the videos they'd paid for were going to be deleted forever. They wouldn't even get refunds (to be clear, refunds would also be bullshit – when I was a bookseller, I didn't get to break into your house and steal the books I'd sold you, not even if I left some cash on your kitchen table).

Sure, Warner is an unbelievably shitty company run by the single most guillotineable executive in all of Southern California, the loathsome David Zaslav, who oversaw the merger of Warner with Discovery. Zaslav is the creep who figured out that he could make more money cancelling completed movies and TV shows and taking a tax writeoff than he stood to make by releasing them:

See also : https://aftermath.site/there-is-no-piracy-without-ownership

Imagine putting years of your life into making a program – showing up on set at 5AM and leaving your kids to get their own breakfast, performing stunts that could maim or kill you, working 16-hour days during the acute phase of the covid pandemic and driving home in the night, only to have this absolute turd of a man delete the program before anyone could see it, forever, to get a minor tax advantage. Talk about moral injury!

But without Sony's complicity in designing a remote, irreversible, nonconsensual downgrade feature into the Playstation, Zaslav's war on art and creative workers would be limited to material that hadn't been released yet. Thanks to Sony's awful choices, David Zaslav can break into your house, steal your movies – and he doesn't even have to leave a twenty on your kitchen table.

The point here – the point I made 20 years ago to Chris Anderson – is that this is the foreseeable, inevitable result of designing devices for remote, irreversible, nonconsensual downgrades. Anyone who was paying attention should have figured that out in the GW Bush administration. Anyone who does this today? Absolute flaming garbage.

Sure, Zaslav deserves to be staked out over an anthill and slathered in high-fructose corn syrup. But save the next anthill for the Sony exec who shipped a product that would let Zaslav come into your home and rob you. That piece of shit knew what they were doing and they did it anyway. Fuck them. Sideways. With a brick.

Meanwhile, the studios keep making the case for stealing movies rather than paying for them. As Tyler James Hill wrote: "If buying isn't owning, piracy isn't stealing":

See also : https://bsky.app/profile/tylerjameshill.bsky.social/post/3kflw2lvam42n

Permalink
December 10, 2023 at 4:06:07 PM GMT+1

Médias : les premières expériences 100 % IA | Les Echoshttps://www.lesechos.fr/tech-medias/medias/ces-sites-dinformations-deja-ecrits-par-des-ia-2038642

  • Artificial intelligence
  • Enshitification
  • Press

Media: the first 100% AI experiments

Some sites are experimenting with conversational bots to write all of their content, and say so openly. Others, by contrast, hide it, causing concern.

By Marina Alcaraz

Published Dec. 1, 2023 at 1:43 PM. Updated Dec. 6, 2023 at 5:59 PM.

On the site's "about" page you'll find their backgrounds, their interests, their photos... They each have a style of their own and cover their own beats. Carine Gravée, Vianney Garet, and Nina Gavetière have names, but they are not real: they are robot journalists invented from scratch.

Sites produced entirely by artificial intelligence (AI) are starting to appear. For now, admittedly, the initiatives remain limited. But some are trying to carve out a niche and experiment with a new way of producing content, either openly claiming it... or not.

A magazine on newsstands

That is the case, for example, of the magazine (print and web) « LHC - Les Heures Claires », launched a few weeks ago, which presents itself as the first French magazine 99% generated by AI. Like a conventional publication, it includes news, interviews (with the questions asked by a robot), fashion pages, and a horoscope.

Behind it is Rémy Rostan, a former photographer. "I'm still quite surprised by what ChatGPT comes up with," he admits. The magazine, sponsored by Easy Partner, a recruitment firm specializing in tech, is due to launch on newsstands in the spring as a monthly. "I'm aiming at the community of technophiles and the curious," explains Rémy Rostan, who hopes to reach 20,000 copies sold.

Other examples: Tech Generation and Cuisine Generation, launched in the spring by Ari Kouts, an innovation consultant at Viseo. He hooked the tech site up to various specialist publications in the sector, such as TechCrunch. Each fictional "journalist" (each with a distinct style) picks up press articles on current topics (the crisis at OpenAI, statements by Musk...) and rewrites them, crediting the source, with no human intervention at all. The result: a few inconsistencies and some clumsiness, but articles that read like certain blog posts.

On the cooking side, the "chefs" dream up recipes, "and a good number of them are plausible and even good, though the cooking times are approximate," says Ari Kouts. The consultant launched these "media outlets" as an experiment rather than to monetize them. "It also serves as a reminder of the value of analysis and of investigative journalism, which robots cannot do," he says.

Both sites have a small audience (around 3,000 visits a month) and occasionally surface in Google News! Even though the odds are low in this case, since the experiment is something of a game, "the primary sources could block this type of practice by invoking parasitism, that is, appropriating the value of an article," notes Julien Guinot-Deléry, a lawyer at Gide.

Professionals' fears

But there are also sites whose production methods have been kept quiet. "In a working group of the Commission paritaire des publications et agences de presse, the fear that there are sites with a strong AI component has been raised," says one industry professional. "We all have that risk in mind," adds Pierre Pétillault, director of the Alliance de la presse d'information générale.

In a recent study, Newsguard identified a host of sites with articles rewritten by AI (nearly 600 as of the end of November!), with no human oversight. And in many cases, they carry programmatic advertising. In the United States, Sports Illustrated and TheStreet (Arena Group) were singled out by an investigation from the outlet Futurism. Articles were allegedly written by AIs under fake journalist profiles (with images bought from a site selling AI-generated photos), which infuriated the journalists. The media group defended itself, saying it had bought some of the pieces from an agency.

Permalink
December 10, 2023 at 3:51:38 PM GMT+1

I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires) | Jane Friedmanhttps://janefriedman.com/i-would-rather-see-my-books-pirated/

  • Artificial intelligence
  • Societal Collapse
  • Mass Consumption
  • Enshitification

I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)

Updated: August 8, 2023

First Published: August 7, 2023 by Jane Friedman 60 Comments

Update (afternoon of Aug. 7): Hours after this post was published, my official Goodreads profile was cleaned of the offending titles. I did file a report with Amazon, complaining that these books were using my name and reputation without my consent. Amazon’s response: “Please provide us with any trademark registration numbers that relate to your claim.” When I replied that I did not have a trademark for my name, they closed the case and said the books would not be removed from sale.

Update (morning of Aug. 8): The fraudulent titles appear to be entirely removed from Amazon and Goodreads alike. I’m sure that’s in no small part due to my visibility and reputation in the writing and publishing community. What will authors with smaller profiles do when this happens to them? If you ever find yourself in a similar situation, I’d start by reaching out to an advocacy organization like The Authors Guild (I’m a member).

Update (evening of Aug. 8): Since these fake books have been removed, I’ve added titles and screenshots below, as well as an explanation of why I believe the books are AI generated.


There’s not much that makes me angry these days about writing and publishing. I’ve seen it all. I know what to expect from Amazon and Goodreads. Meaning: I don’t expect much, and I assume I will be continually disappointed. Nor do I have the power to change how they operate. My energy-saving strategy: move on and focus on what you can control.

That’s going to become much harder to do if Amazon and Goodreads don’t start defending against the absolute garbage now being spread across their sites.

I know my work gets pirated and frankly I don’t care. (I’m not saying other authors shouldn’t care, but that’s not a battle worth my time today.)

But here’s what does rankle me: garbage books getting uploaded to Amazon where my name is credited as the author, such as:

  • A Step-by-Step Guide to Crafting Compelling eBooks, Building a Thriving Author Platform, and Maximizing Profitability
  • How to Write and Publish an eBook Quickly and Make Money
  • Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon
  • Publishing Power: Navigating Amazon’s Kindle Direct Publishing
  • Igniting Ideas: Your Guide to Writing a Bestseller eBook on Amazon

Whoever’s doing this is obviously preying on writers who trust my name and think I’ve actually written these books. I have not. Most likely they’ve been generated by AI. (Why do I think this? I’ve used these AI tools extensively to test how well they can reproduce my knowledge. I also do a lot of vanity prompting, like “What would Jane Friedman say about building author platform?” I’ve been blogging since 2009—there’s a lot of my content publicly available for training AI models. As soon as I read the first pages of these fake books, it was like reading ChatGPT responses I had generated myself.)

It might be possible to ignore this nonsense on some level since these books aren’t receiving customer reviews (so far), and mostly they sink to the bottom of search results (although not always). At the very least, if you look at my author profile on Amazon, these junk books don’t appear. A reader who applies some critical thinking might think twice before accepting these books as mine.

Still, it’s not great. And it falls on me, the author—the one with a reputation at stake—to get these misleading books removed from Amazon. I’m not even sure it’s possible. I don’t own the copyright to these junk books. I don’t exactly “own” my name either—lots of other people who are also legit authors share my name, after all. So on what grounds can I successfully demand this stop, at least in Amazon’s eyes? I’m not sure.

To add insult to injury, these sham books are getting added to my official Goodreads profile. A reasonable person might think I control what books are shown on my Goodreads profile, or that I approve them, or at the very least I could have them easily removed. Not so.

If you need to have your Goodreads profile corrected—as far as the books credited to you—you have to reach out to volunteer “librarians” on Goodreads, which requires joining a group, then posting in a comment thread that you want illegitimate books removed from your profile.

When I complained about this on Twitter/X, an author responded that she had to report 29 illegitimate books in just the last week alone. 29!

With the flood of AI content now published at Amazon, sometimes attributed to authors in a misleading or fraudulent manner, how can anyone reasonably expect working authors to spend every week for the rest of their lives policing this? And if authors don't police it, they will certainly hear about it, from readers concerned about these garbage books, and from readers who credulously bought this crap and have complaints. Or authors might not hear anything at all, and lose a potential reader forever.

We desperately need guardrails on this landslide of misattribution and misinformation. Amazon and Goodreads, I beg you to create a way to verify authorship, or for authors to easily block fraudulent books credited to them. Do it now, do it quickly.

Unfortunately, even if and when you get these insane books removed from your official profiles, they will still be floating around out there, with your name, on two major sites that get millions of visitors, just waiting to be "discovered." And there's absolutely nothing you can do about it.

Permalink
August 9, 2023 at 11:01:20 AM GMT+2

ChatGPT Is a Blurry JPEG of the Web | The New Yorkerhttps://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

  • Artificial intelligence
  • Enshitification
  • Big Data
  • Big Tech

ChatGPT Is a Blurry JPEG of the Web

OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?

By Ted Chiang February 9, 2023

In 2013, workers at a German construction company noticed something odd about their Xerox photocopier: when they made a copy of the floor plan of a house, the copy differed from the original in a subtle but significant way. In the original floor plan, each of the house’s three rooms was accompanied by a rectangle specifying its area: the rooms were 14.13, 21.11, and 17.42 square metres, respectively. However, in the photocopy, all three rooms were labelled as being 14.13 square metres in size. The company contacted the computer scientist David Kriesel to investigate this seemingly inconceivable result. They needed a computer scientist because a modern Xerox photocopier doesn’t use the physical xerographic process popularized in the nineteen-sixties. Instead, it scans the document digitally, and then prints the resulting image file. Combine that with the fact that virtually every digital image file is compressed to save space, and a solution to the mystery begins to suggest itself.

Compressing a file requires two steps: first, the encoding, during which the file is converted into a more compact format, and then the decoding, whereby the process is reversed. If the restored file is identical to the original, then the compression process is described as lossless: no information has been discarded. By contrast, if the restored file is only an approximation of the original, the compression is described as lossy: some information has been discarded and is now unrecoverable. Lossless compression is what’s typically used for text files and computer programs, because those are domains in which even a single incorrect character has the potential to be disastrous. Lossy compression is often used for photos, audio, and video in situations in which absolute accuracy isn’t essential. Most of the time, we don’t notice if a picture, song, or movie isn’t perfectly reproduced. The loss in fidelity becomes more perceptible only as files are squeezed very tightly. In those cases, we notice what are known as compression artifacts: the fuzziness of the smallest JPEG and MPEG images, or the tinny sound of low-bit-rate MP3s.
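
To make the lossless/lossy distinction concrete, here is a minimal Python sketch (an illustration, not any particular codec): zlib stands in for lossless compression, and hand-rounding the numbers stands in for what lossy encoders do to pixels and samples.

```python
import zlib

text = b"the three rooms were 14.13, 21.11, and 17.42 square metres"

# Lossless: decoding recovers every byte exactly.
packed = zlib.compress(text)
assert zlib.decompress(packed) == text

# Lossy (toy version): quantize the numbers before storing. However the
# result is later decoded, the discarded digits are gone for good.
quantized = b"the three rooms were 14, 21, and 17 square metres"
print(len(packed), "bytes lossless vs", len(zlib.compress(quantized)), "bytes lossy")
```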

Xerox photocopiers use a lossy compression format known as JBIG2, designed for use with black-and-white images. To save space, the copier identifies similar-looking regions in the image and stores a single copy for all of them; when the file is decompressed, it uses that copy repeatedly to reconstruct the image. It turned out that the photocopier had judged the labels specifying the area of the rooms to be similar enough that it needed to store only one of them—14.13—and it reused that one for all three rooms when printing the floor plan.
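
The mechanism is easy to caricature in a few lines of Python. This toy "codec" (not real JBIG2) stores one representative patch for every group its similarity test lumps together; because the test below is deliberately too permissive, all three room labels come back as 14.13, which is essentially the bug Kriesel found.

```python
# Toy JBIG2-style symbol matching: patches judged "similar enough" share
# one stored copy, so their differences vanish on reconstruction.
def similar(a: str, b: str) -> bool:
    # deliberately too permissive: two matching characters count as "the same"
    return len(a) == len(b) and sum(x == y for x, y in zip(a, b)) >= 2

labels = ["14.13", "21.11", "17.42"]   # the three room areas
stored: list[str] = []                 # dictionary of representative patches
indices: list[int] = []                # compressed page: one index per patch

for patch in labels:
    for i, rep in enumerate(stored):
        if similar(patch, rep):
            indices.append(i)          # reuse the earlier copy
            break
    else:
        stored.append(patch)
        indices.append(len(stored) - 1)

print([stored[i] for i in indices])    # ['14.13', '14.13', '14.13']
```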

The fact that Xerox photocopiers use a lossy compression format instead of a lossless one isn’t, in itself, a problem. The problem is that the photocopiers were degrading the image in a subtle way, in which the compression artifacts weren’t immediately recognizable. If the photocopier simply produced blurry printouts, everyone would know that they weren’t accurate reproductions of the originals. What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t. (In 2014, Xerox released a patch to correct this issue.)

I think that this incident with the Xerox photocopier is worth bearing in mind today, as we consider OpenAI’s ChatGPT and other similar programs, which A.I. researchers call large language models. The resemblance between a photocopier and a large language model might not be immediately apparent—but consider the following scenario. Imagine that you’re about to lose your access to the Internet forever. In preparation, you plan to create a compressed copy of all the text on the Web, so that you can store it on a private server. Unfortunately, your private server has only one per cent of the space needed; you can’t use a lossless compression algorithm if you want everything to fit. Instead, you write a lossy algorithm that identifies statistical regularities in the text and stores them in a specialized file format. Because you have virtually unlimited computational power to throw at this task, your algorithm can identify extraordinarily nuanced statistical regularities, and this allows you to achieve the desired compression ratio of a hundred to one.

Now, losing your Internet access isn’t quite so terrible; you’ve got all the information on the Web stored on your server. The only catch is that, because the text has been so highly compressed, you can’t look for information by searching for an exact quote; you’ll never get an exact match, because the words aren’t what’s being stored. To solve this problem, you create an interface that accepts queries in the form of questions and responds with answers that convey the gist of what you have on your server.

What I’ve described sounds a lot like ChatGPT, or most any other large language model. Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.

This analogy makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation—that is, estimating what’s missing by looking at what’s on either side of the gap. When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them. (“When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .”) ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
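
In code, that interpolation step is nothing more exotic than this one-dimensional sketch:

```python
# Reconstructing a missing sample by averaging its neighbours -- the
# interpolation described above, reduced to one dimension for brevity.
row = [120, 124, None, 132, 136]         # one pixel lost to compression
i = row.index(None)
row[i] = (row[i - 1] + row[i + 1]) // 2  # estimate from either side of the gap
print(row)                               # [120, 124, 128, 132, 136]
```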

Given that large language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large language models, but there is another aspect to the compression analogy that is worth considering. Since 2006, an A.I. researcher named Marcus Hutter has offered a cash reward—known as the Prize for Compressing Human Knowledge, or the Hutter Prize—to anyone who can losslessly compress a specific one-gigabyte snapshot of Wikipedia smaller than the previous prize-winner did. You have probably encountered files compressed using the zip file format. The zip format reduces Hutter’s one-gigabyte file to about three hundred megabytes; the most recent prize-winner has managed to reduce it to a hundred and fifteen megabytes. This isn’t just an exercise in smooshing. Hutter believes that better text compression will be instrumental in the creation of human-level artificial intelligence, in part because the greatest degree of compression can be achieved by understanding the text.
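
The ratio itself is trivial to measure. A toy version with zlib (the same DEFLATE family as the zip format, run on made-up text rather than Hutter's Wikipedia snapshot) looks like this:

```python
import zlib

# Measure a compression ratio on highly repetitive text; real knowledge,
# as Hutter argues, would let a compressor do far better than repetition alone.
sample = b"supply is low, so prices rise. " * 1000
ratio = len(sample) / len(zlib.compress(sample, 9))
print(f"compressed {ratio:.0f}x")
```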

To grasp the proposed relationship between compression and understanding, imagine that you have a text file containing a million examples of addition, subtraction, multiplication, and division. Although any compression algorithm could reduce the size of this file, the way to achieve the greatest compression ratio would probably be to derive the principles of arithmetic and then write the code for a calculator program. Using a calculator, you could perfectly reconstruct not just the million examples in the file but any other example of arithmetic that you might encounter in the future. The same logic applies to the problem of compressing a slice of Wikipedia. If a compression program knows that force equals mass times acceleration, it can discard a lot of words when compressing the pages about physics because it will be able to reconstruct them. Likewise, the more the program knows about supply and demand, the more words it can discard when compressing the pages about economics, and so forth.
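
That calculator example can be written down directly. A hypothetical archive of a million arithmetic lines collapses into a few lines of Python that can regenerate any of them, including examples the original file never contained:

```python
import operator

# Instead of storing a million lines like "245 + 821 = 1066", store the rule
# that generated them: a tiny "calculator" program.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def reconstruct(a: int, op: str, b: int) -> str:
    return f"{a} {op} {b} = {OPS[op](a, b)}"

# Any line of the original file can now be rebuilt on demand.
print(reconstruct(245, "+", 821))
print(reconstruct(17, "*", 23))
```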

Large language models identify statistical regularities in text. Any analysis of the text of the Web will reveal that phrases like “supply is low” often appear in close proximity to phrases like “prices rise.” A chatbot that incorporates this correlation might, when asked a question about the effect of supply shortages, respond with an answer about prices increasing. If a large language model has compiled a vast number of correlations between economic terms—so many that it can offer plausible responses to a wide variety of questions—should we say that it actually understands economic theory? Models like ChatGPT aren’t eligible for the Hutter Prize for a variety of reasons, one of which is that they don’t reconstruct the original text precisely—i.e., they don’t perform lossless compression. But is it possible that their lossy compression nonetheless indicates real understanding of the sort that A.I. researchers are interested in?

Let’s go back to the example of arithmetic. If you ask GPT-3 (the large-language model that ChatGPT was built from) to add or subtract a pair of numbers, it almost always responds with the correct answer when the numbers have only two digits. But its accuracy worsens significantly with larger numbers, falling to ten per cent when the numbers have five digits. Most of the correct answers that GPT-3 gives are not found on the Web—there aren’t many Web pages that contain the text “245 + 821,” for example—so it’s not engaged in simple memorization. But, despite ingesting a vast amount of information, it hasn’t been able to derive the principles of arithmetic, either. A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic. The Web certainly contains explanations of carrying the “1,” but GPT-3 isn’t able to incorporate those explanations. GPT-3’s statistical analysis of examples of arithmetic enables it to produce a superficial approximation of the real thing, but no more than that.

Given GPT-3’s failure at a subject taught in elementary school, how can we explain the fact that it sometimes appears to perform well at writing college-level essays? Even though large language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory. Perhaps arithmetic is a special case, one for which large language models are poorly suited. Is it possible that, in areas outside addition and subtraction, statistical regularities in text actually do correspond to genuine knowledge of the real world?

I think there’s a simpler explanation. Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

A lot of uses have been proposed for large language models. Thinking about them as blurry JPEGs offers a way to evaluate what they might or might not be well suited for. Let’s consider a few scenarios.

Can large language models take the place of traditional search engines? For us to have confidence in them, we would need to know that they haven’t been fed propaganda and conspiracy theories—we’d need to know that the JPEG is capturing the right sections of the Web. But, even if a large language model includes only the information we want, there’s still the matter of blurriness. There’s a type of blurriness that is acceptable, which is the re-stating of information in different words. Then there’s the blurriness of outright fabrication, which we consider unacceptable when we’re looking for facts. It’s not clear that it’s technically possible to retain the acceptable kind of blurriness while eliminating the unacceptable kind, but I expect that we’ll find out in the near future.

Even if it is possible to restrict large language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.

There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large language models and lossy compression is useful. Repeatedly resaving a JPEG creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.

Indeed, a useful criterion for gauging a large language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either. Conversely, if a model starts generating text so good that it can be used to train new models, then that should give us confidence in the quality of that text. (I suspect that such an outcome would require a major breakthrough in the techniques used to build these models.) If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.

Can large language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question. There is a genre of art known as Xerox art, or photocopy art, in which artists use the distinctive properties of photocopiers as creative tools. Something along those lines is surely possible with the photocopier that is ChatGPT, so, in that sense, the answer is yes. But I don’t think that anyone would claim that photocopiers have become an essential tool in the creation of art; the vast majority of artists don’t use them in their creative process, and no one argues that they’re putting themselves at a disadvantage with that choice.

So let’s assume that we’re not talking about a new genre of writing that’s analogous to Xerox art. Given that stipulation, can the text generated by large language models be a useful starting point for writers to build off when writing something original, whether it’s fiction or nonfiction? Will letting a large language model handle the boilerplate allow writers to focus their attention on the really creative parts?

Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.

And it’s not the case that, once you have ceased to be a student, you can safely use the template that a large language model provides. The struggle to express your thoughts doesn’t disappear once you graduate—it can take place every time you start drafting a new piece. Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.

There’s nothing magical or mystical about writing, but it involves more than placing an existing document on an unreliable photocopier and pressing the Print button. It’s possible that, in the future, we will build an A.I. that is capable of writing good prose based on nothing but its own experience of the world. The day we achieve that will be momentous indeed—but that day lies far beyond our prediction horizon. In the meantime, it’s reasonable to ask, What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry JPEG, when you still have the original?

Permalink
June 20, 2023 at 4:48:49 PM GMT+2

La publicité, ou comment transformer l’être humain en rouage docile de la machine capitaliste – réveil-mutinhttps://reveilmutin.wordpress.com/2015/06/07/la-publicite-une-entreprise-totalitaire/

  • Enshitification
  • Mass Consumption
  • Big Corpo
  • Societal Collapse
  • Ads Madness
  • Attention Economy

Advertising, or how to turn the human being into a docile cog in the capitalist machine

"Advertising is everywhere, so much so that it has become invisible to the homo economicus we have become. We believe we control it when in fact we endure it. It is advertising that plays with us, imposing itself to shape our behaviors and our environment."

"We are not challenging advertising as the mere provision of commercial information to the public; what we refuse is the violence and manipulation of advertising techniques, which no one can escape and which ceaselessly spread the dominant ideology."

Advertising: a barrage no one is meant to escape

"Advertising goes hand in hand with saturation bombardment (a point it shares with the techniques of dictatorial regimes). It is totalitarian because it seeks to invade the totality of space: (often oversized) billboards that take over our surroundings, posters covering entire buildings, ten identical ads in a row in the metro, 20-minute advertising 'tunnels' on television or the radio, and so on. It is becoming difficult to rest your gaze on a landscape free of advertising. It worms its way into films on television or at the cinema, interrupting them or, more discreetly, penetrating them, since companies pay filmmakers to show their products.

Advertising invades time as well. It sets the rhythm of all collective life in the key of consumption, with commemorations, holidays, and events real or contrived. Anything goes to drive consumption. In the same way, advertising seizes on every age of individual existence to turn it into a moment of consumption: children hungry for toys, teenagers covered in brand names, men motorizing their lives and becoming fans of gadgets of every kind, women targeted through fashion and beauty but also as the housewife or the mother, and the elderly urged to prepare for their future. If our space is taken over by advertising, our time is paced by advertising slogans, a sign that our civilization is sliding slowly but surely toward an advertising totalitarianism, since it is difficult to escape it (head-on or indirectly (…))."

An essential link in the capitalist system

"Advertising is not neutral, for it exists to promote the capitalist system and a model of society founded on consumption. It helps dismantle our capacity for thought the better to alienate us. The omnipresence of advertising aims to reduce the totality of the human being to the single dimension of consumption.

Far from being synonymous with freedom, advertising helps shape the dominant ideology and prop it up so that everyone does their part for the Growth effort. It is an essential link in economic activity. Advertising manufactures need and hammers home, not to say imposes, a way of life. It is an essential cog for creating ever more needs, rendering products obsolete, and renewing our imagination so as to feed us novelty and keep the growth machine supplied by showcasing commercial goods and services. Advertising manipulates us more than it informs us."

Shaping our thoughts and behaviors to fit the imperative of Growth

"Advertising's grip on society is insidious, for it makes consumption a foundation of our society by training us to think in terms of consumption. Above all, beyond making us spend (dépenser), advertising aims to make us dé-penser, to un-think, drip-feeding capitalist ideology so discreetly and slyly that we still do not realize we are being indoctrinated. It is complicit in a vast manipulation designed to commodify everything and entrench the Growth society: feeding it endlessly with novelty, creating new needs, and decreeing obsolescence. It develops an ideology and a way of life that are anything but neutral, for they serve capitalism directly."

"[Advertising] constantly resorts to the sophistries of the image, falsifies the meaning of words, and dismantles logical order. From the earliest age, this language shapes modes of thought that are the opposite of Cartesian reason. The advertising rule that the product must be turned into a spectacle thus leads the child to confuse the image with the thing, the visible with the real. Then comes the reign of the culture of conflation: advertising practices a rhetoric of association whereby any value whatsoever can be attached to any product whatsoever (the beauty of sport, say, to the celebration of alcoholic drinks). Advertising's rhythms (jingles, shocks, and thrills) and its chaotic, 'demented' editing, which try to grab rather than convince, seizing us viscerally to bypass our conscious resistance, make reflex behavior prevail over any considered approach. Together with media frenzy and the trappings of the society of the spectacle, what advertising weaves into the 'collective consciousness' is an entire relation to the world founded on sensory assent and the refusal to think. This mode of thought establishes the reign of the unthought."

"The sheer quantitative omnipresence of the advertising phenomenon brings about (…) a qualitative change in the way it imposes its models: its norm comes to look 'normal.' To get its message across more smoothly, advertising has learned to abandon the imperative mood for the indicative. The indicative is ultimately more coercive than the imperative, since you cannot distance yourself from its orders. What's more, this normality is presented as common to all: the collectivity seems to have already submitted to it. Models of consumption thus become all the more coercive for being assumed to be massively widespread."

A totalitarian enterprise

"Advertising, beneath its festive and cheerful, likeable and funny exterior, is a dangerous propaganda that shatters, image after image, the meaning of life."

"Advertising is a machine for breaking the human person. It no longer wants humans or citizens; it wants consumers. Advertising reduces each of us to a single means: consumption. Advertising imposes on us the false idea that the sole meaning of life is consumption."

"[The] advertising phenomenon is not a mere sum of scattered ads: it is a system. And this system, observed closely, tends to absorb the entire field of human activity, including its most immaterial aspects, into consumption alone. On two levels, that of quantitative invasion and that of ideological penetration, advertising is indeed a totalitarian enterprise."

"Advertising takes an active part in manufacturing a model of society, dreaming up a formatted happiness founded on consumption. It celebrates the order of the 'decision-makers' and the bliss of useless consumption. Beyond manipulation and contempt, advertising reveals a cynical philosophy that intends to turn the world into merchandise. The voice of advertising is insidious because it is invisible and gentle; we are raised on it, or it makes us dream, and it helps impose a 'quiet totalitarianism': a way of life in which the norm is the rule, failing to respect it means exclusion, and the victims are the executioners, without our being able to pass judgment on ourselves. An essential joint of capitalist society, advertising does more than support society: it steers it."

"One might have thought that unemployment, exclusion, and poverty would curb the display of advertising discourse and silence the sirens of overconsumption. Not at all. Who cares about the social 'divide,' since the message is aimed at the well-off majority! Who cares if hundreds of thousands of individuals are forced to contemplate, every day, models of existence made inaccessible to them by their exclusion! No one is moved by this daily violence. After all, why deny the poor the chance to dream of what the rich possess: isn't that what is done in the Third World? To the economic order, whose effect is to exclude the poor, is now added the advertising order, whose function is to make us forget them."

Debasing the human being, reducing them to a mere cog

"There is no such thing as an 'advertising culture'; advertising is anti-culture. Where culture humanizes us and re-enchants the world, advertising reduces the human being to a digestive tract whose sole function is to consume. With culture, people become autonomous; with advertising, the more one is exposed to it, the more 'hooked' and conditioned one becomes. Far from encouraging individuals to think, advertising infantilizes our actions from childhood on, by building brand loyalty in children. It is so easy to exploit the dreams of the young commercially that it amounts to training, so firmly does what is instilled early remain an untouchable value for the individual thus formatted. Advertisers openly aim to condition us and to occupy our psychic space. Remember the former head of TF1 (the most-watched television channel in Europe) who explained that his job was to produce programs that prepare minds for the advertising message."

"Advertising is (…) a monstrous formatting operation that aims to make the individual regress, the better to produce a docile being stripped of critical sense and easy to manipulate. Beyond its immediate commercial function, advertising is thus a genuine battleground, for it contributes to human regression in both action and thought. It demotes people to the rank of mere consumers, who begin by consuming products, then consume other humans (management, violence, exploitation of every kind...), and end up consuming themselves (doping, cults, cosmetic surgery...)."

Standardizing ways of life and destroying cultures

"Advertising, far from being a culture in itself, destroys existing cultures in order to standardize needs and behaviors. It fashions identities common to all while simulating differences that are mostly devices for making us believe we have a choice.

Indeed, the diversity of the world's cultures is a nuisance to advertising, since it can be seen as a brake on advertisers' thirst for profit. Advertising wants to destroy cultures by imposing standardized products and ways of life over the entire surface of the Earth. Everyone on the planet is to consume identically, and to consume a lot. Advertising cannot stand peoples who want to make the diversity of their culture a source of wealth. Advertising wants to create a world that is not universal but uniform, all while deceptively glorifying difference when it dreams only of indifferentiation."

Immense pollution and waste

"The omnipresence of advertising oppresses us just as it conditions us. It normalizes our behaviors and amounts to both an illicit occupation of our space and an intolerable waste, all the more so since it is usually neither wanted nor expected.

Advertising, by its very essence, contributes to the plunder carried out by disaster capitalism, in particular of natural resources. Moreover, in churning out its noxious verbiage, advertising pollutes in multiple ways, notably visually and energetically."

"Intrinsically, advertising takes part in the planetary pillage organized by capitalism. The advertising system monopolizes ever more of the public space. It parasitizes cultural activities and perverts sporting events. Advertising also sacrifices health and the ecosystem to commerce, hides health consequences, and scoffs at the precautionary principle (selling any product, whatever the risk). It incites overconsumption, waste, and pollution, and often glorifies irresponsible and individualistic behavior. It is a source of over-indebtedness, delinquency, and violence for the poorest and for those most receptive to its intrusive discourse."

"Advertising has always favored the billboard, to the point that advertising posters are now embedded in our cities, our countryside, and our transport networks. They are omnipresent, gigantic, often illuminated, and thus devour public space and energy.

This graphic debauchery assaults the eye, defiles our living environment, curtails our freedom of thought, and limits our capacity to dream. The confiscation of public space and its commercial exploitation are all the more unacceptable given that French law describes landscapes as a 'common asset of the nation' and that the provisions governing billboard advertising sit in Book V of the Code de l'environnement, entitled 'Prevention of pollution, risks and nuisances.' Even the legislator, in other words, treats billboard advertising as pollution! Through billboards, the advertising system intrudes into our daily lives in the most visible, and also the most violent, way."

Getting out of advertising to get out of the growth society

"To exit the growth society, exiting advertising is a mandatory first step. Fighting advertising is therefore, above all, an ideological battle."

Excerpts from the article « Sortir de la publicité » ("Getting out of advertising") by Christophe Ondet.

(The full article: http://www.projet-decroissance.net/?p=342 )

Permalink
June 20, 2023 at 4:41:22 PM GMT+2

Why Does Everything On Netflix Look Like That?
https://www.vice.com/en/article/ake3j5/why-does-everything-on-netflix-look-like-that

  • Cultural Downfall
  • Cultural Uniformism
  • Enshitification

Why Does Everything On Netflix Look Like That?

Even if you don’t know what to look for, you’ll probably be able to guess if something was created for Netflix just based on a few frames.

Against all odds, Netflix’s adaptation of The Sandman is a very good show. But why does it look like that?

You know what I’m talking about: the so-called “Netflix Look.” Netflix’s in-house produced television shows and movies tend to all have the same look and feel, to the point that it’s sometimes really distracting. Although it’s hard to pinpoint exactly what makes all Netflix shows look the same, a few things stand out. The image is generally dark, and the colors are extremely saturated. Especially in night scenes, there tends to be a lot of colored lighting, making everything look washed in neon even when the characters are indoors. Actors look like the makeup is caked on their faces, and details in their costumes, like puckering seams, are unusually visible. Most annoying to me, everything is also shot in an extremely conventional way, using the most formulaic setups possible: to signal that something weird is going on, the framing always gets a dutch angle, for example, or, more often, everyone is simply shot in a medium close-up.

Much like you can instantly recognize a Syfy channel production by its heavy reliance on greenscreen and inexpensive computer-generated special effects, or a Hallmark movie by its bright, fluffy, pastel look, Netflix productions have their own recognizable aesthetic. Even if you don’t know what to look for, it’s so distinct that you’ll probably be able to guess whether or not something was created for Netflix just based on a few frames.

The Sandman, despite its great writing and great acting, suffers from these aspects of the Netflix look. Although the main character’s domain is the world of dreams, the show’s dramatic moments are often reduced to scenes of characters talking in a medium close-up. Fans have also gotten frustrated by the aspect ratio, which makes the frames look like they’ve been stretched upward. Tom Sturridge’s face looks especially made up as Dream; his lips are so red they’re almost distracting. Worst of all are the muddy colors, especially because the comic The Sandman is adapting had such an exuberant color palette.

J. D. Connor, an associate professor in Cinema and Media Studies at USC, told Motherboard that the reasons for the Netflix look are varied, but one important reason is that Netflix requires some basic technical specifications from all its productions: which cameras to use, minimum requirements for image resolution, and what percentage of a production can be shot on a non-approved camera.

“It started as a big topic in the cinematographer community,” Connor told Motherboard in a phone call. “Netflix had an accepted camera list for its Netflix branded products. The initial list, while there were ostensibly open parameters for what cameras might qualify, there really were only like two. And yes, you can do a ton within those parameters. But it meant that this was one way that the uniformity emerged, was through their real insistence on that.”

Netflix’s list of approved cameras on their Partner Help Center website now has a lot more cameras than just two. The company explained in a video why it has a list of approved cameras, with Netflix camera systems specialist Krys Pyrgrocki saying, unhelpfully, “One of the biggest priorities for us as a studio is helping our filmmakers do their very best work. We want our filmmakers to not just feel enabled, but also encouraged to use the latest and greatest capture technologies out there to tell their stories.”

Connor says that these cameras are important to Netflix products beyond just wanting creators to use new technology.

“The other thing that really drove a lot of this was, they did what they call future proofing their content. They wanted it all to be shot in 4K HDR, ” he said.

It isn’t a totally unreasonable idea to want Netflix content to still look good as 4K televisions become more common, but it does limit filmmakers’ options in terms of what technology they can actually use. 4K video files are also extremely large, and when they’re compressed for streaming, that compression changes how the image looks to the streamer. It’s also important to note that Netflix, which charges customers more for the full 4K experience (a basic subscription costs $9.99 a month while the Premium “Ultra HD (4K)” subscription costs $19.99 a month), has a financial incentive to increase the amount of 4K content in its catalog.

“When it gets compressed, and jams through the cable pipe, or the fiber to get to your television, Netflix takes as much information out of that as they can through compression in order to reduce the amount of data that's going through, so you have a smoother streaming experience,” he said. “One of the weird things that happens when you have a very high resolution image, in general, when you shrink the amount of information the edges get sharper.”

Connor said to think about it in terms of movies from the 70s, whose visual effects look great on a huge screen, because the film grain blurs some of the details, but much worse on a smaller television.

“But when you take a movie like the original Superman or something and put it on television, all the edges get really sharp, all the blue screen looks really hacky,” he said. ”Something quite similar happens when you take a big 4K image and you jam it through a massively compressed amount of data to put it on TV.”
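To get a rough sense of the scale involved, here is a quick back-of-the-envelope sketch in Python. The numbers are illustrative assumptions on my part (uncompressed 10-bit 4K at 24 fps against a typical streaming bitrate of around 15 Mbps), not figures from Connor, Motherboard, or Netflix:

    # Ballpark illustration of how much image data streaming compression discards.
    # All figures are assumptions for illustration, not Netflix's actual specs.
    width, height = 3840, 2160            # 4K UHD frame
    bits_per_pixel = 3 * 10               # three 10-bit color channels
    fps = 24
    raw_mbps = width * height * bits_per_pixel * fps / 1e6   # uncompressed data rate
    stream_mbps = 15                      # assumed 4K streaming bitrate

    print(f"Uncompressed 4K: ~{raw_mbps:,.0f} Mbps")
    print(f"Streamed 4K:     ~{stream_mbps} Mbps")
    print(f"About {raw_mbps / stream_mbps:,.0f}:1 reduction; "
          f"~{(1 - stream_mbps / raw_mbps) * 100:.2f}% of the raw data is discarded")

On those assumptions the encoder throws away more than 99 percent of the raw image data, which is why its choices, like smoothing gradients and hardening edges, become such a visible part of the final picture.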

All of this helps explain why Netflix productions look uncanny. But some of the unpolished details come down to a more mundane issue: money.

Connor described the budgets on Netflix projects as high, but in an illusory way: in the age of streaming, “above the line” talent like big-name actors and directors take a larger share of a project’s budget up front, because they won’t get any backend compensation from the profits of the film or television show.

“They're over compensated at the beginning,” Connor said. “That means that all of your above the line talent now costs, on day one that the series drops, 130 percent of what it costs somewhere else. So your overall budget looks much higher, but in fact, what's happened is to try to save all that money, you pull it out of things like design and location.”
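As a toy illustration of the squeeze Connor describes (only the 130 percent premium comes from his quote; the 50/50 budget split is a hypothetical of mine), watch what happens to the rest of a fixed budget:

    # Toy model of the above-the-line budget squeeze Connor describes.
    # The 130% premium is his figure; the 50/50 split is a hypothetical assumption.
    total_budget = 100.0                  # arbitrary units
    atl_share_elsewhere = 0.50            # assumed above-the-line share at a rival studio
    atl_cost = total_budget * atl_share_elsewhere * 1.30   # paid up front, no backend

    below_the_line = total_budget - atl_cost
    print(f"Above the line:                   {atl_cost:.0f}")         # 65
    print(f"Left for design, locations, post: {below_the_line:.0f}")   # 35, down from 50

On these assumed shares the headline budget looks the same as a rival’s, but the money actually reaching sets, costumes, and locations falls from 50 units to 35, a 30 percent cut.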

“So the pandemic hurts, the technology of capture and then post production standardization hurts, the budget point squeezes all the design side stuff, and that hurts,” Connor continued.

Connor pointed out that there are many projects on streaming services that skimp on things like production design, and that some of this is due to ongoing impacts from the pandemic. But it can be particularly noticeable in Netflix productions because it happens so often.

“Red Notice to me is like the pinnacle of this sort of thing I’m talking about. It cost a fortune because they had to pay the stars a ton. It was shot in the pandemic, so they're cutting around absences in ways that are at times very, very funny,” Connor continued. “And the whole thing just looks when I watched it on my TV, and I have a fairly good TV, I thought it looked just horrible, beginning to end. A sort of brutal experience.”

That’s not to say that the Netflix look is always bad. There are a lot of kinds of projects that Netflix makes, ranging from the prestige work of Martin Scorsese to schmaltzy young adult fare like The Kissing Booth. When you’re making a young adult romance story, the Netflix look doesn’t feel totally out of place. In fact, it’s not too far off from what shows produced for the CW, like Riverdale, already look like. When you’re watching The Sandman, which is based on a beloved and very experimental comic, it comes off as totally incongruous with the story that they’re trying to tell. The technical specifications that Netflix enforces on its productions wouldn’t feel so out of place in a different genre of story.

“It all, kind of, totally works with the Adam Sandler comedies,” Connor said. “The budget point is fine, because Adam Sandler gets all the money, and like, the things just look fine. Nobody is making really theatrical comedies anymore, that whole market segment is just vaporized. And you know, I kind of want to live in a world where there’s a Hubie Halloween rolling out in mid October in my theaters but like, barring that…”

Television and movies, generally speaking, don’t have to look like this. Connor repeatedly mentioned Tokyo Vice as an example of a show with particularly rich production design, and other works on HBO, like the drama Station Eleven and the comedy Rap Shit, also put a great deal of time and care into their visual presentation. The Bear on Hulu, nominally a comedy, is extremely considered in how it frames its characters and builds out its kitchen set with personal details. As streaming television, these shows will always suffer from what compression does to their images, but they’re shot in ways that make it less noticeable to the streamer on the other side.

Permalink
June 20, 2023 at 4:20:08 PM GMT+2

The contagious visual blandness of Netflix
https://haleynahman.substack.com/p/132-the-contagious-visual-blandness

  • Enshitification
  • Mass Consumption
  • Cultural Uniformism
  • Cultural Downfall

The contagious visual blandness of Netflix

Last week I saw M3GAN, the new horror-comedy starring Allison Williams and a robot-doll in a blond wig. I liked it enough. The doll character is genuinely well-done—a seemingly hard-to-nail mix of creepy and campy—but I walked out of the theater with a vaguely empty feeling. I couldn’t quite place it until I started talking with my friends about where the movie was set, and I realized I had no idea. One answer is somewhere in Silicon Valley, given its bald critique of big tech. It didn’t actually feel like Silicon Valley, though. It didn’t feel like anywhere at all. (Update: I’ve been informed it’s set in Seattle, although it didn’t feel like there either.) Every backdrop was generic and crisp: the scrubbed tech-compound where Gemma (Allison Williams) works; the bland, Wayfair-decorated house she lives in; the clean, non-specific streets she drives on. I thought little of this while watching. The movie looked expensive and professional, or at least had the hallmarks of those things: glossy, filtered, smooth. Only after it ended did it occur to me that it seemed, like so many other contemporary movies and shows, to exist in a phony parallel universe we’ve come to accept as relevant to our own.

To be clear, this isn’t about whether the movie was “realistic.” Movies with absurd, surreal, or fantastical plots can still communicate something honest and true. It’s actually, specifically, about how movies these days look. That is, more flat, more fake, over-saturated, or else over-filtered, like an Instagram photo in 2012, but rendered in commercial-like high-def. This applies to prestige television, too. There are more green screens and sound stages, more CGI, more fixing-it-in-post. As these production tools have gotten slicker and cheaper and thus more widely abused, it’s not that everything looks obviously shitty or too good to feel true, it’s actually that most things look mid in the exact same way. The ubiquity of the look is making it harder to spot, and the overall result is weightless and uncanny. An endless stream of glossy vehicles that are easy to watch and easier to forget. I call it the “Netflix shine,” inspired by one of the worst offenders, although some reading on the topic revealed others call it (more boringly) the “Netflix look.”

In a 2022 Vice piece called “Why Does Everything on Netflix Look Like That,” writer Gita Jackson describes the Netflix look as unusually bright and colorful, or too dark, the characters lit inexplicably by neon lights, everything shot at a medium close-up. Jackson discovered this aesthetic monotony is in part due to the fact that Netflix requires the same “technical specifications from all its productions.” This is of course an economic choice: more consistency = less risk. They’ve also structured their budgets to favor pre-production costs like securing top talent. So despite the fact that their budgets are high, they’re spending it all on what is essentially marketing, pulling resources away from things like design and location. This style-over-substance approach is felt in most things Netflix makes, and it’s being replicated across the industry. (For more proof of concept, Rachel Syme’s recent New Yorker profile of Netflix Global Head of Television Bela Bajaria is perfectly tuned and genuinely chilling. I’m still thinking about her “Art is Truth” blazer and lack of jet lag despite constant world travel. She’s a walking metaphor.)

I’m not a film buff, so I write this from a layman’s perspective. But every time I watch something made before 2000, it looks so beautiful to me—not otherworldly or majestic, but beautiful in the way the world around me is beautiful. And I don’t think I’m just being nostalgic. Consider these two popular rom-com movie stills: the first from When Harry Met Sally, shot on film in 1989; the second from Moonshot, shot digitally in 2022.

The latter is more polished and “perfect,” but to what effect? It looks strange, surreal, both dim and bright at the same time. Everything is inexplicably blue or yellow, and glows like it’s been FaceTuned. Meg Ryan and Billy Crystal, meanwhile, are sitting in a downtown New York deli that actually exists. The image is a little grainy, the lighting falling somewhere in the normal daytime range, and they look like regular human beings. The table’s lopsided, the kitchen’s bent out of shape—the charm is earned. Today the restaurant might be built on a sound stage, or shot in front of a green screen, the appearance of daylight added in post-production. They could make it look convincing and moody, but it would lack character. It would feel somehow outside the world we inhabit every day, because it would be.

At the risk of using an anonymous Redditor as an expert, lol, I found a comment under a thread called “Why do movies look so weird now?” that captures a lot of these same complaints:

“Everyone is lit perfectly and filmed digitally on raw and tweaked to perfection. It makes everything have a fake feeling to it. Commercials use the same cameras and color correction so everything looks the same. Every shot looks like it could be used in a stock photo and it looks completely soulless. No film grain, no shadows on faces, and no wide shots. I have a theory that going from tungsten to LED lighting added to this as well. Tungsten allows for more accurate color in camera but LEDs are cheaper, cooler, and more convenient. So the solution is to film on a nice digital camera and fix the color in post. However, this makes for less creativity on set and less use of shadows. Green screens make it worse as they also require flatter lighting to work. Marvel films are very obviously mostly made in post and they all look very flat and not real. Even shitty low-budget 90's comedies look better and I think this can be attributed to the lighting.”

Another user mentioned that shooting on film required a level of forethought, planning, and patience that digital simply doesn’t. Similar to the predicament brought on by smartphone cameras and our now-endless photo rolls, the result is more, sure, and at higher fidelity, but not necessarily better. A photo today has never been worth less. I’ve long believed that constraints can improve creative work. But today’s shrinking production budgets, paired with the limitlessness of computer technology, aren’t inspiring scrappiness. They’re inspiring laziness. It’s too easy to fix things in post. Why wait around all day for the light to be just right when you can make it look half as good in Final Cut Pro for half the price? There’s an expansive possibility to digitization that defies the logic of constraint.

That the film and TV industry is obsessed with making as much money as possible isn’t a surprise. But as with any cost-cutting strategy, the approach is necessarily an expression of priorities. What’s worth the trouble? What isn’t? Looking at what studios are and aren’t willing to spend on today paints a pretty unflattering (if predictable) picture of modern values. And what’s interesting is how recognizable those values are across other pillars of culture. To name a few: the idea that imperfection is inhibitive to beauty; an over-emphasis on growth, speed, ease, and innovation; a cynical over-reliance on marketing; a lack of interest in locality and place; the funneling of resources to the top; the focus on content over form, entertainment over art. I could be talking about anything here—the beauty and cosmetics industry, tech, corporate America, manufacturing, social media, politics, labor disputes.

I’m not saying the proliferation of shitty-looking shows and movies will bring about our cultural downfall, only that they express, in a satisfyingly literal way, a specific wrong-think that’s pervading our off-screen lives, too. Most usefully, their hollowness offers, by way of counter-example, a key to what does feel meaningful: texture, substance, imperfection, slowing down, taking the scenic route, natural light, places you can touch, making more considered creative choices, making less. There’s a certain momentum to the mid right now, but there are other ways forward, if we’re willing to indulge them.

Permalink
June 20, 2023 at 4:08:24 PM GMT+2