31 results tagged Social Network

How Russia influences elections in France with an army of bots
https://www.futura-sciences.com/tech/actualites/guerre-futur-russie-influence-elections-france-grace-armee-bots-114424/

  • War
  • Disinformation War
  • Media Manipulation
  • Social Network
  • PsyOps

How Russia influences elections in France with an army of bots

Dr. Chavalarias’s study shows that influence campaigns on X seek to weaken the republican front to the benefit of the far right. © Login, Adobe Stock (AI-generated image)

A study by a CNRS researcher reveals the destabilization operations run by the Kremlin on social networks to boost the far right in France. Are the narratives pushed by Russia enough to sway voters’ opinions?

By Sylvain Biget, July 5, 2024, for Futura-Sciences

“Dripping water hollows out stone, not by force but by falling often.” This old KGB motto remains current under a President Putin who is in effect elected for life and who is betting on the long game, influencing elections in Western democracies one small step at a time. The battlefield for Russian interference operations is social media, and that is nothing new. The driving idea is to weaken the European Union and NATO by manipulating public opinion in the member states.

To get there, the Kremlin seeks to help elect leaders who are less hostile to the Russian regime. That is the case of the French far right, and in particular of the RN (Rassemblement National), which benefited from a Russian loan for its campaigns and has systematically opposed or abstained whenever it came to supporting the Ukrainian resistance or imposing sanctions on Russia after the invasion of Ukraine.

The tactic has been known for a long time: as early as the 2017 presidential election, the email accounts of Emmanuel Macron’s campaign team were hacked in an attempt to discredit the candidate. The operation was backed by armies of bots running intensive astroturfing campaigns to amplify a narrative favorable to the election of Marine Le Pen. Now a recent study by CNRS researcher David Chavalarias analyzes the destabilization techniques the Kremlin is using to steer voters toward the RN in the snap legislative elections.

The author explains that as early as 2023, during the European election campaigns, targeted ads were bought on Facebook to push messages attacking governments or Western countries’ support for the Ukrainian resistance. These campaigns intensified as the elections approached. For example, fake recruitment ads for French soldiers to go and fight in Ukraine circulated widely. They were designed to amplify Emmanuel Macron’s remarks about sending troops to Ukraine and paint him as a warmonger.

A graphical representation of the political communities. Parties of the left appear in shades of red; those of the right in blue. The filaments represent the flows of exchanges and shared posts between X accounts. © CNRS

An overestimated capacity for harm?

But it is on X that the maneuvers of Russian bots and of the propagators of pro-Russian narratives are most plentiful. Since 2016, an army of fake accounts has been steadily pushing concepts built on divisive keywords. That is how the term “islamo-gauchiste” (“Islamo-leftist”), which came out of nowhere, was propelled into circulation, picked up by government ministers and turned into a subject of debate.

More broadly, according to the researcher, the Kremlin has pursued three strategies: normalizing the far right, making sure the parties of the republican front can no longer work together, and above all fostering voters’ rejection of the moderate parties so that they vote for the far right. To that end, Russian bots also ride the news cycle, notably the war in Gaza, by circulating horrific images of the Hamas attack of October 7.

It is a way to stoke Islamophobia, point the finger at the antisemitism of certain parties and escalate the radical rhetoric between the far right and the far left. To heighten the effect, bots promoting political Islamism were created. These operations, which are still under way, continue to aim at rallying as many voters as possible to the RN for the second round of the legislative elections this Sunday.

The researcher’s analysis is solid, but it remains to be seen whether this Kremlin strategy really weighs on voters’ choices. There is as yet no conclusive study of the concrete effects of this kind of manipulation on the public. Without underestimating the power of these operations, the rise of the Bolloré group’s network of conservative media outlets can, on its own, explain a great deal.

Permalink
July 7, 2024 at 4:05:53 PM GMT+2

Testimonies. Inside the hell of the social media “cleaners”
https://www.asahi.com/articles/ASS4W4287S4WUTIL01YM.html?iref=pc_ss_date_article

  • Gig Worker
  • Artificial intelligence
  • Social Network
  • Digital Society
  • Censorship

Testimonies. Inside the hell of the social media “cleaners”

As the tech giants try to tighten control over their platforms, “content moderators” are exposed to countless violent or hateful posts as part of their work. The Japanese daily Asahi Shimbun went to meet them.

Published on June 27, 2024, at 5:00 a.m. By Shiori Tabuchi and Azusa Ushio

Such videos proliferate on the web. Violence, threats, sexual acts… Yet moderators have only two or three minutes to decide whether or not to delete them.

We are in an office building in a city in Southeast Asia. In one room, sitting silently at their computers with headsets over their ears, content moderators, nicknamed the “cleaners” of social media, delete internet posts judged inappropriate.

Among them, a Japanese man employed by a subcontractor of a tech giant that runs a video-sharing site agreed to answer our questions, on condition that neither his name nor his age be revealed:

“I am not allowed to talk in detail about the content of my work.”

He works rotating eight-hour shifts in teams organized by language, for a monthly salary of about 200,000 yen [1,200 euros]. Bound by strict confidentiality, he may not bring his smartphone into the room, or even a simple pen.

When he arrives at his station, he switches on his two screens. On one, a video plays at high speed. The other displays the many moderation rules to apply, a document that seems to run to a thousand pages. When he spots prohibited content, he files the video under a category such as “violence,” “porn,” “harassment” or “hate,” then looks up the rule it violates and copies it into the comments field. “The essential thing is to find it as quickly as possible,” he explains.
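
Described as a procedure, his workflow is a small decision loop: match the video to a category, look up the rule it breaks, and paste that rule into the comment field. Here is a minimal sketch of that loop in Python; the category names come from his account, but the rule texts and data structures are hypothetical stand-ins, not any platform’s actual rules:

```python
# Hypothetical sketch of the moderation loop described above.
# Category names follow the moderator's account; the rule texts
# are invented placeholders, not any platform's actual rulebook.

RULEBOOK = {
    "violence": "Rule 4.2: graphic depictions of physical assault are prohibited.",
    "porn": "Rule 7.1: sexually explicit material is prohibited.",
    "harassment": "Rule 9.3: content targeting an individual with abuse is prohibited.",
    "hate": "Rule 11.5: attacks on protected groups are prohibited.",
}

def review(video_labels: set[str]) -> dict:
    """Decide what to do with one video, given what was spotted in it."""
    for category, rule in RULEBOOK.items():
        if category in video_labels:
            # Classify, then cite the exact rule in the comments field,
            # "as quickly as possible".
            return {"action": "delete", "category": category, "comment": rule}
    return {"action": "keep", "category": None, "comment": ""}

print(review({"violence"}))  # prohibited content: delete, citing the rule
print(review({"cooking"}))   # nothing prohibited: keep
```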

When he finishes checking one video, the next appears. Besides content flagged by users, “there are probably posts detected automatically by artificial intelligence (AI), but I don’t know how they are selected.”

A game of cat and mouse

If a video shows a person being beaten bloody or contains threats like “I’m going to kill him,” he deletes it immediately. When in doubt, he forwards the video to a specialized department. Of the roughly 80 videos he screens each day, he deletes about three. There are also a dozen or so he finds hard to judge. He does not know how many departments there are in all, or who makes the final decisions. “I proceed mechanically,” he says.

He remembers a spike in activity after the assassination by gunfire of former Prime Minister Shinzo Abe [in July 2022]. Images of the scene were posted again and again. “I was deleting the unblurred videos practically one after another.”

The moderation rules are numerous and detailed, and changes are announced each week at morning meetings. A database of taboo words is also provided. At the end of each workday, the moderators take a test of their knowledge of the latest rules: those who score poorly have their pay docked.

Deleted videos are frequently reposted, and some content slips through the net. Our moderator is aware of the criticism:

“We do our best, but it’s a game of cat and mouse. We cannot delete every video. The ones that are not flagged stay up.”

The tech giant that operates this moderation service used to argue that it merely provided a “place” for expression and was not responsible for what was posted there. But the proliferation of harmful posts forced it to react and tighten its surveillance.

The Digital Services Act (DSA), adopted by the European Union (EU), now obliges large internet platforms to remove harmful posts, notably discriminatory content and disinformation. While many are removed automatically by AI, some require human intervention. According to the reports the European Commission asked the tech giants to submit last October, Facebook removed nearly 47 million pieces of rule-breaking content in Europe in the five months from the end of April 2023. Of these, 2.83 million, or 6%, were removed by moderators.

“Soldiers of the networks”

Facebook employs about 15,000 moderators and X about 2,300. TikTok has about 40,000, tasked in particular with checking popular videos that pass a certain number of views and removing those that are problematic.

“Moderators are the soldiers working in the shadows of social media,” says Kauna Malgwi, 30, who now lives in Abuja, the capital of Nigeria. Five years ago, a single mother in a precarious situation, she left to study in Kenya. There she accepted what was presented as an “interpreter position in a ‘customer service’ department using Hausa,” one of the most widely spoken languages in West Africa. In reality, she found herself working as a moderator for Meta, which operates Facebook and Instagram. Alongside her graduate studies, for about four years, until March 2023, she worked nine hours a day, five days a week, for the Kenyan branch of a subcontractor of the American tech giant.

A traumatic experience

The first video she screened showed a man falling from the 15th floor of a building. At the appalling sight of the body hitting the ground, she leapt out of her chair. She had to fill in a pyramid-shaped questionnaire listing the grounds for deletion from top to bottom. After answering no to the first question, “Are there naked bodies?”, she ticked the boxes “Are there visible organs?” and “Is there blood?”.
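
The form she describes behaves like an ordered checklist of deletion grounds, worked through from top to bottom, with every applicable box ticked. A minimal sketch, using the three questions from her account and otherwise invented details (the actual questionnaire is internal to Meta and not public):

```python
# Hypothetical sketch of the top-down questionnaire described above.
# The three questions are those mentioned in the article; everything
# else is an invented placeholder.

QUESTIONS = [
    "Are there naked bodies?",
    "Are there visible organs?",
    "Is there blood?",
]

def fill_questionnaire(observations: set[str]) -> dict[str, bool]:
    """Walk the form from top to bottom, ticking each ground that applies."""
    return {q: (q in observations) for q in QUESTIONS}

# Her first video: no nudity, but visible organs and blood.
answers = fill_questionnaire({"Are there visible organs?", "Is there blood?"})
for question, ticked in answers.items():
    print(("[x] " if ticked else "[ ] ") + question)
```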

Sexual assaults on small children, executions by extremist groups, suicides by gunshot… Every day she reviewed a thousand videos, detected by AI or flagged by users, with at most fifty-five seconds per video to decide whether or not to delete it.

She also deleted racist texts and other hate messages containing specific words.

“It wasn’t just text. For example, a drawing showing an Asian man and a monkey side by side with the caption ‘two brothers’ had to be deleted.”

She even deleted content posted in Southeast Asia, several thousand kilometers away.

She earned 60,000 Kenyan shillings (about 400 euros) a month, roughly the average monthly income in Kenya. But she suffered from both insomnia and panic disorder, which sent her to the hospital several times.

Confidentiality agreements did not even allow her to confide in her family. Her colleagues, the only people with whom she could share her feelings, smoked cannabis during breaks to escape reality. Some even admitted to thinking about suicide. “It is important work, protecting the many users of social media, which has become an institution, but still…” Even today, she sometimes cries when she thinks back to the images she saw.

Permalink
June 27, 2024 at 10:32:53 PM GMT+2

Why the Internet Isn’t Fun Anymore | The New Yorker
https://www.newyorker.com/culture/infinite-scroll/why-the-internet-isnt-fun-anymore

  • Enshitification
  • Social Network

Why the Internet Isn’t Fun Anymore

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over.

By Kyle Chayka October 9, 2023


Lately on X, the platform formerly known as Twitter, my timeline is filled with vapid posts orbiting the same few topics like water whirlpooling down a drain. Last week, for instance, the chatter was dominated by talk of Taylor Swift’s romance with the football player Travis Kelce. If you tried to talk about anything else, the platform’s algorithmic feed seemed to sweep you into irrelevance. Users who pay for Elon Musk’s blue-check verification system now dominate the platform, often with far-right-wing commentary and outright disinformation; Musk rewards these users monetarily based on the engagement that their posts drive, regardless of their veracity. The decay of the system is apparent in the spread of fake news and mislabelled videos related to Hamas’s attack on Israel.

Elsewhere online, things are similarly bleak. Instagram’s feed pushes months-old posts and product ads instead of photos from friends. Google search is cluttered with junky results, and S.E.O. hackers have ruined the trick of adding “Reddit” to searches to find human-generated answers. Meanwhile, Facebook’s parent company, Meta, in its latest bid for relevance, is reportedly developing artificial-intelligence chatbots with various “sassy” personalities that will be added to its apps, including a role-playing D. & D. Dungeon Master based on Snoop Dogg. The prospect of interacting with such a character sounds about as appealing as texting with one of those spam bots that asks you if they have the right number.

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over. The precipitous decline of X is the bellwether for a new era of the Internet that simply feels less fun than it used to be. Remember having fun online? It meant stumbling onto a Web site you’d never imagined existed, receiving a meme you hadn’t already seen regurgitated a dozen times, and maybe even playing a little video game in your browser. These experiences don’t seem as readily available now as they were a decade ago. In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems. When those platforms decay, as Twitter has under Elon Musk, there is no other comparable platform in the ecosystem to replace them. A few alternative sites, including Bluesky and Discord, have sought to absorb disaffected Twitter users. But like sproutlings on the rain-forest floor, blocked by the canopy, online spaces that offer fresh experiences lack much room to grow.

One Twitter friend told me, of the platform’s current condition, “I’ve actually experienced quite a lot of grief over it.” It may seem strange to feel such wistfulness about a site that users habitually referred to as a “hellsite.” But I’ve heard the same from many others who once considered Twitter, for all its shortcomings, a vital social landscape. Some of them still tweet regularly, but their messages are less likely to surface in my Swift-heavy feed. Musk recently tweeted that the company’s algorithm “tries to optimize time spent on X” by, say, boosting reply chains and downplaying links that might send people away from the platform. The new paradigm benefits tech-industry “thread guys,” prompt posts in the “what’s your favorite Marvel movie” vein, and single-topic commentators like Derek Guy, who tweets endlessly about menswear. Algorithmic recommendations make already popular accounts and subjects even more so, shutting out the smaller, more magpie-ish voices that made the old version of Twitter such a lively destination. (Guy, meanwhile, has received so much algorithmic promotion under Musk that he accumulated more than half a million followers.)

The Internet today feels emptier, like an echoing hallway, even as it is filled with more content than ever. It also feels less casually informative. Twitter in its heyday was a source of real-time information, the first place to catch wind of developments that only later were reported in the press. Blog posts and TV news channels aggregated tweets to demonstrate prevailing cultural trends or debates. Today, they do the same with TikTok posts—see the many local-news reports of dangerous and possibly fake “TikTok trends”—but the TikTok feed actively dampens news and political content, in part because its parent company is beholden to the Chinese government’s censorship policies. Instead, the app pushes us to scroll through another dozen videos of cooking demonstrations or funny animals. In the guise of fostering social community and user-generated creativity, it impedes direct interaction and discovery.

According to Eleanor Stern, a TikTok video essayist with nearly a hundred thousand followers, part of the problem is that social media is more hierarchical than it used to be. “There’s this divide that wasn’t there before, between audiences and creators,” Stern said. The platforms that have the most traction with young users today—YouTube, TikTok, and Twitch—function like broadcast stations, with one creator posting a video for her millions of followers; what the followers have to say to one another doesn’t matter the way it did on the old Facebook or Twitter. Social media “used to be more of a place for conversation and reciprocity,” Stern said. Now conversation isn’t strictly necessary, only watching and listening.

Posting on social media might be a less casual act these days, as well, because we’ve seen the ramifications of blurring the border between physical and digital lives. Instagram ushered in the age of self-commodification online—it was the platform of the selfie—but TikTok and Twitch have turbocharged it. Selfies are no longer enough; video-based platforms showcase your body, your speech and mannerisms, and the room you’re in, perhaps even in real time. Everyone is forced to perform the role of an influencer. The barrier to entry is higher and the pressure to conform stronger. It’s no surprise, in this environment, that fewer people take the risk of posting and more settle into roles as passive consumers.

The patterns of life offscreen affect the makeup of the digital world, too. Having fun online was something that we used to do while idling in office jobs: stuck in front of computers all day, we had to find something on our screens to fill the down time. An earlier generation of blogs such as the Awl and Gawker seemed designed for aimless Internet surfing, delivering intermittent gossip, amusing videos, and personal essays curated by editors with quirky and individuated tastes. (When the Awl closed, in 2017, Jia Tolentino lamented the demise of “online freedom and fun.”) Now, in the aftermath of the pandemic, amid ongoing work-from-home policies, office workers are less tethered to their computers, and perhaps thus less inclined to chase likes on social media. They can walk away from their desks and take care of their children, walk their dog, or put their laundry in. This might have a salutary effect on individuals, but it means that fewer Internet-obsessed people are furiously creating posts for the rest of us to consume. The user growth rate of social platforms over all has slowed over the past several years; according to one estimate, it is down to 2.4 per cent in 2023.

That earlier generation of blogs once performed the task of aggregating news and stories from across the Internet. For a while, it seemed as though social-media feeds could fulfill that same function. Now it’s clear that the tech companies have little interest in directing users to material outside of their feeds. According to Axios, the top news and media sites have seen “organic referrals” from social media drop by more than half over the past three years. As of last week, X no longer displays the headlines for articles that users link to. The decline in referral traffic disrupts media business models, further degrading the quality of original content online. The proliferation of cheap, instant A.I.-generated content promises to make the problem worse.

Choire Sicha, the co-founder of the Awl and now an editor at New York, told me that he traces the seeds of social media’s degradation back a decade. “If I had a time machine I’d go back and assassinate 2014,” he said. That was the year of viral phenomena such as Gamergate, when a digital mob of disaffected video-game fans targeted journalists and game developers on social media; Ellen DeGeneres’s selfie with a gaggle of celebrities at the Oscars, which got retweeted millions of times; and the brief, wondrous fame of Alex, a random teen retail worker from Texas who won attention for his boy-next-door appearance. In those events, we can see some of the nascent forces that would solidify in subsequent years: the tyranny of the loudest voices; the entrenchment of traditional fame on new platforms; the looming emptiness of the content that gets most furiously shared and promoted. But at that point they still seemed like exceptions rather than the rule.

I have been trying to recall the times I’ve had fun online unencumbered by anonymous trolling, automated recommendations, or runaway monetization schemes. It was a long time ago, before social networks became the dominant highways of the Internet. What comes to mind is a Web site called Orisinal that hosted games made with Flash, the late interactive animation software that formed a significant part of the kitschy Internet of the two-thousands, before everyone began posting into the same platform content holes. The games on the site were cartoonish, cute, and pastel-colored, involving activities like controlling a rabbit jumping on stars into the sky or helping mice make a cup of tea. Orisinal was there for anyone to stumble upon, without the distraction of follower counts or sponsored content. You could e-mail the site to a friend, but otherwise there was nothing to share. That old version of the Internet is still there, but it’s been eclipsed by the modes of engagement that the social networks have incentivized. Through Reddit, I recently dug up an emulator of all the Orisinal games and quickly got absorbed into one involving assisting deer leaping across a woodland gap. My only reward was a personal high score. But it was more satisfying, and less lonely, than the experience these days on X. ♦

Permalink
June 20, 2024 at 11:16:09 PM GMT+2

The collapse of information? | Hubert Guillaud
https://hubertguillaud.wordpress.com/2024/01/11/leffondrement-de-linformation/

  • Social Network
  • Digital Society
  • Press
  • Enshitification

The collapse of information?


Since Cambridge Analytica, Trump, Brexit and Covid, news has become a problem for social networks… Pressed by the authorities to arbitrate the truth, most of them now seem to be taking refuge outside the news, becoming venues for self-fulfillment that shy away from politics. That is surely what explains the retreat of news from users’ feeds, as Charlie Warzel shrewdly analyzes for The Atlantic. As The New York Times recently put it: “The major online platforms are breaking up with news.”

Social media platforms have long shaped the distribution of news, for example by pushing media outlets toward video, as Facebook did in 2015 when it deliberately overstated the average time users spent watching videos in order to push outlets into producing video content. Today the platforms are turning away from news toward entertainment and advertising. And it is not just them: readers themselves seem to be hitting an information ceiling that is driving them away from the news, the Pew Research Center reports. Consumption of news, which is especially anxiety-inducing, has plunged since 2020. Many have turned to easier content, like that produced by influencers. “Consumer trust doesn’t necessarily come from the quality of the reporting or the prestige and history of the brand, but from strong parasocial relationships,” Warzel observes. In 2014, the heyday of social news, 75% of American adults surveyed by Pew said the internet and social media helped them feel better informed. That is no longer the case.

With the algorithmic acceleration of news on social networks, news cycles sped up: Twitter became the assignment editor for the hottest topics the media had to cover, in a loop that kept reinforcing popular subjects, as with Donald Trump’s tweets, which every outlet commented on. From 2013 to 2017, news became the fuel that kept the social networks running, gradually turning information into a battlefield… Many users then turned away. New social networks exploded, TikTok foremost among them, and the older ones adapted, Facebook in particular… A recent Morning Consult survey found that “people like Facebook more now that there is less news on it.”

Commentary on current events, like the news itself, will not disappear entirely, Warzel reckons, but the media have just lost cultural influence. For John Herrman in the New Yorker, the 2024 presidential campaign in the United States may be the first without media to shape the big political narratives. “Social media brought out the worst in the news business, and the news, in turn, brought out the worst in many social networks.” The alliance between social networks and news has run its course. It remains to be seen what the world of influence will produce… in a world where the power of the written word and the structuring of information seem to be fading, because the recommendation machines are no longer built for them.

The end of a common world
In a second article, Warzel returns to this disappearance of the news… For him, the internet is now fragmented by social recommendations, such that we share very little of what others consume. “The very notion of popularity is up for debate”: no one really knows anymore whether a given trend is as viral as it looks. Metrics that are hard to compare, opaque recommendations, news sites closed off behind paywalls, the sinking relevance of news on social media, the flood of advertising… We no longer understand what is happening online. You have probably never seen the year’s most popular TikTok videos, any more than the most-viewed content on Facebook! And hardly anyone was talking about Netflix’s most popular show, The Night Agent! On one hand, popular content is more viral than ever; on the other, these popularities are more compartmentalized than ever! Audience comparisons across content and platforms are becoming especially hard to decode. For instance, the recent controversy over the success of a Bin Laden speech among young Americans was not as viral as many claimed, as the Washington Post and Ryan Broderick have shown. It is as if we had entered a moment of great confusion about virality, comparing view counts from one platform to the next even though their audiences and self-reinforcing mechanisms are very different. That the platforms are closing off access to their metrics and to research does not help, of course. Without a scale of comparison, without the means to see what circulates and how, we become blind to every phenomenon. Including one in particular: the manipulation of information by foreign powers…


These transformations have not yet been completed or digested, and another is already looming, James Vincent argued in The Verge: “the old web is dying, and the new web struggles to be born.” The production of synthetic text, images, video and sound is parasitizing this recomposing ecosystem. Served directly from search engines, AI output is replacing the traffic that used to lead to the news. “AI aims to produce cheap content from other people’s work.” Bing AI and Google’s Bard could end up killing the very ecosystem that gave search engines their value, by offering their own “artificial abundance.” Granted, it would not be the first time the information ecosystem has shifted: Wikipedia did kill the Encyclopaedia Britannica. But for James Vincent, while the web has from the start structured the great battle over information by changing the producers, the modes of access and the business models… this emerging configuration offers no guarantee that the system to come will be better than the one we had.

“The internet isn’t fun anymore,” lamented Kyle Chayka in The New Yorker. By dint of algorithmic adjustments, the social networks have become perfectly boring!, wrote Marie Turcan for Numerama, denouncing the web of boredom! The invisibilization of external links, and of writing generally as against video, seems to be finishing off what quality remained, as David-Julien Rahmil reports for L’ADN. In another article, Rahmil recalls that direct exchanges have overtaken public ones: “Omnipresent advertising, heightened political tensions, a culture of perpetual clash and a feeling of informational burnout have no doubt hastened the fall of the big social platforms.” From now on, each platform works only for itself. On an internet more fragmented than ever, each platform will bring forth its own professionals, its own influencers, and they will quite probably no longer overlap from one platform to the next.


As for the social networks, they have devalued themselves, starting with Twitter, which long embodied the real-time news feed, the central venue of an influential and somewhat elitist conversation, Nilay Patel explains in The Verge. It was “the context collapse that made Twitter so dangerous and so reductive, but it was also what made it exciting.” The platform made its users faster and more agile, but also too reactive. Brands drifted away from the media to manage their own social presence. “Stepping back now, you can see exactly how destructive this situation has been for journalism: journalists around the world supplied Twitter with free real-time news and commentary, learning more and more to shape stories for the algorithm rather than for their actual readers. Meanwhile, the media companies they worked for faced an exodus of their biggest advertisers to social platforms offering better, more integrated ad products, a direct connection to the audience and no constraining editorial ethics. The news kept getting smaller, even as the stories got bigger.” Everyone there was a journalist, while the news business itself was drying up. “Twitter was founded in 2006. Since that year, newspaper employment has fallen by 70%, and the residents of more than half of U.S. counties have little or no local news.” With the pandemic, Trump and Black Lives Matter, Twitter reached a tipping point, collapsing under its own power. The audience began to ebb away from its toxicity. For Patel, Musk’s takeover of the platform is a reaction to the waning power of celebrities and tech people. By amplifying its virality and its toxicity, the platform keeps deteriorating. The challengers (Bluesky, Threads, Mastodon…) are to Twitter “what methadone is to heroin.” The audience is more fragmented than ever. Witness the users who still run from one platform to another to message their contacts… or the readers at a loss over what to read anymore.

A generational shift, or enjunkification?
The age of conversation that opened the 21st-century web is over! And what remains of our conversations will be taken over by conversational agents… which will be far more effective political and ideological agents than our fellow humans, as Olivier Ertzscheid explains! In time, chatbots point toward an even more personal relationship to information, each of us chatting with our own, without any real connection to shared content anymore.


For Max Read, in The New York Times, perhaps these ongoing changes should be read differently. The transformations also have economic origins, he notes, though too briefly. “The end of the low-interest-rate era upended the economics of start-ups, putting an end to fast-growth practices like blitzscaling and reducing the number of new internet companies vying for our attention; companies like Alphabet and Facebook are now mature, dominant firms rather than disruptive newcomers”… Yet rather than digging into this economic explanation, Max Read settles on another one. If the internet is dying, it is first of all because we are getting older. The form and culture of the internet were shaped by the preferences of the generations that took part in it. Today’s internet is no longer that of social media (2000-2010), nor that of social networks (2010-2020). “According to the consumer research firm GWI, millennials’ screen time has been falling steadily for years. Only 42% of 30-to-49-year-olds say they are online ‘almost constantly,’ versus 49% of 18-to-29-year-olds. We are not even the early adopters anymore: 18-to-29-year-olds are more likely to have used ChatGPT than 30-to-49-year-olds, though perhaps only because we no longer have homework to do.”

“The most engaged American internet audience is no longer the millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it is no longer our internet. It now belongs to the zoomers.”

This generation’s formats, its celebrities, its very language are completely different, Read explains. “The zoomers, and the Gen Alpha teens nipping at their generational heels, still seem to be having fun online. Even if I find it all impenetrable and a little irritating, the creative expression and exuberant sociality that made the internet so much fun for me a decade ago are thriving among twentysomethings on TikTok, Instagram, Discord, Twitch and even X. Skibidi Toilet, the Fanum tax, the rizzler: I will not debase myself by pretending to know what these memes are, or what their appeal is, but I know that the zoomers seem to love them. Or, at any rate, I can confirm that they love using them to confuse and alienate middle-aged millennials like me.”

Granted, they are being captured and exploited by a small handful of powerful platforms, but others before them sought to arbitrate and commodify our online activity… “Engagement-driven platforms have always cultivated influencers, abuse and misinformation. When you dig deeper, what seems to have changed on the web in recent years is not the structural dynamics but the cultural signifiers.”

“In other words, enjunkification has always been at work on the commercial web, whose largely ad-based business model seems to impose an ever-shifting race to the bottom. Maybe what frustrated, alienated, aging internet users like me are experiencing here is not just the fruit of an enjunkified internet, but also the loss of the cognitive elasticity, the sense of humor and the abundant free time needed to navigate all this baffling junk with agility and good cheer.”

But that is a very pessimistic reading of the current transformations. In Rolling Stone, Anil Dash is enthusiastic: with its fragmentation, the internet is becoming weird again, the way it was at the beginning! The disappearance of central apps (even if that is not quite what is happening) promises a return of strange services and unexpected propositions, like those of Neta Bomani’s school for poetic computation… or of the bot builder Stephan Bohacek, or of the designer Elan Kiderman Ullendorff, who amuses himself with proposals for “escaping the algorithms”… or the little subversions of the artist and programmer Darius Kazemi, who invited people to create their own autonomous micro social networks on Mastodon…

It is not clear these subversions ever stopped; above all, they were made invisible by the big social platforms. Nor is it clear that the influence audience and the synthetic audience now on the horizon will give them more room than they had yesterday. Still, Anil Dash is right about one thing: the only certainty is that the strangest content will keep trying to reach us. Like the videos that colonized children’s feeds via a few keywords, which James Bridle denounced in his excellent book, New Dark Age. Elan Kiderman Ullendorff amused himself by building a TikTok account out of the most repellent videos he was served, skipping everything that interested him and keeping only the worst. The result seems to compose a portrait of Dorian Gray for each of us. The addictive web is the mirror of the repulsive web, the web we hate the mirror of the web of our dreams. One certainty remains: tomorrow’s web is likely to be far stranger and more disturbing than today’s! With algorithmic adjustments having cut away the most interesting material, we will probably be confronted with the worst more than ever!

Hubert Guillaud

Permalink
June 20, 2024 at 10:36:54 PM GMT+2

We Need To Rewild The Internet
https://www.noemamag.com/we-need-to-rewild-the-internet/

  • Digital Society
  • Social Network

We Need To Rewild The Internet

The internet has become an extractive and fragile monoculture. But we can revitalize it using lessons learned by ecologists.

By Maria Farrell and Robin Berjon April 16, 2024

“The word for world is forest” — Ursula K. Le Guin

In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.

So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.

Information and decision-making power now flowed straight to the top. Decades later when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know [what happened](https://files.libcom.org/files/Seeing%20Like%20a%20State%20-%20James%20C.%20Scott.pdf) next.

It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered.

The story of German scientific forestry transmits a timeless truth: When we simplify complex systems, we destroy them, and the devastating consequences sometimes aren’t obvious until it’s too late.

That impulse to scour away the messiness that makes life resilient is what many conservation biologists call the “pathology of command and control.” Today, the same drive to centralize, control and extract has driven the internet to the same fate as the ravaged forests.

The internet’s 2010s, its boom years, may have been the first glorious harvest that exhausted a one-time bonanza of diversity. The complex web of human interactions that thrived on the internet’s initial technological diversity is now corralled into globe-spanning data-extraction engines making huge fortunes for a tiny few.

Our online spaces are not ecosystems, though tech firms love that word. They’re plantations: highly concentrated and controlled environments, closer kin to the industrial farming of the cattle feedlot or battery chicken farms that madden the creatures trapped within.

We all know this. We see it each time we reach for our phones. But what most people have missed is how this concentration reaches deep into the internet’s infrastructure — the pipes and protocols, cables and networks, search engines and browsers. These structures determine how we build and use the internet, now and in the future.

They’ve concentrated into a series of near-planetary duopolies. For example, as of April 2024, Google and Apple’s internet browsers have captured almost 85% of the world market share, Microsoft and Apple’s two desktop operating systems over 80%. Google runs 84% of global search and Microsoft 3%. Slightly more than half of all phones come from Apple and Samsung, while over 99% of mobile operating systems run on Google or Apple software. Two cloud computing providers, Amazon Web Services and Microsoft’s Azure, [make up](https://www.hava.io/blog/2024-cloud-market-share-analysis-decoding-industry-leaders-and-trends) over 50% of the global market. Apple and Google’s email clients manage nearly 90% of global email. Google and Cloudflare serve around 50% of global domain name system requests.

Two kinds of everything may be enough to fill a fictional ark and repopulate a ruined world, but can’t run an open, global “network of networks” where everyone has the same chance to innovate and compete. No wonder internet engineer Leslie Daigle termed the concentration and consolidation of the internet’s technical architecture “‘climate change’ of the Internet ecosystem.”

Walled Gardens Have Deep Roots

The internet made the tech giants possible. Their services have scaled globally, via its open, interoperable core. But for the past decade, they’ve also worked to enclose the varied, competing and often open-source or collectively provided services the internet is built on into their proprietary domains. Although this improves their operational efficiency, it also ensures that the flourishing conditions of their own emergence aren’t repeated by potential competitors. For tech giants, the long period of open internet evolution is over. Their internet is not an ecosystem. It’s a zoo.

Google, Amazon, Microsoft and Meta are consolidating their control deep into the underlying infrastructure through acquisitions, vertical integration, building proprietary networks, creating chokepoints and concentrating functions from different technical layers into a single silo of top-down control. They can afford to, using the vast wealth reaped in their one-off harvest of collective, global wealth.


Taken together, the enclosure of infrastructure and the imposition of technology monoculture foreclose our futures. Internet people like to talk about “the stack,” or the layered architecture of protocols, software and hardware, operated by different service providers, that collectively delivers the daily miracle of connection. It’s a complicated, dynamic system with a basic value baked into the core design: key functions are kept separate to ensure resilience and generality, and to create room for innovation.

Initially funded by the U.S. military and designed by academic researchers to function in wartime, the internet evolved to work anywhere, in any condition, operated by anyone who wanted to connect. But what was a dynamic, ever-evolving game of Tetris with distinct “players” and “layers” is today hardening into a continent-spanning system of compacted tectonic plates. Infrastructure is not just what we see on the surface; it’s the forces below, that make mountains and power tsunamis. Whoever controls infrastructure determines the future. If you doubt that, consider that in Europe we’re still using roads and living in towns and cities the Roman Empire mapped out 2,000 years ago.

In 2019, some internet engineers in the global standards-setting body, the Internet Engineering Task Force, raised the alarm. Daigle, a respected engineer who had previously chaired its oversight committee and internet architecture board, wrote in a policy brief that consolidation meant network structures were ossifying throughout the stack, making incumbents harder to dislodge and violating a core principle of the internet: that it does not create “permanent favorites.” Consolidation doesn’t just squeeze out competition. It narrows the kinds of relationships possible between operators of different services.

As Daigle put it: “The more proprietary solutions are built and deployed instead of collaborative open standards-based ones, the less the internet survives as a platform for future innovation.” Consolidation kills collaboration between service providers through the stack by rearranging an array of different relationships — competitive, collaborative — into a single predatory one.

Since then, standards development organizations started several initiatives to name and tackle infrastructure consolidation, but these floundered. Bogged down in technical minutiae, unable to separate themselves from their employers’ interests and deeply held professional values of simplification and control, most internet engineers simply couldn’t see the forest for the trees.

Up close, internet concentration seems too intricate to untangle; from far away, it seems too difficult to deal with. But what if we thought of the internet not as a doomsday “hyperobject,” but as a damaged and struggling ecosystem facing destruction? What if we looked at it not with helpless horror at the eldritch encroachment of its current controllers, but with compassion, constructiveness and hope?

Technologists are great at incremental fixes, but to regenerate entire habitats, we need to learn from ecologists who take a whole-systems view. Ecologists also know how to keep going when others first ignore you and then say it’s too late, how to mobilize and work collectively, and how to build pockets of diversity and resilience that will outlast them, creating possibilities for an abundant future they can imagine but never control. We don’t need to repair the internet’s infrastructure. We need to rewild it.

What Is Rewilding?

Rewilding “aims to restore healthy ecosystems by creating wild, biodiverse spaces,” according to the International Union for Conservation of Nature. More ambitious and risk-tolerant than traditional conservation, it targets entire ecosystems to make space for complex food webs and the emergence of unexpected interspecies relations. It’s less interested in saving specific endangered species. Individual species are just ecosystem components, and focusing on components loses sight of the whole. Ecosystems flourish through multiple points of contact between their many elements, just like computer networks. And like in computer networks, ecosystem interactions are multifaceted and generative.

Rewilding has much to offer people who care about the internet. As Paul Jepson and Cain Blythe wrote in their book “Rewilding: The Radical New Science of Ecological Recovery,” rewilding pays attention “to the emergent properties of interactions between ‘things’ in ecosystems … a move from linear to systems thinking.”

It’s a fundamentally cheerful and workmanlike approach to what can seem insoluble. It doesn’t micromanage. It creates room for “ecological processes [that] foster complex and self-organizing ecosystems.” Rewilding puts into practice what every good manager knows: Hire the best people you can, provide what they need to thrive, then get out of the way. It’s the opposite of command and control.


Rewilding the internet is more than a metaphor. It’s a framework and plan. It gives us fresh eyes for the wicked problem of extraction and control, and new means and allies to fix it. It recognizes that ending internet monopolies isn’t just an intellectual problem. It’s an emotional one. It answers questions like: How do we keep going when the monopolies have more money and power? How do we act collectively when they suborn our community spaces, funding and networks? And how do we communicate to our allies what fixing it will look and feel like?

Rewilding is a positive vision for the networks we want to live inside, and a shared story for how we get there. It grafts a new tree onto technology’s tired old stock.

What Ecology Knows

Ecology knows plenty about complex systems that technologists can benefit from. First, it knows that shifting baselines are real.

If you were born around the 1970s, you probably remember many more dead insects on the windscreen of your parents’ car than on your own. Global land-dwelling insect populations are dropping about 9% a decade. If you’re a geek, you probably programmed your own computer to make basic games. You certainly remember a web with more to read than the same five websites. You may have even written your own blog.

But many people born after 2000 probably think a world with few insects, little ambient noise from birdcalls, where you regularly use only a few social media and messaging apps (rather than a whole web) is normal. As Jepson and Blythe wrote, shifting baselines are “where each generation assumes the nature they experienced in their youth to be normal and unwittingly accepts the declines and damage of the generations before.” Damage is already baked in. It even seems natural.

Ecology knows that shifting baselines dampen collective urgency and deepen generational divides. People who care about internet monoculture and control are often told they’re nostalgists harkening back to a pioneer era. It’s fiendishly hard to regenerate an open and competitive infrastructure for younger generations who’ve been raised to assume that two or three platforms, two app stores, two operating systems, two browsers, one cloud/mega-store and a single search engine for the world comprise the internet. If the internet for you is the massive sky-scraping silo you happen to live inside and the only thing you can see outside is the single, other massive sky-scraping silo, then how can you imagine anything else?

Concentrated digital power produces the same symptoms that command and control produces in biological ecosystems: acute distress punctuated by sudden collapses once tipping points are reached. What scale is needed for rewilding to succeed? It’s one thing to reintroduce wolves to the 3,472 square miles of Yellowstone, and quite another to cordon off about 20 square miles of a polder (land reclaimed from a body of water) known as Oostvaardersplassen near Amsterdam. Large and diverse Yellowstone is likely complex enough to adapt to change, but Oostvaardersplassen has struggled.


In the 1980s, the Dutch government attempted to regenerate a section of the overgrown Oostvaardersplassen. An independent-minded government ecologist, Frans Vera, said reeds and scrub would dominate unless now-extinct herbivores grazed them. In place of ancient aurochs, the state forest management agency introduced the famously bad-tempered German Heck cattle and in place of an extinct steppe pony, a Polish semi-feral breed.

Some 30 years on, with no natural predators, and after plans for a wildlife corridor to another reserve came to nothing, there were many more animals than the limited winter vegetation could sustain. People were horrified by starving cows and ponies, and beginning in 2018, government agencies instituted animal welfare checks and culling.

Just turning the clock back was insufficient. The segment of Oostvaardersplassen was too small and too disconnected to be rewilded. Because the animals had nowhere else to go, overgrazing and collapse was inevitable, an embarrassing but necessary lesson. Rewilding is a work in progress. It’s not about trying to revert ecosystems to a mythical Eden. Instead, rewilders seek to rebuild resilience by restoring autonomous natural processes and letting them operate at scale to generate complexity. But rewilding, itself a human intervention, can take several turns to get right.

Whatever we do, the internet isn’t returning to old-school then-common interfaces like FTP and Gopher, or organizations operating their own mail servers again instead of off-the-shelf solutions like G-Suite. But some of what we need is already here, especially on the web. Look at the resurgence of RSS feeds, email newsletters and blogs, as we discover (yet again) that relying on one app to host global conversations creates a single point of failure and control. New systems are growing, like the Fediverse with its federated islands, or Bluesky with algorithmic choice and composable moderation.

We don’t know what the future holds. Our job is to keep open as much opportunity as we can, trusting that those who come later will use it. Instead of setting purity tests for which kind of internet is most like the original, we can test changes against the values of the original design. Do new standards protect the network’s “generality,” i.e. its ability to support multiple uses, or is functionality limited to optimize efficiency for the biggest tech firms?

As early as 1985, plant ecologists Steward T.A. Pickett and Peter S. White wrote in “The Ecology of Natural Disturbance and Patch Dynamics” that an “essential paradox of wilderness conservation is that we seek to preserve what must change.” Some internet engineers know this. David Clark, a Massachusetts Institute of Technology professor who worked on some of the internet’s earliest protocols, wrote an entire book about other network architectures that might have been built if different values, like security or centralized management, had been prioritized by the internet’s creators.

But our internet took off because it was designed as a general-purpose network, built to connect anyone.

Our internet was built to be complex and unbiddable, to do things we cannot yet imagine. When we interviewed Clark, he told us that “‘complex’ implies a system in which you have emergent behavior, a system in which you can’t model the outcomes. Your intuitions may be wrong. But a system that’s too simple means lost opportunities.” Everything we collectively make that’s worthwhile is complex and thereby a little messier. The cracks are where new people and ideas get in.

Internet infrastructure is a degraded ecosystem, but it’s also a built environment, like a city. Its unpredictability makes it generative, worthwhile and deeply human. In 1961, Jane Jacobs, an American-Canadian activist and author of “The Death and Life of Great American Cities,” argued that mixed-use neighborhoods were safer, happier, more prosperous, and more livable than the sterile, highly controlling designs of urban planners like New York’s Robert Moses.

“As a top-down, built environment, the internet has become something that is done to us, not something we collectively remake every day.”

Just like the crime-ridden, Corbusier-like towers Moses crammed people into when he demolished mixed-use neighborhoods and built highways through them, today’s top-down, concentrated internet is, for many, an unpleasant and harmful place. Its owners are hard to remove, and their interests do not align with ours.

As Jacobs wrote: “As in all Utopias, the right to have plans of any significance belonged only to the planners in charge.” As a top-down, built environment, the internet has become something that is done to us, not something we collectively remake every day.

Ecosystems endure because species serve as checks and balances on each other. They have different modes of interaction, not just extraction: mutualism, commensalism, competition and predation. In flourishing ecosystems, predators are subject to limits. They’re just one part of a complex web that passes calories around, not a one-way ticket to the end of evolution.

Ecologists know that diversity is resilience.

On July 18, 2001, 11 cars of a 60-car freight train derailed in the Howard Street Tunnel under Mid-Town Belvedere, a neighborhood just north of downtown Baltimore. Within minutes, one car containing a highly flammable chemical was punctured. The escaping chemical ignited, and soon adjacent cars were alight in a fire that took about five days to put out. The disaster multiplied and spread. The thick brick tunnel walls acted like an oven, and temperatures rose to nearly 2,000 degrees Fahrenheit. A more than three-foot-wide water main above the tunnel burst, flooding it with millions of gallons of water within hours; the fire only cooled a little. Three weeks later, an explosion linked to the combustible chemical blew out manhole covers as far as two miles away.

WorldCom, then the second largest long-distance phone company in the U.S., had fiber-optic cables in the tunnel carrying high volumes of phone and internet traffic. However, according to Clark, the MIT professor, WorldCom’s resilience planning meant traffic was spread over different fiber networks in anticipation of just this kind of event.

On paper, WorldCom had network redundancy. But almost immediately, U.S. internet traffic slowed, and WorldCom’s East Coast and transatlantic phone lines went down. The region’s narrow physical topography had concentrated all those different fiber networks into a single chokepoint, the Howard Street Tunnel. WorldCom’s resilience was, quite literally, incinerated. It had technological redundancy, but not diversity. Sometimes we don’t notice concentration until it’s too late.

Clark tells the story of the Howard Street Tunnel fire to show that bottlenecks aren’t always obvious, especially at the operational level, and huge systems that seem secure, due to their size and resources, can unexpectedly crumble.

In today’s internet, much traffic passes through tech firms’ private networks, for example, Google and Meta’s own undersea cables. Much internet traffic is served from a few dominant content distribution networks, like Cloudflare and Akamai, which run their own networks of proxy servers and data centers. Similarly, that traffic goes through an increasingly small number of domain name system (DNS) resolvers, which work like phone books for the internet, linking website names to their numeric addresses.
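
To make the phone-book analogy concrete, here is a minimal Python sketch (ours, not the authors’) of the lookup every connection begins with. Whichever resolver the operating system is configured to use, increasingly one of a handful of large services, answers the query; example.com is just a stand-in domain.

```python
import socket

# Ask the system's configured DNS resolver to translate a hostname
# into the numeric addresses the article describes.
for family, _, _, _, sockaddr in socket.getaddrinfo(
    "example.com", 443, proto=socket.IPPROTO_TCP
):
    print(family.name, sockaddr[0])  # address family and IP address
```

Every site visit begins with some such lookup, so whoever operates the resolver sits silently in the middle of all of it.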

All of this improves network speed and efficiency but creates new and non-obvious bottlenecks like the Howard Street Tunnel. Centralized service providers say they’re better resourced and better equipped to withstand attacks and failures, but they are also large, attractive targets for attackers and potential single points of system failure.

On Oct. 21, 2016, dozens of major U.S. websites suddenly stopped working. Domain names belonging to Airbnb, Amazon, PayPal, CNN and The New York Times simply didn’t resolve. All were clients of the commercial DNS service provider Dyn, which had been hit by a cyberattack. Hackers infected [tens of thousands](https://coverlink.com/case-study/mirai-ddos-attack-on-dyn/) of internet-enabled devices with malicious software, creating a network of hijacked devices, or a botnet, that they used to bombard Dyn with queries until it collapsed. America’s biggest internet brands were brought down by nothing more than a network of baby monitors, security webcams and other consumer devices. Although they all likely had resilience planning and redundancies, they went down because a single chokepoint — in one crucial layer of infrastructure — failed.

“Crashes, fires and floods may simply be entropy in action, but systemically concentrated and risky infrastructures are choices made manifest — and we can make better ones.”

Widespread outages due to centralized chokepoints have become so common that investors even use them to identify opportunities. When a failure by cloud provider Fastly took high-profile websites offline in 2021, its share price surged. Investors were delighted by headlines that informed them of an obscure technical service provider with an apparent lock on an essential service. To investors, this critical infrastructure failure doesn’t look like fragility but like a chance to profit.

The result of infrastructural narrowness is baked-in fragility that we only notice after a breakdown. But monoculture is also highly visible in our search and browser tools. Search, browsing and social media are how we find and share knowledge and how we communicate. They’re a critical, global epistemic and democratic infrastructure, controlled by just a few U.S. companies. Crashes, fires and floods may simply be entropy in action, but systemically concentrated and risky infrastructures are choices made manifest — and we can make better ones.

The Look & Feel Of A Rewilded Internet

A rewilded internet will have many more service choices. Some services like search and social media will be broken up, as AT&T eventually was. Instead of tech firms extracting and selling people’s personal data, different payment models will fund the infrastructure we need. Right now, there is little explicit provision for public goods like internet protocols and browsers, essential to making the internet work. The biggest tech firms subsidize and profoundly influence them.

Part of rewilding means taking what’s been pulled into the big tech stack back out of it, and paying for the true costs of connectivity. Some things like basic connectivity we will continue to pay for directly, and others, like browsers, we will support indirectly but transparently, as described below. The rewilded internet will have an abundance of ways to connect and relate to each other. There won’t be just one or two numbers to call if leaders of a political coup decide to shut the internet down in the middle of the night, as has happened in places like Egypt and Myanmar. No one entity will permanently be on top. A rewilded internet will be a more interesting, usable, stable and enjoyable place to be.

Through extensive research, Nobel-winning economist Elinor Ostrom found that “when individuals are well informed about the problem they face and about who else is involved, and can build settings where trust and reciprocity can emerge, grow, and be sustained over time, costly and positive actions are frequently taken without waiting for an external authority to impose rules, monitor compliance, and assess penalties.” Ostrom found people spontaneously organizing to manage natural resources — from water company cooperation in California to Maine lobster fishermen organizing to prevent overfishing.

Self-organization also exists as part of a key internet function: traffic coordination. Internet exchange points (IXPs) are an example of common-pool resource management, where internet service providers (ISPs) collectively agree to carry each other’s data for low or no cost. Network operators of all kinds — telecoms companies, large tech firms, universities, governments and broadcasters — all need to send large amounts of data through other ISPs’ networks so that it gets to its destination.

If they managed this separately through individual contracts, they’d spend much more time and money. Instead, they often form IXPs, typically as independent, not-for-profit associations. As well as managing traffic, IXPs have, in many — and especially developing — countries, formed the backbone of a flourishing technical community that further drives economic development.

Both between people and on the internet, connections are generative. From technical standards to common-pool resource management and even to more localized broadband networks known as “altnets,” internet rewilding already has a deep toolbox of collective action ready to be deployed.

The New Drive For Antitrust & Competition

The list of infrastructures to be diversified is long. As well as pipes and protocols, there are operating systems, browsers, search engines, the Domain Name System, social media, advertising, cloud providers, app stores, AI companies and more. And these technologies are also intertwined.

But showing what can be done in one area creates opportunities in others. First, let’s start with regulation.

You don’t always need a big new idea like rewilding to frame and motivate major structural change. Sometimes reviving an old idea will do. President Biden’s 2021 “Executive Order on Promoting Competition in the American Economy” revived the original, pro-worker, trust-busting scope and urgency of the early 20th-century legal activist and Supreme Court Justice Louis D. Brandeis, along with rules and framings that date back to before the 1930s New Deal.

“Rewilding an already built environment isn’t just sitting back and seeing what tender, living thing can force its way through the concrete. It’s razing to the ground the structures that block out light for everyone not rich enough to live on the top floor.”

U.S. antitrust law was created to break the power of oligarchs in oil, steel and railroads who threatened America’s young democracy. It gave workers basic protections and saw equal economic opportunity as essential to freedom. This view of competition as essential was whittled away by Chicago School economic policies in the 1970s and Reagan-era judges’ court rulings over the decades. They believed intervention should only be permitted when monopoly power causes consumer prices to rise. The intellectual monoculture of that consumer-harm threshold has since spread globally.

It’s why governments just stood aside as 21st-century tech firms romped to oligopoly. If a regulator’s sole criterion for action is to make sure consumers don’t pay a penny more, then the free or data-subsidized services of tech platforms don’t even register. (Of course, consumers pay in other ways, as these tech giants exploit their personal information for profit.) This laissez-faire approach allowed the biggest firms to choke off competition by acquiring their competitors and vertically integrating service providers, creating the problems we have today.

Regulators and enforcers in Washington and Brussels now say they have learned that lesson and won’t allow AI dominance to happen as internet concentration did. Federal Trade Commission Chair Lina Khan and U.S. Department of Justice antitrust enforcer, Jonathan Kanter, are identifying chokepoints in the AI “stack” — concentration in control of processing chips, datasets, computing capacity, algorithm innovation, distribution platforms and user interfaces — and analyzing them to see if they affect systemic competition. This is potentially good news for people who want to prevent the current dominance of tech giants being grandfathered into our AI future.

In his 2021 signing of the executive order on competition, President Biden said: “Capitalism without competition isn’t capitalism; it’s exploitation.” Biden’s enforcers are changing the kinds of cases they take up and widening the applicable legal theories on harm that they bring to judges. Instead of the traditionally narrow focus on consumer prices, today’s cases argue that the economic harms perpetrated by dominant firms include those suffered by their workers, small companies and the market as a whole.

Khan and Kanter have jettisoned narrow and abstruse models of market behavior for real-world experiences of healthcare workers, farmers and writers. They get that shutting off economic opportunity fuels far-right extremism. They’ve made antitrust enforcement and competition policy explicitly about coercion versus choice, power versus democracy. Kanter told a recent conference in Brussels that “excessive concentration of power is a threat … it’s not just about prices or output but it’s about freedom, liberty and opportunity.”

Enforcers in Washington and Brussels are starting to preemptively block tech firms from using dominance in one realm to take over another. After scrutiny by the U.S. FTC and European Commission, Amazon recently abandoned its plan to acquire the home appliance manufacturer, iRobot. Regulators on both sides of the Atlantic have also moved to stop Apple from using its iPhone platform dominance to squeeze app store competition and dominate future markets through, for example, pushing the usage of CarPlay on automakers and limiting access to its tap-to-pay digital wallet in the financial services sector.

Still, so far, their enforcement actions have focused on the consumer-facing, highly visible parts of the tech giants’ exploitative and proprietary internet. The few, narrow measures of the 2021 executive order that aim to reduce infrastructure-based monopolies only prevent future abuses like radio spectrum-hogging, not those already locked in. Sure, the best way to deal with monopolies is to stop them from happening in the first place. But unless regulators and enforcers eradicate the existing dominance of these giants now, we’ll be living in today’s infrastructure monopoly for decades, perhaps even a century.

Even activist regulators have shied away from applying the toughest remedies for concentration in long-consolidated markets, such as non-discrimination requirements, functional interoperability and structural separations, i.e. breaking companies up. And declaring that search and social media monopolies are actually public utilities — and forcing them to act as common carriers open to all — is still too extreme for most.

But rewilding a built environment isn’t just sitting back and seeing what tender, living thing can force its way through the concrete. It’s razing to the ground the structures that block out light for everyone not rich enough to live on the top floor.

“Ecologists have reoriented their field as a ‘crisis discipline,’ a field of study that’s not just about learning things but about saving them. We technologists need to do the same.”

When the writer and activist Cory Doctorow wrote about how to free ourselves from the clutches of Big Tech, he said that though breaking up big companies will likely take decades, providing strong and mandatory interoperability would open up innovative space and slow the flow of money to the largest firms — money they would otherwise use to deepen their moats.

Doctorow describes “comcom,” or competitive compatibility, as a kind of “guerrilla interoperability, achieved through reverse engineering, bots, scraping and other permissionless tactics.” Before a thicket of invasive laws sprang up to strangle it, comcom was how people figured out how to fix cars and tractors or rewrite software. Comcom drives the try-every-tactic-until-one-works behavior you see in a flourishing ecosystem.

In an ecosystem, diversity of species is another way of saying “diversity of tactics,” as each successful new tactic creates a new niche to occupy. Whether it’s an octopus camouflaging itself as a sea snake, a cuckoo smuggling her chicks into another bird’s nest, orchids producing flowers that look just like a female bee, or parasites influencing rodent hosts to take life-ending risks, each evolutionary micro-niche is created by a successful tactic. Comcom is simply tactical diversity; it’s how organisms interact in complex, dynamic systems. And humans have demonstrated the epitome of short-term thinking by enabling the oligarchs who are trying to end it.

Efforts are underway. The EU already has several years of experience with interoperability mandates and precious insight into how determined firms work to circumvent such laws. The U.S., however, is still in its early days of ensuring software interoperability, for example, for videoconferencing.

Perhaps one way to motivate and encourage regulators and enforcers everywhere is to explain that the subterranean architecture of the internet has become a shadowland where evolution has all but stopped. Regulators’ efforts to make the visible internet competitive will achieve little unless they also tackle the devastation that lies beneath.

Next Steps

Much of what we need is already here. Beyond regulators digging deep for courage, vision and bold new litigation strategies, we need vigorous, pro-competitive government policies around procurement, investments and physical infrastructure. Universities must reject research funding from tech firms because it always comes with conditions, both spoken and unspoken.

Instead, we need more publicly funded tech research with publicly released findings. Such research should investigate power concentration in the internet ecosystem and practical alternatives to it. We need to recognize that much of the internet’s infrastructure is a de facto utility that we must regain control of.

We must ensure regulatory and financial incentives and support for alternatives including common-pool resource management, community networks, and the myriad other collaborative mechanisms people have used to provide essential public goods like roads, defense and clean water.

All this takes money. Governments are starved of tax revenue by the once-in-history windfalls seized by today’s tech giants, so it’s clear where the money is. We need to get it back.

We know all this, but still find it so hard to collectively act. Why?

Herded into rigid tech plantations rather than functioning, diverse ecosystems, we find it tough to imagine alternatives. Even those who can see clearly may feel helpless and alone. Rewilding unites everything we know we need to do and brings with it a whole new toolbox and vision.

Ecologists face the same systems of exploitation and are organizing urgently, at scale and across domains. They see clearly that the issues aren’t isolated but are instances of the same pathology of command and control, extraction and domination that political anthropologist Scott first noticed in scientific forestry. The solutions are the same in ecology and technology: aggressively use the rule of law to level out unequal capital and power, then rush in to fill the gaps with better ways of doing things.

Keep The Internet, The Internet

Susan Leigh Star, a sociologist and theorist of infrastructure and networks, wrote in her influential 1999 paper, “The Ethnography of Infrastructure”:

“Study a city and neglect its sewers and power supplies (as many have), and you miss essential aspects of distributional justice and planning power. Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change.”

The technical protocols and standards that underlie the internet’s infrastructure are ostensibly developed in open, collaborative standards development organizations (SDOs), but are also increasingly under the control of a few companies. What appear to be “voluntary” standards are often the business choices of the biggest firms.

The dominance of SDOs by big firms also shapes what does not get standardized — for example, search, which is effectively a global monopoly. While efforts to directly address internet consolidation have been raised repeatedly within SDOs, little progress has been made. This is damaging SDOs’ credibility, especially outside the U.S. SDOs must radically change or they will lose their implicit global mandate to steward the future of the internet.

We need internet standards to be global, open and generative. They’re the wire models that give the internet its planetary form, the gossamer-thin but steely-strong threads holding together its interoperability against fragmentation and permanent dominance.

Make Laws & Standards Work Together

In 2018, a small group of Californians maneuvered the Legislature into passing the [California Consumer Privacy Act](https://oag.ca.gov/privacy/ccpa). Nested in the statute was an unassuming provision: the “right to opt out of sale or sharing” of your personal information via a “user-enabled global privacy control,” or GPC, signal that would create an automated method for doing so. The law didn’t define how GPC would work. Because a technical standard was required for browsers, businesses and providers to speak the same language, the signal’s details were delegated to a group of experts.

In July 2021, California’s attorney general mandated that all businesses honor the newly created GPC signal for California-based consumers visiting their websites. The group of experts is now shepherding the technical specification through global web standards development at the World Wide Web Consortium. For California residents, GPC automates the request to “accept” or “reject” the sale of their data, such as cookie-based tracking, on the websites they visit. However, it isn’t yet supported by major default browsers like Chrome and Safari. Broad adoption will take time, but it’s a small step in changing real-world outcomes by driving antimonopoly practices deep into the standards stack — and it’s already being [adopted](https://usercentrics.com/knowledge-hub/what-is-global-privacy-control/) elsewhere.
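
Mechanically, the signal is simple: a supporting browser attaches a `Sec-GPC: 1` header to its web requests (and exposes a `navigator.globalPrivacyControl` property to scripts), and a site is expected to honor it. Below is a minimal sketch of the server side in Python, with Flask as a purely illustrative choice; neither the law nor the specification mandates any particular framework, and the response strings are our own placeholders.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers with GPC enabled send the "Sec-GPC: 1" request header.
    if request.headers.get("Sec-GPC") == "1":
        # Treat the visitor as having opted out of the sale or sharing
        # of their data: for example, skip third-party tracking scripts.
        return "GPC honored: this visit is not tracked."
    return "No GPC signal received."
```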

GPC is not the first legally mandated open standard, but it was deliberately designed from day one to bridge policymaking and standards-setting. The idea is gaining ground. A recent United Nations Human Rights Council report recommends that states delegate “regulatory functions to standard-setting organizations.”

Make Service-Providers — Not Users — Transparent

Today’s internet offers little transparency around key infrastructure providers. For example, browsers are highly complex pieces of infrastructure that determine how billions of people use the web, yet they are provided for free. That’s because the most commonly used search engines enter into opaque financial deals with browsers, paying them to be set as the default. Since few people change their default search engine, browsers like Safari and Firefox make money by defaulting their search bars to Google, locking in its dominance even as the search engine’s output quality declines.

This creates a quandary. If antitrust enforcers were to impose competition, browsers would lose their main source of income. Infrastructure requires money, but the planetary nature of the internet challenges our public funding model, leaving the door open to private capture. However, if we see the current opaque system as what it is, a kind of non-state taxation, then we can craft an alternative.

Search engines are a logical place for governments to mandate the collection of a levy that supports browsers and other key internet infrastructure, which could be financed transparently under open, transnational, multistakeholder oversight.

Make Space To Grow

We need to stop thinking of internet infrastructure as too hard to fix. It’s the underlying system we use for nearly everything we do. The former prime minister of Sweden, Carl Bildt, and former Canadian deputy foreign minister, Gordon Smith, wrote in 2016 that the internet was becoming “the infrastructure of all infrastructure.” It’s how we organize, connect and build knowledge, even — perhaps — planetary intelligence. Right now, it’s concentrated, fragile and utterly toxic.

Ecologists have reoriented their field as a “crisis discipline,” a field of study that’s not just about learning things but about saving them. We technologists need to do the same. Rewilding the internet connects and grows what people are doing across regulation, standards-setting and new ways of organizing and building infrastructure, to tell a shared story of where we want to go. It’s a shared vision with many strategies. The instruments we need to shift away from extractive technological monocultures are at hand or ready to be built.

Permalink
June 19, 2024 at 8:58:56 PM GMT+2

Jonathan Haidt Talks His New Book ‘The Anxious Generation’https://www.noemamag.com/social-media-messed-up-our-kids-now-it-is-making-us-ungovernable/

  • Social Network
  • Societal Collapse
  • Attention Economy
  • Psychology

Social Media Messed Up Our Kids. Now It Is Making Us Ungovernable.

Jonathan Haidt talks to Noema about “The Anxious Generation” and how technology is upending democracy.

Interview | Digital Society | June 13, 2024

In a conversation with Noema editor-in-chief Nathan Gardels, the social psychologist Jonathan Haidt discusses the impact of social media on truth in politics, the mental health crisis of today’s youth, and what to do about it.

Nathan Gardels: For those who haven’t read your book, “The Anxious Generation,” can you summarize the main thesis?

Jonathan Haidt: It all begins with a mystery: Why is it that mental health statistics for American teenagers were pretty flat, with no sign of any problem, from the late ’90s through 2010 or 2011? That is true whether we look at depression, anxiety or self-harm. And then, all of a sudden, in 2012, it’s as though someone flipped a switch, and the girls began getting much more anxious, depressed and self-harming. It was true of boys too, but not as suddenly; the rise was more gradual in the early 2010s.

We first discovered this on college campuses, because the students who entered universities in 2014 and 2015 were very different from our stereotype of college students who want to have fun, who want to drink and party.

The students arriving in 2014 and 2015 were much more anxious. And they were especially triggered by words or jokes, speakers or books. It was that observation that led Greg Lukianoff to propose the hypothesis that college is doing something to kids to make them think in this distorted way. That was the basis of our book “The Coddling of the American Mind.”

But now it’s becoming clearer that what we saw and wrote about in that book wasn’t just happening to college students, but actually to all teenagers born after 1995. And it was not only observable in the U.S., Britain and Canada but a lot of other countries as well. What happened? Why was it so sudden? So that’s the mystery.

Was it some chemical dropped in the water supply all over North America and Northern Europe, along with the South Pacific? Or was it the massive change in the technological environment of childhood in all these countries simultaneously? This seemed the obvious hypothesis.

So, the first chapter of “The Anxious Generation” discusses what actually happened to teen mental health. And then the rest of the book seeks to unravel the mystery. It’s not just about “social media is destroying everybody.” It’s a more subtle and interesting story about the transformation of childhood — a tragedy that occurred in three acts.

Act I, which I only hinted at in the book, was the loss of community. So, if you look at America, especially in the years just after World War II, social capital was very high. The best way to make people trust each other is to have someone attack them from the outside — come together, fight a war and win. Social capital was very high in the U.S. in the 1940s and 1950s, and then it begins to drop over succeeding decades for many reasons.

Robert Putnam talked about this in “Bowling Alone.” You have smaller family sizes; people retreat inside because now they have air conditioning and TV and they’re not out in the front yard socializing as much. So, for a lot of reasons, we begin to lose trust in each other. We begin to lose social capital. That’s Act I of the tragedy.

Because of that, Act II happens, which is when we take away play-based childhood. Children used to always play together. It didn’t matter if it was raining or snowing, if there was a crime wave or drunk drivers, kids went out to play. Like all mammals, we evolved to play, in order to wire up our relatively large brains.

But in the ’90s, we decided it was too dangerous for kids to be out and about. They’ll get kidnapped or sexually abused, we thought, because we no longer trusted our neighbors. So, we locked our kids up out of fear of each other. In other words: overprotection. This is the coddling part.

Then, after losing strong communities and play-based childhoods, we’re ready for the third act in the tragedy: the massive, sudden transformation of childhood between 2010 and 2015 into a phone-based childhood.

In 2010, the vast majority of teens across the developed world had cell phones. But they were flip phones or basic phones, with no internet browser. All you could do with them was text and call; that was pretty much it, aside from some games. They weren’t for constant communication. And that’s good. Kids could text their friends and say, “Let’s meet up at 3 p.m.” It was a simple tool. There was very little high-speed internet then and no front-facing camera. There was Facebook, but no Instagram. That’s the way things were in 2010.

“All of a sudden, in 2012, it’s as though someone flipped a switch, and the girls began getting much more anxious, depressed and self-harming.”

In 2010, kids in the U.S. and other Anglo countries still had a recognizably human childhood. They would meet up in person, even if they now had less freedom to roam. By 2015, that all changed, when about 80% of those kids had a smartphone with a front-facing camera and a bunch of social media apps. So now we have the selfie culture. Almost everyone now has high-speed internet, and everyone can stream video.

In short, by 2015 we have what I call “the great rewiring of childhood.” And that’s why 2012, incidentally the year Facebook bought Instagram, was when online life changed, especially for girls, who flocked onto Instagram. And it was right after that when we first noticed the widespread upsurge in anxiety, depression and self-harm.

Gardels: The main criticism of your thesis is that you are mistaking correlation for cause and being too technologically determinist. How do you respond to that?

Haidt: First of all, my story is not just about technology, it is sociological. It’s a cultural psychology story. It’s about the change of childhood and human development.

To those who argue these changes could have been caused by any number of factors, I say a couple of things. First, whatever other factor you might think was more determinative, did that happen in New Zealand and Iceland and Australia all at the same time? No one can identify such a factor. Nobody has proposed an alternative theory that works internationally.

Second, it is true that the data is mostly correlational. If you have 300 correlational studies and 25 experimental studies, I would say the data is mostly correlational. The scientific debate has been focused on a very, very narrow question: Do the hours spent on social media tell you anything about the level of mental illness, especially depression and anxiety? There’s a clear correlation in these studies.

But we also have experimental studies, which I cite in the book. I go into great detail about the difference between correlation and causation. Every week, every month, we have more experiments indicating the causality of anxiety-inducing technology.

There are so many causal pathways by which a phone-based childhood harms different kids in different ways. Let me just take the example of sextortion, a very common crime online. There are international sextortion gangs that display avatars of beautiful, sexy young women. An avatar flirts with a boy that she finds, usually on Instagram. And then she convinces him to swap nude images. Boom. Then the sextortionist reveals himself, not as a sexy girl but as a man who now has all the content he needs to ruin you: “I’m going to show this picture of you and your penis to everyone, because I have all your contacts, unless you pay me $500 in two hours.”

The boys panic, and some of them have killed themselves because of the shame. The FBI has identified 20 suicides that were direct results of sextortion, which means there are probably hundreds of cases they didn’t catch, and far more kids who were traumatized by the experience and the shame. Now, is that just a correlation? Would these boys have killed themselves anyway, even if they had not been sextorted? I don’t think so.

Gardels: What are the specific remedies you propose for parents to protect their kids?

Haidt: The key to the whole book is understanding collective action problems, which are sometimes referred to as “the tragedy of the commons,” where each person acting in their own interest ends up bringing about an outcome that’s bad for everyone. If you’re the only one who doesn’t put your sheep out to graze, if you’re the only one who doesn’t fish in the pond, you suffer while everyone else continues to do what they’re doing.

One of the main reasons that we all are giving our kids phones now at age nine or 10 — it gets younger all the time — is because the kid comes home from school and says, “Mom, everyone else has an iPhone, I have to have an iPhone, or I’ll be left out.”

This is a collective action problem because any parent who does the right thing and says, “No, you’re not going to get one until you’re mostly done with puberty,” is imposing a cost on their child. All over the developed world now, family life has devolved into a struggle over screen time and phones. This is terrible. So, the trick is to realize we’re in this problem because everybody else is in this problem.

“All over the developed world now, family life has devolved into a struggle over screen time and phones.”

We’re so deep into this that it is very hard for any family to get out of it by themselves. Some parents are tough and just say “no,” but the status environment doesn’t change for the kids.

What I’m trying to do with the book is to say, if we team up with a few other families, if a small group of parents can get the whole school or school district to say “no,” then they escape and we can change the situation very, very quickly.

What we need is the adoption of four norms that can break the back of the collective action problem.

One: No smartphone before high school. Just keep it out of middle school. Let the kids at least get through early puberty, which is the most sensitive period. You can give them a flip phone if you absolutely need to text. I understand the need to coordinate.

Two: No social media before the age of 16. Social media is entirely inappropriate for children; it cannot be made appropriate, because what you’re basically doing is saying: “How about we let the entire world get in touch with you? Let’s let all the companies try to sell things to you, let men all over the world who want to have sex with you contact you, and try to trick you into sending photos.” There’s no way to make this safe. So just recognize that social media is a tool for adults. Eleven-year-olds don’t need to network with strangers.

Three: Schools need to be phone-free. Imagine if, when I was a kid growing up in the ’70s, we had been allowed to bring in our television sets and our radios, along with all sorts of toys and games, and to put them on our desks and use them during class. That’s what teachers are facing today. Disgusted and frustrated that they can’t get through to students, teachers are quitting.

Also, global test scores have been dropping since 2012. This did not begin with Covid; it began around 2012. The result is a massive destruction of human capital. So, it’s just kind of obvious: you can’t have kids carrying the greatest distraction device ever invented in their pockets while they’re in class. Kids feel they must check their phones during the day; if others are texting, they have to be texting back. So, just lock up the phones in the morning and give them back at the end of the day.

Four: We need to restore a play-based childhood. Kids need more independence, free play and responsibility in the real world. If you roll back the phone but don’t restore play, a child has no childhood. So, roll it back and, instead, give them adventure and fun with other kids.

We parents need to overcome our own fears and let our children learn how to play with each other. Kids playing in groups are very safe. That’s how they learn to get along. That’s how they’re going to resolve disputes in life.

If we do these four things I’m pretty confident that rates of mental illness will come down within two years. Experience so far shows that phone-free schools get great results within a month. In various childhood independence projects, you get results within a month. If any community does all four of these, I believe they’re going to see pretty big drops in depression, anxiety, self-harm and other problems in short order.

Gardels: Do you worry that more prosperous parents with the means and time to be attentive to their kids will follow your advice, while the less well-off, busy working two jobs with less time for their kids, won’t? That this will just create a greater gap in society?

Haidt: Yes, I do expect that it will begin this way, with the most educated and wealthy families. But I think it will spread quickly as parents begin to see and hear about the benefits. Also, I should note that the most educated families apply the most limits, whereas children in low socioeconomic status, single-parent, or Black or Hispanic families have one to two hours more screen time per day, so going phone-free will disproportionately help them.

Gardels: Implicit in your remarks is you don’t have any faith in the Instagrams or TikToks of the world to be able to regulate themselves so they do less harm?

“What we need is the adoption of four norms that can break the back of the collective action problem.”

Haidt: Right now, as long as you’re old enough to lie about your age, you can go to Pornhub. You can open 20 Instagram accounts, you can open TikTok accounts. The law says you have to be 13 to sign a contract with a company to give away your data without your parents’ knowledge. But the law is written in such a way that there’s no responsibility for the companies if they don’t know your real age. As long as they don’t know your real age, they can’t be held liable for serving you eating disorder content or sex and violence.

We’re talking about five to 10 companies here that own our children’s childhood. They have a lot more influence over our kids than we do in some ways. And they have no responsibility. They are literally protected from lawsuits by Section 230 of the Communications Decency Act, which shields them from liability for the content on their platforms.

This is a completely insane situation. And they’re making huge amounts of money. So no, I don’t expect them to do anything until they’re forced by legislation, or by enormous losses in court.

Gardels: Your book has obviously hit a chord with parents and with school authorities. Do you have any sense of how the TikTok crowd or kids themselves see it?

Haidt: When you survey kids who’ve been through this, it’s really hard to find members of Gen Z who are opposed to what I’m saying. In fact, I actually haven’t found any. They almost always say, “Yeah, you know, you’re right. This really messed us up. But, you know, what are you going to do? This is just the way things are, and I can’t quit because everyone else is on.” There’s just an extraordinary sense of fatalism. We don’t find any young people organizing to protect their rights to have these things. The older kids generally say, if we could get everyone off, we should do that.

Gardels: The Chinese cyberspace authorities have no qualms about imposing limits on social media. Here are the rules:

  • Children under 8: Can only use smart devices for 40 minutes per day and can only consume content about “elementary education, hobbies and interests, and liberal arts education”
  • Children aged 8 to 15: Can use their phone for no more than one hour per day
  • Children aged 16 to 17: Can use a handset for a maximum of two hours per day
  • Minor mode: Requires mobile devices, apps and app stores to have a built-in mode that would bar users under 18 from accessing the internet on mobile devices from 10 p.m. to 6 a.m.

Perhaps they will produce more mentally healthy kids?

Haidt: China is engaged in a battle with the United States for cultural and economic supremacy. Since our young people are giving away all of their available attention, there’s a good chance that they will be less creative and less productive. They don’t have any spare attention to actually do anything. I imagine that makes the Chinese government happy.

The worst single product for American children is TikTok. It sucks up more of their time, energy and attention than any other product. And it harms them. It doesn’t do anything good for them. TikTok has more influence over our kids than any other organization on the planet. So, there are many reasons to think that that is a danger not only to our kids, but to our country.

It seems the Chinese are doing the right thing by using their authoritarian system to reduce the damage to their own children.

Of course, authoritarian solutions are not right for us, but we can do similar things through democratic solutions, through community and civil society. One thing Tocqueville praised Americans for is that when something needs doing, say the townspeople need to build a bridge, they just do it. They don’t wait for the state like in France. They don’t wait for the King like in Britain. Americans come together as citizens, elect a leader, raise money and then they do it.

So, I’m hopeful that my book presents norms that we adopt ourselves, even if we never get any help from Congress or lawmakers. Doing it ourselves — in groups of parents organized around schools — is a very American solution to what I think is one of the largest problems facing America today.

“TikTok has more influence over our kids than any other organization on the planet.”

Gardels: To go back to the coddled generation argument. What do you make of all these kids in college today putting up barricades, occupying administration buildings protesting the war in Gaza?

Haidt: Most of the activism of the college kids has moved online. That tends to be very ineffective and creates a culture that is bad for activists. I put some research in the book showing that before 2010, being politically active was actually associated with better mental health. You were engaged, you were part of a group, you were energized. After 2010, activists, especially progressive activists, are the least happy people in the country. They are marinating in beliefs about oppressor versus victim and embracing the untruths of the coddled. That was certainly true until very recently.

Now it’s true these protests are in person. That’s at least better psychologically for them. They are physically present and interacting with others on campus.

Even so, I think there are signs that it’s different from previous generations. One is that the present protestors expect accommodation, often asking not to be punished for missing classes and for exams to be delayed. In other words, they expect a low cost to themselves. In previous periods of activism, civil disobedience meant that if you broke the law, you paid the consequences, to show how committed you were to the cause.

To be sure, today’s actions are communal, which is always very exciting. It’s not as though Gen Z is incapable of acting in person; though, I would point out, it’s overwhelmingly at the elite schools that this is happening.

Gardels: One of the reasons that we have such a paralyzed and polarized society is that the public square has virtually disappeared. Until social media turbocharged fragmentation, there was a common space where competing ideas could be contested in the full gaze of the body politic.

As the philosopher Byung-Chul Han has observed, the peer-to-peer connectivity of social media redirects the flow of communication. Information is spread without forming a public sphere. It is produced in private spaces and distributed to private spaces. The web does not create a public.

The possibility of arriving at a governing consensus through negotiation and compromise is being shattered by a cacophony of niche propagandists egging on their own siloed tribe of the faithful to engage in an endless partisan battle. Indeed, Renée DiResta at Stanford calls the niche ideologues “the new media goliaths” who have supplanted mainstream platforms in terms of influence.

In short, the digital media ecosystem is disempowering the public sphere.

In this sense, social media is not only messing up our kids but undermining the basis of democratic discourse.

Do you agree with that?

Haidt: Absolutely. In an article for the Atlantic in 2019, I made the case, basically along the lines of Han, that massive changes in information flows and the way we connect people change the fundamental ground within which our democratic institutions are operating. And it’s quite possible that we are now so far outside the operating range of these institutions that they will fail.

I’m extremely alarmed about the future of this country. If you read Federalist #10, the Founding Fathers, who were excellent social psychologists, were very afraid of the passions of the people. They didn’t want us to have a direct democracy. They wanted cooling mechanisms of deliberation through reason. The system of governance they devised, with its checks and balances, is really like a complicated clock that they thought could last a very long time precisely because it was realistic about human frailties. And they were right.

Then, all of a sudden, in the later post-war era, it all went awry: first with television, then the internet and now, especially, peer-to-peer media. With television, at least there were editors. Jonathan Rauch wrote an amazing book called “The Constitution of Knowledge,” both about the Constitution and how knowledge is constituted.

He discussed how we make knowledge in universities, science and medicine. But he also discussed the U.S. Constitution and how the community of knowledge makers is governed by certain rules and checks and balances. We developed editors, filters and other mechanisms to vet truth.

All that’s going away now. Or at least the institutions are so weakened as to be feeble. I’m very alarmed. And, at the same time, what’s replacing them are the sorts of peer-to-peer networks that you’re talking about.

“Until social media turbocharged fragmentation, there was a common space where competing ideas could be contested in the full gaze of the body politic.”

In the history of humanity, when you connect people, there could be disruptions. But in the long run, that’s good. It increases the flow of knowledge and increases creativity. You get more value when you connect people. So, the telephone was great, the postal system was great.

Social media is not like those earlier innovations. I think the best metaphor here is to imagine a public square in which people talk to each other. They debate ideas or put forth ideas that may not always be brilliant. They may not always be civil, but people can speak while others listen. Sometimes people are moved by persuasion or dissuasion.

I think the Founding Fathers assumed that’s about the best we can hope for. Imagine one day, and I’ll call it 2009, that all changes. There’s no more public square. Everything takes place in the center of the Roman Colosseum. The stands are full of people who are there to see blood. That’s what they came for. They don’t want to see the lion and the Christian making nice; they want the one to kill the other. That’s what Twitter is often like.

It all becomes performative and comes at a superfast pace. Just as television changed the way we are and made us into passive consumers, the central act in social media is posting, judging, criticizing and joining mobs. Donald Trump is the quintessential person who thrives in that environment. If not for Twitter, Trump never could have been president. So, when our politics moved into the Roman Colosseum, I think the Founding Fathers would have said, “Let’s just give up. There’s no way we can build a democracy in this environment.”

Gardels: Just as republics have historically created institutional checks and balances when too much power is concentrated in one place, so too don’t we need to foster checks and balances for an age when power is so distributed that the public sphere is disempowered?

What I have in mind are the citizens’ assemblies indicative of the public as a whole, which deliberate issues in a non-partisan environment and, outside the electoral sphere where partisans vie for power by any means necessary, are able to come to a consensus through pragmatic, common sense solutions?

Haidt: It’s possible to create these small artificial communities where you lock citizens away together for a week and have them discuss something. They work pretty well from what I know, and they come up with solutions. But it’s not clear to me how you could use that to run a country. The way people feel about, let’s say, Donald Trump has very little to do with some ascertainment of fact.

If you use the word power, then I’m a little bit confused. But I think I see what you’re getting at. If we change the word to authority, it is clearer to me. When I wrote “The Righteous Mind,” I was on the left then and really tried to understand conservatives. Reading conservative writings, especially Edmund Burke and Thomas Sowell, was really clarifying on the idea that we need institutions. We need religion, we need gods, even if it is not true. We need moral order and constraint.

The progressive impulse is to tear things down and make things new. The conservative impulse is to protect authority structures because we need them. Without them, we have chaos. Of course, there are times to tear things down. But I think during the 2010s everything has been torn down, to some extent. This is a time we need to build.

I am very concerned that there is no longer any source of authority. There is no trusted authority, there is no way to find consensus on truth. It seems that the truth-seeking mechanisms, including the courts, came up with the answer that the last presidential election in the U.S. was not stolen. But there’s no real way to spread that around to the large portion of society that believes that it was.

With AI coming in, the problem of the loss of authority is going to be magnified tenfold or even a hundredfold when anyone can create a video of anyone saying anything in that person’s voice. It’s going to be almost impossible to know what’s true. We’re in for a wild ride if we’re going to try to run a democratic republic with no real authority. My fear is that we will simply become ungovernable. I hope not, I hope we find a way to adapt to living in our world after the fall of the tower of Babel, the fall of common understandings and common language.

This interview was edited for brevity and clarity.

Permalink
June 18, 2024 at 8:56:27 PM GMT+2

Social media: a factory of political hostility?https://theconversation.com/reseaux-sociaux-la-fabrique-de-lhostilite-politique-230458

  • Social Network
  • Disinformation War
  • Police State
  • Fakeness

Réseaux sociaux : la fabrique de l’hostilité politique ?

Publié: 17 juin 2024, 15:21 CEST

Depuis quelques années, les réseaux sociaux comme Facebook et X (anciennement Twitter) sont devenus la cible d’accusations nombreuses : facteurs de diffusion de « fake news » à grande échelle, instruments de déstabilisation des démocraties par la Russie et la Chine, machines à capturer notre attention pour la vendre à des marchands de toutes sortes, théâtres d’un ciblage publicitaire toujours plus personnalisé et manipulateur, etc. En atteste le succès de documentaires et d’essais sur le coût humain, jugé considérable, des réseaux sociaux, comme The Social Dilemma sur Netflix.

L’un de ces discours, en particulier, rend les plates-formes digitales et leurs algorithmes responsables de l’amplification de l’hostilité en ligne et de la polarisation politique dans la société. Avec les discussions en ligne anonymes, affirment certains, n’importe qui serait susceptible de devenir un troll, c’est-à-dire une personne agressive, cynique et dépourvue de compassion, ou de se « radicaliser ».

Des travaux récents en sciences sociales quantitatives et en psychologie scientifique permettent toutefois d’apporter quelques correctifs à ce récit, excessivement pessimiste.

The importance of sociopolitical context and psychology

To begin with, several studies suggest that while individuals regularly experience political discussions that turn conflictual, this incivility is partly tied to psychological and socioeconomic factors that predate digital platforms.

In a large-scale cross-cultural study, we surveyed more than 15,000 people through representative panels in thirty very different nations (France, Iraq, Thailand, Pakistan, etc.) about their experiences of online conversations. Our first finding is that it is in the most economically unequal and least democratic countries (such as Turkey or Brazil) that individuals are most often the target of hostile invective from fellow citizens on social networks. This phenomenon evidently stems from the frustrations generated by societies that are more repressive of individual aspirations.

Our study further shows that the individuals who engage most in online hostility are also those most disposed to seek social status through risk-taking. This personality trait corresponds to an orientation toward dominance, that is, toward bending others to one's will (including through intimidation). In our cross-cultural data, we observe that individuals with such dominance traits are numerous in unequal and undemocratic countries. Independent analyses show, moreover, that dominance is a key element in the psychology of political conflict: among other things, it also predicts more sharing of "fake news" mocking or insulting political opponents online, and a greater appetite for offline political conflict.

Replicating an earlier study, we also find that these individuals motivated by status-seeking through risk-taking, who most readily admit to behaving in hostile ways online, are also more likely to interact aggressively or toxically in face-to-face discussions (the correlation between online and offline hostility is strong, on the order of β = 0.77).
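A note on the statistic, ours rather than the study's: β is a standardized regression coefficient, so for a simple regression on standardized variables it coincides with the Pearson correlation between the two hostility measures,

\beta = \frac{\operatorname{cov}(x, y)}{\sigma_x \, \sigma_y} = r_{xy} \approx 0.77,

meaning that people who report hostile behavior online overwhelmingly tend to report it offline as well.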

In short, online political hostility seems largely to be the product of particular personalities, made aggressive by the frustrations generated by unequal social contexts, which activate our tendency to see the world in terms of "us" versus "them." Politically, reducing wealth disparities between groups and making our institutions more democratic are probably unavoidable goals if we want to bring about a more harmonious Internet (and civil society).

Social networks: prisms that exaggerate ambient hostility

While our study places online political hostility in a broader context, it does not thereby deny platforms any role in producing political polarization.

Social networks allow a piece of content to be distributed identically to millions of people (unlike verbal communication, where distortion is inevitable). As such, they can misinform or anger millions of people at very low cost. This is true whether the false or toxic information is created deliberately to generate clicks or is the unintended product of a given political group's biases.


If exchanges on social networks often lack civility, it is also because of the possibility they offer of talking with anonymous, depersonalized strangers. This experience, unique to the Internet era, reduces the sense of personal responsibility as well as empathy toward interlocutors we no longer see as persons but as interchangeable members of political "tribes."

Recent analyses also remind us that social networks, much like journalism in many respects, operate less as a mirror than as a distorting prism of the diversity of opinions in society.

Indignant and potentially insulting political posts are often the work of people more determined to speak out, and more radical, than the average user, whether to signal their commitments, vent anger, proselytize, and so on. Even when such posts represent a fairly small share of what gets written on the networks, they are promoted by algorithms programmed to surface content capable of grabbing attention and triggering responses, and divisive messages fit that description.

Conversely, the majority of users, more moderate and less peremptory, are reluctant to wade into political discussions that rarely reward good-faith argument and often degenerate into "shitstorms" (that is, outpourings of hate).

These selection and perception biases produce the misleading impression that radical and hostile convictions are both more widespread and more morally tolerated than they actually are.

When exposure to difference irritates

That said, the use of social networks does seem able to contribute to increasing political hostility and radicalism through at least one mechanism: exposure to caricatured, aggressive versions of opposing political positions, which is irritating.

Contrary to a widespread belief, most of our online connections do not typically take the form of "echo chambers" isolating us in bubbles of entirely homogeneous political ideas.

Although some networks are indeed built that way (4Chan or certain subreddits), the largest platforms, Facebook (3 billion users) and X (550 million), typically scroll a certain diversity of opinions past our eyes. That diversity is in any case frequently greater than in our friendships: are you still regularly in touch with old school friends who have "gone over to the Front National"? Probably not, but you are more likely to read their Facebook posts.

This exposure to ideological otherness is desirable in theory, since it should reveal the blind spots in our political knowledge and convictions, and our common humanity, and thus make us both humbler and more respectful of one another. Unfortunately, the way most people express their political convictions, on social networks as at the coffee machine, is fairly devoid of nuance and pedagogy. It tends to reduce opposing positions to demonized caricatures, and it seeks less to persuade the other side than to galvanize those who already agree, or to win favor with political friends.

Drawing on experimental studies deployed on Twitter and on interviews with Democratic and Republican activists conducted with his team, the sociologist Chris Bail issues a warning in his book Breaking the Social Media Prism. In his account, repeated exposure to unconvincing, mocking content produced by our political enemies can paradoxically entrench partisans in their preexisting positions and identities rather than bring them intellectually and emotionally closer together.

However, this relationship between social network use and political polarization may depend heavily on exposure time, and it does not appear in every sample studied. Studies of what happens when people stop using Facebook and Instagram, for instance, detect no measurable polarization of users' political opinions attributable to these social media.

Let us always remember that discourses pointing to threats hanging over society enjoy a considerable competitive advantage in the marketplace of ideas and conversations, owing to their attractiveness to our minds. The links between social networks, hostility, and political polarization should therefore be approached with nuance, avoiding the symmetrical pitfalls of blissful optimism and collective panic.

Permalink
June 17, 2024 at 10:18:17 PM GMT+2

Pentagon ran secret anti-vax campaign to incite fear of China vaccines
https://www.reuters.com/investigates/special-report/usa-covid-propaganda/

  • Disinformation War
  • PsyOps
  • Social Network
  • Media Manipulation

Pentagon ran secret anti-vax campaign to undermine China during pandemic

The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk.

By CHRIS BING and JOEL SCHECTMAN | Filed June 14, 2024, 9:45 a.m. GMT

At the height of the COVID-19 pandemic, the U.S. military launched a secret campaign to counter what it perceived as China’s growing influence in the Philippines, a nation hit especially hard by the deadly virus.

The clandestine operation has not been previously reported. It aimed to sow doubt about the safety and efficacy of vaccines and other life-saving aid that was being supplied by China, a Reuters investigation found. Through phony internet accounts meant to impersonate Filipinos, the military’s propaganda efforts morphed into an anti-vax campaign. Social media posts decried the quality of face masks, test kits and the first vaccine that would become available in the Philippines – China’s Sinovac inoculation.

Reuters identified at least 300 accounts on X, formerly Twitter, that matched descriptions shared by former U.S. military officials familiar with the Philippines operation. Almost all were created in the summer of 2020 and centered on the slogan #Chinaangvirus – Tagalog for China is the virus.

“COVID came from China and the VACCINE also came from China, don’t trust China!” one typical tweet from July 2020 read in Tagalog. The words were next to a photo of a syringe beside a Chinese flag and a soaring chart of infections. Another post read: “From China – PPE, Face Mask, Vaccine: FAKE. But the Coronavirus is real.”

After Reuters asked X about the accounts, the social media company removed the profiles, determining they were part of a coordinated bot campaign based on activity patterns and internal data.

The U.S. military’s anti-vax effort began in the spring of 2020 and expanded beyond Southeast Asia before it was terminated in mid-2021, Reuters determined. Tailoring the propaganda campaign to local audiences across Central Asia and the Middle East, the Pentagon used a combination of fake social media accounts on multiple platforms to spread fear of China’s vaccines among Muslims at a time when the virus was killing tens of thousands of people each day. A key part of the strategy: amplify the disputed contention that, because vaccines sometimes contain pork gelatin, China’s shots could be considered forbidden under Islamic law.

The military program started under former President Donald Trump and continued months into Joe Biden’s presidency, Reuters found – even after alarmed social media executives warned the new administration that the Pentagon had been trafficking in COVID misinformation. The Biden White House issued an edict in spring 2021 banning the anti-vax effort, which also disparaged vaccines produced by other rivals, and the Pentagon initiated an internal review, Reuters found.

“I don’t think it’s defensible. I’m extremely dismayed, disappointed and disillusioned to hear that the U.S. government would do that.”

Daniel Lucey, infectious disease specialist at Dartmouth’s Geisel School of Medicine.

The U.S. military is prohibited from targeting Americans with propaganda, and Reuters found no evidence the Pentagon’s influence operation did so.

Spokespeople for Trump and Biden did not respond to requests for comment about the clandestine program.

A senior Defense Department official acknowledged the U.S. military engaged in secret propaganda to disparage China’s vaccine in the developing world, but the official declined to provide details.

A Pentagon spokeswoman said the U.S. military “uses a variety of platforms, including social media, to counter those malign influence attacks aimed at the U.S., allies, and partners.” She also noted that China had started a “disinformation campaign to falsely blame the United States for the spread of COVID-19.”

In an email, the Chinese Ministry of Foreign Affairs said that it has long maintained the U.S. government manipulates social media and spreads misinformation.

Manila’s embassy in Washington did not respond to Reuters inquiries, including whether it had been aware of the Pentagon operation. A spokesperson for the Philippines Department of Health, however, said the “findings by Reuters deserve to be investigated and heard by the appropriate authorities of the involved countries.” Some aid workers in the Philippines, when told of the U.S. military propaganda effort by Reuters, expressed outrage.

Briefed on the Pentagon’s secret anti-vax campaign by Reuters, some American public health experts also condemned the program, saying it put civilians in jeopardy for potential geopolitical gain. An operation meant to win hearts and minds endangered lives, they said.

“I don’t think it’s defensible,” said Daniel Lucey, an infectious disease specialist at Dartmouth’s Geisel School of Medicine. “I’m extremely dismayed, disappointed and disillusioned to hear that the U.S. government would do that,” said Lucey, a former military physician who assisted in the response to the 2001 anthrax attacks.

The effort to stoke fear about Chinese inoculations risked undermining overall public trust in government health initiatives, including U.S.-made vaccines that became available later, Lucey and others said. Although the Chinese vaccines were found to be less effective than the American-led shots by Pfizer and Moderna, all were approved by the World Health Organization. Sinovac did not respond to a Reuters request for comment.

Academic research published recently has shown that, when individuals develop skepticism toward a single vaccine, those doubts often lead to uncertainty about other inoculations. Lucey and other health experts say they saw such a scenario play out in Pakistan, where the Central Intelligence Agency used a fake hepatitis vaccination program in Abbottabad as cover to hunt for Osama bin Laden, the terrorist mastermind behind the attacks of September 11, 2001. Discovery of the ruse led to a backlash against an unrelated polio vaccination campaign, including attacks on healthcare workers, contributing to the reemergence of the deadly disease in the country.

“It should have been in our interest to get as much vaccine in people’s arms as possible,” said Greg Treverton, former chairman of the U.S. National Intelligence Council, which coordinates the analysis and strategy of Washington’s many spy agencies. What the Pentagon did, Treverton said, “crosses a line.”

‘We were desperate’

Together, the phony accounts used by the military had tens of thousands of followers during the program. Reuters could not determine how widely the anti-vax material and other Pentagon-planted disinformation was viewed, or to what extent the posts may have caused COVID deaths by dissuading people from getting vaccinated.

In the wake of the U.S. propaganda efforts, however, then-Philippines President Rodrigo Duterte had grown so dismayed by how few Filipinos were willing to be inoculated that he threatened to arrest people who refused vaccinations.

“You choose, vaccine or I will have you jailed,” a masked Duterte said in a televised address in June 2021. “There is a crisis in this country … I’m just exasperated by Filipinos not heeding the government.”

When he addressed the vaccination issue, the Philippines had among the worst inoculation rates in Southeast Asia. Only 2.1 million of its 114 million citizens were fully vaccinated – far short of the government’s target of 70 million. By the time Duterte spoke, COVID cases exceeded 1.3 million, and almost 24,000 Filipinos had died from the virus. The difficulty in vaccinating the population contributed to the worst death rate in the region.

A spokesperson for Duterte did not make the former president available for an interview.

Some Filipino healthcare professionals and former officials contacted by Reuters were shocked by the U.S. anti-vax effort, which they say exploited an already vulnerable citizenry. Public concerns about a dengue fever vaccine, rolled out in the Philippines in 2016, had led to broad skepticism toward inoculations overall, said Lulu Bravo, executive director of the Philippine Foundation for Vaccination. The Pentagon campaign preyed on those fears.

“Why did you do it when people were dying? We were desperate,” said Dr. Nina Castillo-Carandang, a former adviser to the World Health Organization and Philippines government during the pandemic. “We don’t have our own vaccine capacity,” she noted, and the U.S. propaganda effort “contributed even more salt into the wound.”

The campaign also reinforced what one former health secretary called a longstanding suspicion of China, most recently because of aggressive behavior by Beijing in disputed areas of the South China Sea. Filipinos were unwilling to trust China’s Sinovac, which first became available in the country in March 2021, said Esperanza Cabral, who served as health secretary under President Gloria Macapagal Arroyo. Cabral said she had been unaware of the U.S. military’s secret operation.

“I’m sure that there are lots of people who died from COVID who did not need to die from COVID,” she said.

To implement the anti-vax campaign, the Defense Department overrode strong objections from top U.S. diplomats in Southeast Asia at the time, Reuters found. Sources involved in its planning and execution say the Pentagon, which ran the program through the military’s psychological operations center in Tampa, Florida, disregarded the collateral impact that such propaganda may have on innocent Filipinos.

“We weren’t looking at this from a public health perspective,” said a senior military officer involved in the program. “We were looking at how we could drag China through the mud.”

A new disinformation war

In uncovering the secret U.S. military operation, Reuters interviewed more than two dozen current and former U.S officials, military contractors, social media analysts and academic researchers. Reporters also reviewed Facebook, X and Instagram posts, technical data and documents about a set of fake social media accounts used by the U.S. military. Some were active for more than five years.

Clandestine psychological operations are among the government’s most highly sensitive programs. Knowledge of their existence is limited to a small group of people within U.S. intelligence and military agencies. Such programs are treated with special caution because their exposure could damage foreign alliances or escalate conflict with rivals.

Over the last decade, some U.S. national security officials have pushed for a return to the kind of aggressive clandestine propaganda operations against rivals that the United States wielded during the Cold War. Following the 2016 U.S. presidential election, in which Russia used a combination of hacks and leaks to influence voters, the calls to fight back grew louder inside Washington.

In 2019, Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, Reuters reported in March. As part of that effort, a small group of operatives used bogus online identities to spread disparaging narratives about Xi Jinping’s government.

COVID-19 galvanized the drive to wage psychological operations against China. One former senior Pentagon leader described the pandemic as a “bolt of energy” that finally ignited the long delayed counteroffensive against China’s influence war.

The Pentagon’s anti-vax propaganda came in response to China’s own efforts to spread false information about the origins of COVID. The virus first emerged in China in late 2019. But in March 2020, Chinese government officials claimed without evidence that the virus may have been first brought to China by an American service member who participated in an international military sports competition in Wuhan the previous year. Chinese officials also suggested that the virus may have originated in a U.S. Army research facility at Fort Detrick, Maryland. There’s no evidence for that assertion.

Mirroring Beijing’s public statements, Chinese intelligence operatives set up networks of fake social media accounts to promote the Fort Detrick conspiracy, according to a U.S. Justice Department complaint.

China’s messaging got Washington’s attention. Trump subsequently coined the term “China virus” as a response to Beijing’s accusation that the U.S. military exported COVID to Wuhan.

“That was false. And rather than having an argument, I said, ‘I have to call it where it came from,’” Trump said in a March 2020 news conference. “It did come from China.”

China’s Foreign Ministry said in an email that it opposed “actions to politicize the origins question and stigmatize China.” The ministry had no comment about the Justice Department’s complaint.

Beijing didn’t limit its global influence efforts to propaganda. It announced an ambitious COVID assistance program, which included sending masks, ventilators and its own vaccines – still being tested at the time – to struggling countries. In May 2020, Xi announced that the vaccine China was developing would be made available as a “global public good,” and would ensure “vaccine accessibility and affordability in developing countries.” Sinovac was the primary vaccine available in the Philippines for about a year until U.S.-made vaccines became more widely available there in early 2022.

Washington’s plan, called Operation Warp Speed, was different. It favored inoculating Americans first, and it placed no restrictions on what pharmaceutical companies could charge developing countries for the remaining vaccines not used by the United States. The deal allowed the companies to “play hardball” with developing countries, forcing them to accept high prices, said Lawrence Gostin, a professor of medicine at Georgetown University who has worked with the World Health Organization.

The deal “sucked most of the supply out of the global market,” Gostin said. “The United States took a very determined America First approach.”

To Washington’s alarm, China’s offers of assistance were tilting the geopolitical playing field across the developing world, including in the Philippines, where the government faced upwards of 100,000 infections in the early months of the pandemic.

The U.S. relationship with Manila had grown tense after the 2016 election of the bombastic Duterte. A staunch critic of the United States, he had threatened to cancel a key pact that allows the U.S. military to maintain legal jurisdiction over American troops stationed in the country.

Duterte said in a July 2020 speech he had made “a plea” to Xi that the Philippines be at the front of the line as China rolled out vaccines. He vowed in the same speech that the Philippines would no longer challenge Beijing’s aggressive expansion in the South China Sea, upending a key security understanding Manila had long held with Washington.

“China is claiming it. We are claiming it. China has the arms, we do not have it,” Duterte said. “So, it is simple as that.”

Days later, China’s foreign minister announced Beijing would grant Duterte’s plea for priority access to the vaccine, as part of a “new highlight in bilateral relations.”

China’s growing influence fueled efforts by U.S. military leaders to launch the secret propaganda operation Reuters uncovered.

“We didn’t do a good job sharing vaccines with partners,” a senior U.S. military officer directly involved in the campaign in Southeast Asia told Reuters. “So what was left to us was to throw shade on China’s.”

Military trumped diplomats

U.S. military leaders feared that China’s COVID diplomacy and propaganda could draw other Southeast Asian countries, such as Cambodia and Malaysia, closer to Beijing, furthering its regional ambitions.

A senior U.S. military commander responsible for Southeast Asia, Special Operations Command Pacific General Jonathan Braga, pressed his bosses in Washington to fight back in the so-called information space, according to three former Pentagon officials.

The commander initially wanted to punch back at Beijing in Southeast Asia. The goal: to ensure the region understood the origin of COVID while promoting skepticism toward what were then still-untested vaccines offered by a country that they said had lied continually since the start of the pandemic.

A spokesperson for Special Operations Command declined to comment.

At least six senior State Department officials responsible for the region objected to this approach. A health crisis was the wrong time to instill fear or anger through a psychological operation, or psyop, they argued during Zoom calls with the Pentagon.

“We’re stooping lower than the Chinese and we should not be doing that,” said a former senior State Department official for the region who fought against the military operation.

While the Pentagon saw Washington’s rapidly diminishing influence in the Philippines as a call to action, the withering partnership led American diplomats to plead for caution.

“The relationship is hanging from a thread,” another former senior U.S. diplomat recounted. “Is this the moment you want to do a psyop in the Philippines? Is it worth the risk?”

In the past, such opposition from the State Department might have proved fatal to the program. Previously in peacetime, the Pentagon needed approval of embassy officials before conducting psychological operations in a country, often hamstringing commanders seeking to quickly respond to Beijing’s messaging, three former Pentagon officials told Reuters.

But in 2019, before COVID surfaced in full force, then-Secretary of Defense Mark Esper signed a secret order that later paved the way for the launch of the U.S. military propaganda campaign. The order elevated the Pentagon’s competition with China and Russia to the priority of active combat, enabling commanders to sidestep the State Department when conducting psyops against those adversaries. The Pentagon spending bill passed by Congress that year also explicitly authorized the military to conduct clandestine influence operations against other countries, even “outside of areas of active hostilities.”

Esper, through a spokesperson, declined to comment. A State Department spokesperson referred questions to the Pentagon.

U.S. propaganda machine

In spring 2020, special-ops commander Braga turned to a cadre of psychological-warfare soldiers and contractors in Tampa to counter Beijing’s COVID efforts. Colleagues say Braga was a longtime advocate of increasing the use of propaganda operations in global competition. In trailers and squat buildings at a facility on Tampa’s MacDill Air Force Base, U.S. military personnel and contractors would use anonymous accounts on X, Facebook and other social media to spread what became an anti-vax message. The facility remains the Pentagon’s clandestine propaganda factory.

Psychological warfare has played a role in U.S. military operations for more than a hundred years, although it has changed in style and substance over time. So-called psyopers were best known following World War II for their supporting role in combat missions across Vietnam, Korea and Kuwait, often dropping leaflets to confuse the enemy or encourage their surrender.

After the al Qaeda attacks of 2001, the United States was fighting a borderless, shadowy enemy, and the Pentagon began to wage a more ambitious kind of psychological combat previously associated only with the CIA. The Pentagon set up front news outlets, paid off prominent local figures, and sometimes funded television soap operas in order to turn local populations against militant groups or Iranian-backed militias, former national security officials told Reuters.

Unlike earlier psyop missions, which sought specific tactical advantage on the battlefield, the post-9/11 operations hoped to create broader change in public opinion across entire regions.

By 2010, the military began using social media tools, leveraging phony accounts to spread messages of sympathetic local voices – themselves often secretly paid by the United States government. As time passed, a growing web of military and intelligence contractors built online news websites to pump U.S.-approved narratives into foreign countries. Today, the military employs a sprawling ecosystem of social media influencers, front groups and covertly placed digital advertisements to influence overseas audiences, according to current and former military officials.

China’s efforts to gain geopolitical clout from the pandemic gave Braga justification to launch the propaganda campaign that Reuters uncovered, sources said.

Pork in the vaccine?

By summer 2020, the military’s propaganda campaign moved into new territory and darker messaging, ultimately drawing the attention of social media executives.

In regions beyond Southeast Asia, senior officers in the U.S. Central Command, which oversees military operations across the Middle East and Central Asia, launched their own version of the COVID psyop, three former military officials told Reuters.

Although the Chinese vaccines were still months from release, controversy roiled the Muslim world over whether the vaccines contained pork gelatin and could be considered “haram,” or forbidden under Islamic law. Sinovac has said that the vaccine was “manufactured free of porcine materials.” Many Islamic religious authorities maintained that even if the vaccines did contain pork gelatin, they were still permissible since the treatments were being used to save human life.

The Pentagon campaign sought to intensify fears about injecting a pig derivative. As part of an internal investigation at X, the social media company used IP addresses and browser data to identify more than 150 phony accounts that were operated from Tampa by U.S. Central Command and its contractors, according to an internal X document reviewed by Reuters.
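Reuters gives no further technical detail on that internal investigation. As a purely illustrative sketch of the underlying idea (accounts that repeatedly surface from the same network infrastructure are candidates for a single operator), here is a minimal Python example with hypothetical data; this is not X's actual pipeline:

from collections import defaultdict

# Hypothetical login records: (account, ip_address). Real investigations
# combine many more signals (browser fingerprints, creation times, etc.).
logins = [
    ("acct_1", "203.0.113.7"),
    ("acct_2", "203.0.113.7"),
    ("acct_3", "198.51.100.4"),
    ("acct_4", "203.0.113.7"),
]

# Group accounts by the IP address they logged in from.
accounts_by_ip = defaultdict(set)
for account, ip in logins:
    accounts_by_ip[ip].add(account)

# Flag IPs behind an unusually large cluster of accounts.
for ip, accounts in accounts_by_ip.items():
    if len(accounts) > 2:  # arbitrary threshold for this illustration
        print(f"{ip}: {sorted(accounts)} may share one operator")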

“Can you trust China, which tries to hide that its vaccine contains pork gelatin and distributes it in Central Asia and other Muslim countries where many people consider such a drug haram?” read an April 2021 tweet sent from a military-controlled account identified by X.

The Pentagon also covertly spread its messages on Facebook and Instagram, alarming executives at parent company Meta who had long been tracking the military accounts, according to former military officials.

One military-created meme targeting Central Asia showed a pig made out of syringes, according to two people who viewed the image. Reuters found similar posts that traced back to U.S. Central Command. One shows a Chinese flag as a curtain separating Muslim women in hijabs and pigs stuck with vaccine syringes. In the center is a man with syringes; on his back is the word “China.” It targeted Central Asia, including Kazakhstan, Kyrgyzstan and Uzbekistan, a country that distributed tens of millions of doses of China’s vaccines and participated in human trials. Translated into English, the X post reads: “China distributes a vaccine made of pork gelatin.”

Facebook executives had first approached the Pentagon in the summer of 2020, warning the military that Facebook workers had easily identified the military’s phony accounts, according to three former U.S. officials and another person familiar with the matter. The government, Facebook argued, was violating Facebook’s policies by operating the bogus accounts and by spreading COVID misinformation.

The military argued that many of its fake accounts were being used for counterterrorism and asked Facebook not to take down the content, according to two people familiar with the exchange. The Pentagon pledged to stop spreading COVID-related propaganda, and some of the accounts remained active on Facebook.

Nonetheless, the anti-vax campaign continued into 2021 as Biden took office.

Angered that military officials had ignored their warning, Facebook officials arranged a Zoom meeting with Biden’s new National Security Council shortly after the inauguration, Reuters learned. The discussion quickly became tense.

“It was terrible,” said a senior administration official describing the reaction after learning of the campaign’s pig-related posts. “I was shocked. The administration was pro-vaccine and our concern was this could affect vaccine hesitancy, especially in developing countries.”

By spring 2021, the National Security Council ordered the military to stop all anti-vaccine messaging. “We were told we needed to be pro-vaccine, pro all vaccines,” said a former senior military officer who helped oversee the program. Even so, Reuters found some anti-vax posts that continued through April and other deceptive COVID-related messaging that extended into that summer. Reuters could not determine why the campaign didn’t end immediately with the NSC’s order. In response to questions from Reuters, the NSC declined to comment.

The senior Defense Department official said that those complaints led to an internal review in late 2021, which uncovered the anti-vaccine operation. The probe also turned up other social and political messaging that was “many, many leagues away” from any acceptable military objective. The official would not elaborate.

The review intensified the following year, the official said, after a group of academic researchers at Stanford University flagged some of the same accounts as pro-Western bots in a public report. The high-level Pentagon review was first reported by the Washington Post, which also reported that the military used fake social media accounts to counter China’s message that COVID came from the United States. But the Post report did not reveal that the program evolved into the anti-vax propaganda campaign uncovered by Reuters.

The senior defense official said the Pentagon has rescinded parts of Esper’s 2019 order that allowed military commanders to bypass the approval of U.S. ambassadors when waging psychological operations. The rules now mandate that military commanders work closely with U.S. diplomats in the country where they seek to have an impact. The policy also restricts psychological operations aimed at “broad population messaging,” such as those used to promote vaccine hesitancy during COVID.

The Pentagon’s audit concluded that the military’s primary contractor handling the campaign, General Dynamics IT, had employed sloppy tradecraft, taking inadequate steps to hide the origin of the fake accounts, said a person with direct knowledge of the review. The review also found that military leaders didn’t maintain enough control over its psyop contractors, the person said.

A spokesperson for General Dynamics IT declined to comment.

Nevertheless, the Pentagon’s clandestine propaganda efforts are set to continue. In an unclassified strategy document last year, top Pentagon generals wrote that the U.S. military could undermine adversaries such as China and Russia using “disinformation spread across social media, false narratives disguised as news, and similar subversive activities [to] weaken societal trust by undermining the foundations of government.”

And in February, the contractor that worked on the anti-vax campaign – General Dynamics IT – won a $493 million contract. Its mission: to continue providing clandestine influence services for the military.

Permalink
June 15, 2024 at 1:13:18 PM GMT+2

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet - ABC News
https://www.abc.net.au/news/science/2024-02-28/twitter-x-fighting-bot-problem-as-ai-spam-floods-the-internet/103498070

  • Social Network
  • Artificial intelligence
  • Societal Collapse

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet

ABC Science / By technology reporter James Purtill

Parts of the web are now dominated by bots and junk websites designed to go unread by humans.

One morning in January this year, marine scientist Terry Hughes opened X (formerly Twitter) and searched for tweets about the Great Barrier Reef.

"I keep an eye on what's being tweeted about the reef every day," Professor Hughes, a leading coral researcher at James Cook University, said.

What he found that day surprised and confused him: hundreds of bot accounts tweeting the same strange message with slightly different wording.

"Wow, I had no idea that agricultural runoff could have such a devastating impact on the Great Barrier Reef," one account, which otherwise spruiked cryptocurrencies, tweeted.

Another crypto bot wrote: "Wow, it's disheartening to hear about the water pollution challenges Australia faces."

And so on. Hundreds of crypto accounts tweeting about agricultural runoff.

A month later, it happened again. This time, bots were tweeting about "marine debris" threatening the Great Barrier Reef.

What was going on?

When Professor Hughes tweeted what he'd found, some saw a disinformation conspiracy, an attempt to deflect attention from climate change.

The likely answer, however, is more mundane, but also more far-reaching in its implications.

More than a year since Elon Musk bought X with promises to get rid of the bots, the problem is worse than ever, experts say.

And this is one example of a broader problem affecting online spaces.

The internet is filling up with "zombie content" designed to game algorithms and scam humans.

It's becoming a place where bots talk to bots, and search engines crawl a lonely expanse of pages written by artificial intelligence (AI).

Junk websites clog up Google search results. Amazon is awash with nonsense e-books. YouTube has a spam problem.

And this is just a trickle in advance of what's been called the "great AI flood".

Bots liking bots, talking to other bots

But first, let's get back to those reef-tweetin' bots.

Timothy Graham, an expert on X bot networks at the Queensland University of Technology, ran the tweets through a series of bot and AI detectors.

Dr Graham found 100 per cent of the text was AI-generated.

"Overall, it appears to be a crypto bot network using AI to generate its content," he said.

"I suspect that at this stage it's just trying to recruit followers and write content that will age the fake accounts long enough to sell them or use them for another purpose."

That is, the bots probably weren't being directed to tweet about the reef in order to sway public opinion.

Dr Graham suspects these particular bots probably have no human oversight, but are carrying out automated routines intended to out-fox the bot-detection algorithms.
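The article does not name the detectors Dr Graham used. One simple coordination signal such tools can draw on is near-duplicate wording across accounts, as in the reef tweets above; the following minimal sketch, using only Python's standard library and made-up tweets, flags suspiciously similar pairs:

from difflib import SequenceMatcher
from itertools import combinations

# Made-up tweets echoing the pattern described in the article.
tweets = {
    "acct_a": "Wow, I had no idea that agricultural runoff could devastate the Great Barrier Reef.",
    "acct_b": "Wow, it's sad that agricultural runoff can devastate the Great Barrier Reef!",
    "acct_c": "Just bought more crypto, to the moon!",
}

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the strings are identical.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.7  # arbitrary cut-off for this illustration
for (u1, t1), (u2, t2) in combinations(tweets.items(), 2):
    score = similarity(t1, t2)
    if score >= THRESHOLD:
        print(f"possible coordination: {u1} <-> {u2} (similarity {score:.2f})")

Real bot detection layers many more signals (posting cadence, account age, follower graphs) on top of text similarity.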

Searching for meaning in their babble was often pointless, he said.

"[Professor Hughes] is trying to interpret it and is quite right to try and make sense of it, but it just chews up attention, and the more engagement they get, the more they are rewarded.

The cacophony of bot-talk degrades the quality of online conversations. They interrupt the humans and waste their time.

"Here's someone who is the foremost research scientist in this space, spending their time trying to work out the modus operandi of these accounts."

In this case, the bots were replying to the tweet of another bot, which, in turn, replied to the tweets of other bots, and so on.

One fake bot account was stacked on top of the other, Dr Graham said.

"It's AI bots all the way down."

How bad is X's bot problem?

In January, a ChatGPT glitch appeared to shine a light on X's bot problem.

For a brief time, some X accounts posted ChatGPT's generic response to requests that it deems outside of its content policy, exposing them as bots that use ChatGPT to generate content.

Users posted videos showing scrolling feeds with numerous accounts stating "I'm sorry, but I cannot provide a response to your request as it goes against OpenAI's content policy."

"Twitter is a ghost town," one user wrote.

But the true scale of X's bot problem is difficult for outsiders to estimate.

Shortly after Mr Musk gained control of X while complaining about bots, X shut down free access to the programming interface that allowed researchers to study this problem.

That left researchers with two options: pay X for access to its data or find another way to peek inside.

Towards the end of last year, Dr Graham and his colleagues at QUT paid X $7,800 from a grant fund to analyse 1 million tweets surrounding the first Republican primary debate.

They found the bot problem was worse than ever, Dr Graham said at the time.

Later studies support this conclusion. Over three days in February, cybersecurity firm CHEQ tracked the proportion of bot traffic from X to its clients' websites.

It found three-quarters of traffic from X was fake, compared to less than 3 per cent of traffic from each of TikTok, Facebook and Instagram.

"Terry Hughes' experience is an example of what's going on on the platform," Dr Graham said.

"One in 10 likes are from a porn bot, anecdotally."

The rise of a bot-making industry

So what's the point of all these bots? What are they doing?

Crypto bots drive up demand for certain coins, porn bots get users to pay for porn websites, disinformation bots peddle fake news, astroturfing bots give the impression of public support, and so on.

Some bots exist purely to increase the follower counts and engagement statistics of paying customers.

A sign of the scale of X's bot problem is the thriving industry in bot-making.

Bot makers from around the world advertise their services on freelancer websites.

Awais Yousaf, a computer scientist in Pakistan, sells "ChatGPT Twitter bots" for $30 to $500, depending on their complexity.

In an interview with the ABC, the 27-year-old from Gujranwala said he could make a "fully fledged" bot that could "like comments on your behalf, make comments, reply to DMs, or even make engaging content according to your specification".

Mr Yousaf's career tracks the rise of the bot-making economy and successive cycles of internet hype.

Having graduated from university five years ago, he joined Pakistan's growing community of IT freelancers from "very poor backgrounds".

Many of the first customers wanted bots to promote cryptocurrencies, which were booming in popularity at the time.

"Then came the NFT thing," he said.

A few years ago he heard about OpenAI's GPT3 language model and took a three-month break to learn about AI.

"Now, almost 90 per cent of the bots I do currently are related to AI in one way or another.

"It can be as simple as people posting AI posts regarding fitness, regarding motivational ideas, or even cryptocurrency predictions."

In five years he's made 120 Twitter bots.

Asked about Mr Musk's promise to "defeat the spam bots," Mr Yousaf smiled.

"It's hard to remove Twitter bots from Twitter because Twitter is mostly bot."

AI-generated spam sites may overwhelm search engines

X's bot problem may be worse than other major platforms, but it's not alone.

A growing "deluge" of AI content is flooding platforms that were "never designed for a world where machines can talk with people convincingly", Dr Graham said.

"It's like you're running a farm and had never heard of a wolf before and then suddenly you have new predators on the scene.

"The platforms have no infrastructure in place. The gates are open."

The past few months have seen several examples of this.

Companies are using AI to rewrite other media outlets' stories, including the ABC's, and then publish them on their own competing news websites.

A company called Byword claims it stole 3.6 million in "total traffic" from a competitor by copying their site and rewriting 1,800 articles using AI.

"Obituary pirates" are using AI to create YouTube videos of people summarising the obituaries of strangers, sometimes fabricating details about their deaths, in order to capture search traffic.

Authors are reporting what appear to be AI-generated imitations and summaries of their books on Amazon.

Google's search results are getting worse due to spam sites, according to a recent pre-print study by German researchers.

The researchers studied search results for thousands of product-review terms across Google, Bing and DuckDuckGo over the course of a year.

They found that higher-ranked pages tended to have lower text quality but were better designed to game the search ranking algorithm.

"Search engines seem to lose the cat-and-mouse game that is SEO spam," they wrote in the study.

Co-author Matti Wiegman, from Bauhaus University Weimar, said this rankings war was likely to get much worse with the advent of AI-generated spam.

"What was previously low-quality content is now very difficult to distinguish from high-quality content," he said.

"As a result, it might become difficult to distinguish between authentic and trustworthy content that is useful and content that is not."

He added that the long-term effects of AI-generated content on search engines were difficult to judge.

AI-generated content could make search more useful, he said.

"One possible direction is that generated content will become better than the low-quality human-made content that dominates some genres in web search, in which case the search utility will increase."

Or the opposite will happen. AI-generated content will overwhelm "vulnerable spaces" such as search engines and "broadcasting-style" social media platforms like X.

In their place, people may turn to "walled gardens" and specialised forums with smaller numbers of human-only members.

Platforms prepare for coming flood

In response to this emerging problem, platforms are trialling different strategies.

Meta recently announced it was building tools to detect and label AI-generated images posted on its Facebook, Instagram and Threads services.

Amazon has limited authors to uploading a maximum of three books to its store each day, although authors say that hasn't solved the problem.

X is trialling a "Not a Bot" program in some countries where it charges new users $1 per year for basic features.

This program operates alongside X's verification system, where users pay $8 per month to have their identity checked and receive a blue tick.

But it appears the bot-makers have found a way around this.

All the reef-tweeting crypto bots Professor Hughes found were verified accounts.

"It's clutter on the platform that's not necessary. You'd wish they'd clean it up," the coral scientist said.

"It wastes everyone's time."

Permalink
March 5, 2024 at 11:06:38 PM GMT+1

The end of the social network
https://www.economist.com/leaders/2024/02/01/the-end-of-the-social-network

  • Social Network
  • Societal Collapse
  • Big Corpo

The end of the social network

The Economist

Editor’s note, February 2nd 2024: This article was updated after Meta announced earnings results for 2023.

Facebook may be turning 20 on February 4th, but it is just as much of a magnet for controversy and cash today as when it was a brash, break-everything teenager. On January 31st Mark Zuckerberg, the social network’s founder, was harangued by American senators over the spread of harmful material. The next day he announced another set of glittering results for Meta, Facebook’s parent company, which is now valued at $1.2trn. Yet even as social media reliably draw vast amounts of attention from addicts and critics alike, they are undergoing a profound but little-noticed transformation.

The weird magic of online social networks was to combine personal interactions with mass communication. Now this amalgam is splitting in two again. Status updates from friends have given way to videos from strangers that resemble a hyperactive TV. Public posting is increasingly migrating to closed groups, rather like email. What Mr Zuckerberg calls the digital “town square” is being rebuilt—and posing problems.

This matters, because social media are how people experience the internet. Facebook itself counts more than 3bn users. Social apps take up nearly half of mobile screen time, which in turn consumes more than a quarter of waking hours. They gobble up 40% more time than they did in 2020, as the world has gone online. As well as being fun, social media are the crucible of online debate and a catapult for political campaigns. In a year when half the world heads to the polls, politicians from Donald Trump to Narendra Modi will be busy online.

The striking feature of the new social media is that they are no longer very social. Inspired by TikTok, apps like Facebook increasingly serve a diet of clips selected by artificial intelligence according to a user’s viewing behaviour, not their social connections. Meanwhile, people are posting less. The share of Americans who say they enjoy documenting their life online has fallen from 40% to 28% since 2020. Debate is moving to closed platforms, such as WhatsApp and Telegram.

The lights have gone out in the town square. Social media have always been opaque, since every feed is different. But TikTok, a Chinese-owned video phenomenon, is a black box to researchers. Twitter, rebranded as X, has published some of its code but tightened access to data about which tweets are seen. Private messaging groups are often fully encrypted.

Some of the consequences of this are welcome. Political campaigners say they have to tone down their messages to win over private groups. A provocative post that attracts “likes” in the X bear pit may alienate the school parents’ WhatsApp group. Posts on messaging apps are ordered chronologically, not by an engagement-maximising algorithm, reducing the incentive to sensationalise. In particular, closed groups may be better for the mental health of teenagers, who struggled when their private lives were dissected in public.

In the hyperactive half of social media, behaviour-based algorithms will bring you posts from beyond your community. Social networks can still act as “echo chambers” of self-reinforcing material. But a feed that takes content from anywhere at least has the potential to spread the best ideas farthest.

Yet this new world of social-media brings its own problems. Messaging apps are largely unmoderated. For small groups, that is good: platforms should no more police direct messages than phone companies should monitor calls. In dictatorships encrypted chats save lives. But Telegram’s groups of 200,000 are more like unregulated broadcasts than conversations. Politicians in India have used WhatsApp to spread lies that would surely have been removed from an open network like Facebook.

As people move to closed groups, the open networks left behind are less useful because of the decline in public posting. During the covid-19 pandemic, scientists and doctors contributed to an online debate which contained real insight as well as misinformation. Open-source intelligence flowed when Russia invaded Ukraine. Today those conversations are disappearing or moving to closed channels, slowing the spread of ideas. The people still weighing in on the public networks, meanwhile, are disproportionately male and likely to describe themselves as very left- or right-wing: bores, in plain English.

What’s more, the open-network algorithms driven by users’ behaviour seem primed to spread the spiciest videos. For something to go viral on a social network, people had to choose to share it. Now they endorse it simply by watching, as the algorithm rewards content that attracts the most engagement. Deliberate curation has been replaced by a system that taps straight into the id. Provocateurs like Mr Trump or Nayib Bukele, the favourite in this week’s election in El Salvador, stand to benefit, as do misinformation merchants. Platforms say they are better at weeding out fakes. Taylor Swift, the latest high-profile victim of a deepfake, might disagree.
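As a toy illustration of that shift (ours, not any platform's actual ranking function): share-driven virality requires an explicit user action, whereas engagement-driven ranking treats passive watch time as an endorsement.

# Hypothetical posts; the numbers are invented for the example.
posts = [
    {"id": "calm_explainer", "shares": 40, "watch_seconds": 2_000},
    {"id": "outrage_clip",   "shares": 5,  "watch_seconds": 90_000},
]

def share_score(post) -> float:
    # Old model: content spreads only when users actively choose to share it.
    return post["shares"]

def engagement_score(post) -> float:
    # New model: watch time dominates, so merely watching boosts a post.
    return post["shares"] * 10 + post["watch_seconds"] / 60

print(max(posts, key=share_score)["id"])       # -> calm_explainer
print(max(posts, key=engagement_score)["id"])  # -> outrage_clip

Under the share metric the calm post wins; once watch time is counted, the provocative clip dominates, which is the dynamic the article describes.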

More urgent even than the rise of fake news is a lack of the real sort. Mr Zuckerberg once said he wanted Facebook to be like a personalised newspaper. But since the network’s pivot to entertainment, news makes up only 3% of what people see on it. Across social media only 19% of adults share news stories weekly, down from 26% in 2018. Publications like BuzzFeed News, which relied on social distribution, have perished. That is their lookout (and ours). But it is everyone’s problem when nearly half of young people say that, just as the platforms decide news is no longer interesting, social media are their main source of news.

WhatsApp and WhatsDown

Some people argue that social networks’ defects can be fixed by better governance, clever coding or a different business model. Such things can help. But the problems raised by the new generation of apps suggest that social media’s flaws are also the result of the trade-offs built into human communication. When platforms swing back towards private groups, they inevitably have less oversight. When people escape their echo chambers, they may well face more extreme content. When users embrace harmless entertainment, they see less news. As social networks wither, platform operators and users should devote less time to the old battles and more to grappling with the new.

Permalink
February 5, 2024 at 8:27:15 PM GMT+1

The Teen Mental Illness Epidemic is International: The Anglosphere
https://www.afterbabel.com/p/international-mental-illness-part-one

  • Social Network
  • Societal Collapse

It is now widely accepted that an epidemic of mental illness began among American teens in the early 2010s. What caused it? Many commentators point to events in the USA around that time, such as a particularly horrific school shooting in 2012. But if the epidemic started in many nations at the same time, then such country-specific theories would not work. We’d need to find a global event or trend, and the 2008 Global Financial Crisis doesn’t match the timing at all, as Jean Twenge and I have shown.

In our 2018 book The Coddling of the American Mind, Greg Lukianoff and I presented evidence that the same trends were happening in Canada and the United Kingdom—not just the rise in depression and anxiety, but also the overprotection of children, the rise of “safetyism,” and the shouting down of speakers on university campuses when students deemed the speaker to be “harmful.” It seemed that all the Anglo nations were setting up their children for failure in the same ways at the same time.

In 2019 I went on a speaking tour of Australia and New Zealand (thanks to Think Inc) and discovered that the same trends were coming, more slowly, to the Southern hemisphere Anglo nations too. I then created two new collaborative review documents, one for Australia, and one for New Zealand, to gather all the empirical studies and journalistic accounts I could find.

In early 2020, just as COVID swept in, I hired Zach Rausch as a research assistant to help me handle these collaborative review docs. I connected with Zach, who was finishing his master’s degree in Psychological Science at SUNY New Paltz, after reading his blog post about his research related to The Coddling. Zach and I created many more collaborative review docs for many more countries and topics, which you can find (and add to) on this page.

In 2021, as I was beginning to work on the Babel project, I told Zach that it was urgent that we figure out just how international the mental illness epidemic was. Is it just happening in the five countries of what is sometimes called The Anglosphere? Is it all Western countries? Is it happening everywhere? Go figure it out and report back to me.

The rest of this post is Part 1 of Zach’s report, in his voice. I think his findings are momentous and should cause an immediate global rethinking of what children need to have a healthy childhood, and what obstacles to development arrived around the globe around 2012.

Permalink
January 14, 2024 at 4:30:18 PM GMT+1

The Teen Mental Illness Epidemic is International, Part 2: The Nordic Nations
https://www.afterbabel.com/p/international-mental-illness-part-two

  • Social Network
  • Societal Collapse
  • Big Corpo

Today we have Zach’s second post on how teen mental health is changing around the world. In his last post, Zach showed that there is a four-part pattern in the data from all five of the main “Anglosphere” countries. Today he presents the studies he has collected on teen mental health in the five Nordic nations (the three Scandinavian countries of Sweden, Denmark, and Norway, plus Finland and Iceland).

The Nordic nations differ in many ways from the Anglosphere countries. In particular, they have lower levels of some of the social pathologies that some have said might explain the rising levels of mental illness in the USA such as income inequality, a hyper-competitive neoliberal economy, school shootings, and other gun violence. The Nordic countries also seem to do a lot less of the “coddling” and paranoid overprotection that is rampant in the USA, Canada, and the UK (though not so common down under).

And yet, as you’ll see, the basic pattern largely holds. I think these two posts from Zach are tremendously important, for they show us that the teen mental illness epidemic is international. We need to be looking for causes that can explain the international pattern, even as we know there are also additional causes and cultural interactions specific to each country.

Permalink
January 14, 2024 at 4:00:26 PM GMT+1

Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale
https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/

  • Artificial intelligence
  • Datafication
  • Social Network
  • Societal Collapse
  • Pornography
  • NSFW

Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale

Emanuel Maiberg Aug 22, 2023

Generative AI tools have empowered amateurs and entrepreneurs to build mind-boggling amounts of non-consensual porn.

On CivitAI, a site for sharing image generating AI models, users can browse thousands of models that can produce any kind of pornographic scenario they can dream of, trained on real images of real people scraped without consent from every corner of the internet.

The “Erect Horse Penis - Concept LoRA,” an image generating AI model that instantly produces images of women with erect horse penises as their genitalia, has been downloaded 16,000 times, and has an average score of five out of five stars, despite criticism from users.

“For some reason adding ‘hands on hips’ to the prompt completely breaks this [model]. Generates just the balls with no penis 100% of the time. What a shame,” one user commented on the model. The creator of the model apologized for the error in a reply and said they hoped the problem would be solved in a future update.

The “Cock on head (the dickhead pose LoRA),” which has been downloaded 8,854 times, generates what its title describes: images of women with penises resting on their heads. The “Rest on stomach, feet up (pose)” has been downloaded 19,250 times. “these images are trained from public images from Reddit (ex. r/innie). Does not violate any [terms of service]. Pls do not remove <3,” wrote the creator of the “Realistic Vaginas - Innie Pussy 1” model, which has been downloaded more than 75,000 times. The creator of the “Instant Cumshot” model, which has been downloaded 64,502 times, said it was “Trained entirely on images of professional adult actresses, as freeze frames from 1080p+ video.”

While the practice is technically not allowed on CivitAI, the site hosts image generating AI models of specific real people, which can be combined with any of the pornographic AI models to generate non-consensual sexual images. 404 Media has seen the non-consensual sexual images these models enable on CivitAI, its Discord, and off its platform.

A 404 Media investigation shows that recent developments in AI image generators have created an explosion of communities where people share knowledge to advance this practice, for fun or profit. Foundational to the community are previously unreported but popular websites that allow anyone to generate millions of these images a month, limited only by how fast they can click their mouse, and how quickly the cloud computing solutions powering these tools can fill requests. The sheer number of people using these platforms and the non-consensual sexual images they create show that the AI porn problem is far worse than has been previously reported.

Our investigation shows the current state of the non-consensual AI porn supply chain: specific Reddit communities that are being scraped for images, the platforms that monetize these AI models and images, and the open source technology that makes it possible to easily generate non-consensual sexual images of celebrities, influencers, YouTubers, and athletes. We also spoke to sex workers whose images are powering this AI-generated porn without their consent, and who said they are terrified of how it will impact their lives.

Hany Farid, an image forensics expert and professor at the University of California, Berkeley, told 404 Media that it’s the same problem we’ve seen since deepfakes first appeared six years ago; only the tools for creating these images have become easier to access and use.

“This means that the threat has moved from anyone with a large digital footprint, to anyone with even a modest digital footprint,” Farid said. “And, of course, now that these tools and content are being monetized, there is even more incentive to create and distribute them.”

The Product

On Product Hunt, a site where users vote for the most exciting startups and tech products of the day, Mage, which on April 20 cracked the site’s top three products, is described as “an incredibly simple and fun platform that provides 50+ top, custom Text-to-Image AI models as well as Text-to-GIF for consumers to create personalized content.”

“Create anything,” Mage.Space’s landing page invites users with a text box underneath. Type in the name of a major celebrity, and Mage will generate their image using Stable Diffusion, an open source, text-to-image machine learning model. Type in the name of the same celebrity plus the word “nude” or a specific sex act, and Mage will generate a blurred image and prompt you to upgrade to a “Basic” account for $4 a month, or a “Pro Plan” for $15 a month. “NSFW content is only available to premium members,” the prompt says.

To get an idea of what kind of explicit images you can generate with a premium Mage subscription, click over to the “Explore” tab at the top of the page and type in the same names and terms to search for similar images previously created by other users. On first impression, the Explore page makes Mage seem like a boring AI image generating site, presenting visitors with a wall of futuristic cityscapes, cyborgs, and aliens. But search for porn with “NSFW” content enabled and Mage will reply with a wall of relevant images. Clicking on any one of them will show when they were created, with what modified Stable Diffusion model, the text prompt that generated the image, and the user who created it.

Since Mage by default saves every image generated on the site, clicking on a username will reveal their entire image generation history, another wall of images that often includes hundreds or thousands of AI-generated sexual images of various celebrities made by just one of Mage’s many users. A user’s image generation history is presented in reverse chronological order, revealing how their experimentation with the technology evolves over time.

Scrolling through a user’s image generation history feels like an unvarnished peek into their id. In one user’s feed, I saw eight images of Gwen Tennyson, a cartoon character from the children’s show Ben 10, in a revealing maid’s uniform. Then, nine images of her making the “ahegao” face in front of an erect penis. Then more than a dozen images of her in bed, in pajamas, with very large breasts. Earlier the same day, that user generated dozens of innocuous images of various female celebrities in the style of red carpet or fashion magazine photos. Scrolling down further, I can see the user fixate on specific celebrities and fictional characters: Disney princesses, anime characters, and actresses, each rotated through a series of images posing them in lingerie, schoolgirl uniforms, and hardcore pornography. Each image represents a fraction of a penny in profit to the person who created the custom Stable Diffusion model that generated it.

Mage displays the prompt the user wrote in order to generate the image to allow other users to iterate and improve upon images they like. Each of these reads like an extremely horny and angry man yelling their basest desires at Pornhub’s search function. One such prompt reads:

"[[[narrow close-up of a dick rubbed by middle age VERY LUSTFUL woman using her MOUTH TO PLEASURE A MAN, SPERM SPLASH]]] (((licking the glans of BIG DICK))) (((BLOWjob, ORAL SEX))) petting happy ending cumshot (((massive jizz cum))))(((frame with a girl and a man)))) breeding ((hot bodies)) dribble down his hard pumping in thick strokes, straight sex, massage, romantic, erotic, orgasm porn (((perfect ball scrotum and penis with visible shaft and glans))) [FULL BODY MAN WITH (((woman face mix of Lisa Ann+meghan markle+brandi love moaning face, sweaty, FREKLESS, VERY LONG BRAID AND FRINGE, brunette HAIR)), (man Mick Blue face)"

This user, who shares AI-generated porn almost exclusively, has created more than 16,000 images since January 13. Another user whose image history is mostly pornographic generated more than 6,500 images since they started using Mage on January 15, 2023.

On the official Mage Discord, which has more than 3,000 members, and where the platform’s founders post regularly, users can choose from dozens of chat rooms organized by categories like “gif-nsfw,” “furry-nsfw,” “soft-women-nsfw,” and share tricks on how to create better images.

“To discover new things I often like to find pictures from other users I like and click remix. I run it once and add it to a list on my profile called ‘others prompts’ then I'll use that prompt as a jumping off point,” one user wrote on July 12. “It's a good way to try different styles as you hone your own style.”

“anyone have any luck getting an [sic] good result for a titty-fuck?” another user asked July 17, prompting a couple of other users to share images of their attempts.

Generating pornographic images of real people is against the Mage Discord community’s rules, which the community strictly enforces because it’s also against Discord’s platform-wide community guidelines. A previous Mage Discord was suspended in March for this reason. While 404 Media has seen multiple instances of non-consensual images of real people and methods for creating them, the Discord community self-polices: users flag such content, and it’s removed quickly. As one Mage user chided another after they shared an AI-generated nude image of Jennifer Lawrence: “posting celeb-related content is forbidden by discord and our discord was shut down a few weeks ago because of celeb content, check [the rules.] you can create it on mage, but not share it here.”

Gregory Hunkins and Roi Lee, Mage’s founders, told me that Mage has over 500,000 accounts, a million unique creators active on it every month, and that the site generates a “seven-figure” annual revenue. More than 500 million images have been generated on the site so far, they said.

“To be clear, while we support freedom of expression, NSFW content constitutes a minority of content created on our platform,” Lee and Hunkins said in a written statement. “NSFW content is behind a paywall to guard against those who abuse the Mage Space platform and create content that does not abide by our Terms & Conditions. One of the most effective guards against anonymity, repeat offenders, and enforcing a social contract is our financial institutions.”

When asked about the site’s moderation policies, Lee and Hunkins explained that Mage uses an automated moderation system called “GIGACOP” that warns users and rejects prompts that are likely to be abused. 404 Media did not encounter any such warning in its testing, and Lee and Hunkins did not respond to a question about how exactly GIGACOP works. They also said that there are “automated scans of the platform to determine if patterns of abuse are evading our active moderation tools. Potential patterns of abuse are then elevated for review by our moderation team.”

However, 404 Media found that on Mage’s site AI-generated non-consensual sexual images are easy to find and are functionally infinite.

“The scale of Mage Space and the volume of content generated antiquates previous moderation strategies, and we are continuously working to improve this system to provide a safe platform for all,” Lee and Hunkins said. “The philosophy of Mage Space is to enable and empower creative freedom of expression within broadly accepted societal boundaries. This tension and balance is a very active conversation right now, one we are excited and proud to be a part of. As the conversation progresses, so will we, and we welcome all feedback.”

Laura Mesa, Product Hunt’s vice president of marketing and community, told me that Mage violates Product Hunt’s policy, and Mage was removed shortly after I reached out for comment.

The images Mage generates are defined by the technology it allows users to access. Like many of the smaller image generating AI tools online, at its core it’s powered by Stable Diffusion, which surged in popularity when it was released last year under the Creative ML OpenRAIL-M license, allowing users to modify it for commercial and non-commercial purposes.

Mage users can choose what kind of “base model” they want to use to generate their images. These base models are modified versions of Stable Diffusion that have been trained to produce a particular type of image. The “Anime Pastel Dream” model, for example, is great at producing images that look like stills from big budget anime, while “Analog Diffusion” is good at giving images a vintage film photo aesthetic.

One of the most popular base models on Mage is called “URPM,” an acronym for “Uber Realistic Porn Merge.” That Stable Diffusion model, like others designed to produce pornography, is created upstream in the AI porn supply chain, where people train AI to recreate the likeness of anyone, doing anything.

The People Who Become Datasets

Generative AI tools like Stable Diffusion use a deep learning neural network that was trained on a massive dataset of images. The model generates new images by predicting how pixels should be arranged, based on the patterns it learned from that dataset and on what kind of image the prompt asks for. For example, LAION-5B, an open source dataset made up of over 5 billion images scraped from the internet, helps power Stable Diffusion.
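
For readers unfamiliar with these tools, here is a minimal sketch of what text-to-image generation looks like in practice, assuming the Hugging Face diffusers library; the checkpoint ID, prompt, and filenames are illustrative placeholders, not anything taken from the article.

```python
# A minimal text-to-image sketch using Stable Diffusion through the
# Hugging Face diffusers library. Checkpoint ID and prompt are
# illustrative placeholders only.
import torch
from diffusers import StableDiffusionPipeline

# Download a public Stable Diffusion checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires an NVIDIA GPU

# The pipeline starts from random noise and denoises it step by step,
# steered by the text prompt, until a matching image emerges.
image = pipe(
    "a futuristic cityscape at dusk, digital art",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("cityscape.png")
```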

This makes Stable Diffusion good at generating images of broad concepts, but not specific people or esoteric concepts (like women with erect horse penises). But because Stable Diffusion’s code is public, over the last year researchers and anonymous users have come up with several ingenious ways to train it to generate such images with startling accuracy.

In August of 2022, researchers from Tel Aviv University introduced the concept of “textual inversion.” This method trains Stable Diffusion on a new “concept,” which can be an object, person, texture, style, or composition, using as few as 3-5 images, and binds that concept to a specific word or letter. Users can train Stable Diffusion on these new concepts without retraining the entire Stable Diffusion model, which would be “prohibitively expensive,” as the researchers explain in their paper.

In their paper, the researchers demonstrated their method by training the image generator on a few images of a Furby, represented by the letter S. They could then give the image generator the prompt “A mosaic depicting S,” or “An artist drawing S,” and get mosaics and drawings that depict the Furby.
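
To make the mechanics concrete, here is a minimal sketch of how such a learned concept is consumed downstream, assuming the Hugging Face diffusers library and one of the publicly shared sd-concepts-library embeddings; the repository name and its placeholder token are illustrative, not the models discussed in the article.

```python
# Sketch of consuming a textual-inversion embedding with diffusers.
# The embedding repository and its placeholder token ("<cat-toy>") are
# illustrative examples, not anything from the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load a small learned embedding that binds a new pseudo-word to the
# trained concept; no retraining of the base model is needed.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The placeholder token now stands for the concept, much as the letter
# S stood for the Furby in the researchers' demonstration.
image = pipe("A mosaic depicting <cat-toy>").images[0]
image.save("mosaic.png")
```

Because the shared artifact is just this tiny embedding file rather than a full model, distributing and combining concepts is cheap, which is part of what made sharing platforms like CivitAI possible.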

By September 2022, AUTOMATIC1111, a GitHub user who maintains a popular web interface for Stable Diffusion, explained how to implement textual inversion. In November, a web developer named Justin Maier launched CivitAI, a platform where people could easily share the specific models they’ve trained using textual inversion and similar methods, so other users could download them, generate similar images, iterate on the models by following countless YouTube tutorials, and combine them with other models trained on other specific concepts.

There are many non-explicit models on CivitAI. Some replicate the style of anime, popular role-playing games, or Chinese comic books. But if you sort CivitAI’s platform by the most popular models, they are dominated by models that expertly produce pornography.

LazyMix+ (Real Amateur Nudes), for example, produces very convincing nudes that look like they were shot by an amateur OnlyFans creator or an image from one of the many subreddits where people share amateur porn. Many Stable Diffusion models on CivitAI don’t say what data they were trained on, and others are just tweaking and combining other, already existing models. But with LazyMix+ (Real Amateur Nudes), which has been downloaded more than 71,000 times, we can follow the trail to the source.

According to the model’s description, it’s a merge between the original LazyMix model and a model called Subreddit V3, the latter of which states it was trained on images from a variety of adult-themed subreddit communities like r/gonewild (famously, where average Reddit users post nudes), r/nsfw, r/cumsluts, and 38 other subreddits.

“There's nothing that's been done in the past to protect us so I don't see why this would inspire anyone to make protections against it.”

A Reddit user who goes by Pissmittens and moderates r/gonewild, r/milf, and several other big adult communities said he suspects that most people who post nudes to these subreddits probably don’t know their images are being used to power AI models.

“The issue many of them run into is that usually places misusing their content aren’t hosted in the United States, so DMCA is useless,” Pissmittens said, referring to copyright law. “The problem, obviously, is that there doesn’t seem to be any way for them to know if their content has been used to generate [computer generated] images.”

Fiona Mae, who promotes her OnlyFans account on several subreddits including some of those scraped by Subreddit V3, told me that the fact that anyone can type a body type and sex act into an AI generator and instantly get an image “terrifies” her.

“Sex workers and femmes are already dehumanized,” she said. “Literally having a non-human archetype of a woman replacing jobs and satisfying a depiction of who women should be to men? I only see that leading more to serving the argument that femmes aren’t human.”

“I have no issue with computer generated pornography at all,” GoAskAlex, an adult performer who promotes her work on Reddit, told me. “My concern is that adult performers are ultimately unable to consent to their likeness being artificially reproduced.”

An erotic artist and performer who goes by sbdolphin and promotes her work on Reddit told me that this technology could be extremely dangerous for sex workers.

“There's nothing that's been done in the past to protect us so I don't see why this would inspire anyone to make protections against it,” she said.

404 Media has also found multiple instances of non-consensual sexual imagery of specific people hosted on CivitAI. The site allows pornography, and it allows people to use AI to generate images of real people, but does not allow users to share images that do both things at once. Its terms of service say it will remove “content depicting or intended to depict real individuals or minors (under 18) in a mature context.” While 404 Media has seen CivitAI enforce this policy and remove such content multiple times, non-consensual sexual imagery is still posted to the site regularly, and in some cases has stayed online for months.

When looking at a Stable Diffusion model on CivitAI, the site will populate its page with a gallery of images other users have created using the same model. When 404 Media viewed a Billie Eilish model, CivitAI populated the page’s gallery with a series of images from one person who used the model to generate nude images of a pregnant Eilish.

That gallery was in place for weeks, but has since been removed. The user who created the nude images is still active on the site. The Billie Eilish model is also still hosted on CivitAI, and while its gallery no longer includes any fully nude images of Eilish, it did include images of her in lingerie and with very large breasts, which are also against CivitAI’s terms of service and were eventually removed.

The Ares Mix model, which has been downloaded more than 32,000 times since it was uploaded to CivitAI in February, is described by its creator as being good for generating images of “nude photographs on different backgrounds and some light hardcore capabilities.” The gallery at the bottom of the model’s page mostly showcases “safe for work” images of celebrities and pornographic images of seemingly computer-generated people, but it also includes an AI-generated nude image of the actress Daisy Ridley. Unlike the Billie Eilish example, the image is not clearly labeled with Ridley’s name, but the generated image is convincing enough that she’s recognizable on sight.

Clicking on the image also reveals the prompt used to generate the image, which starts: “(((a petite 19 year old naked girl (emb-daisy) wearing, a leather belt, sitting on floor, wide spread legs))).”

The nude image was created by merging the Ares Mix model with another model hosted on CivitAI dedicated to generating the actress’s likeness. According to that model’s page, its “trigger words” (in the same way “S” triggered the Furby in the textual inversion scientific paper) are “emb-daisy.” Like many of the Stable Diffusion models of real people hosted on CivitAI, it includes the following message:

“This resource is intended to reproduce the likeness of a real person. Out of respect for this individual and in accordance with our Content Rules, only work-safe images and non-commercial use is permitted.”

CivitAI’s failure to moderate Ridley’s image shows the abuse CivitAI facilitates despite its official policy. Models that generate pornographic images are allowed. Models that generate images of real people are allowed. Combining the two is not. But there’s nothing preventing people from putting the pieces together, generating non-consensual sexual images, and sharing them off CivitAI’s platform.

“In general, the policies sound difficult to enforce,” Tiffany Li, a law professor at the University of San Francisco School of Law and an expert on privacy, artificial intelligence, and technology platform governance, told 404 Media. “It appears the company is trying, and there are references to concepts like consent, but it's all a bit murky.”

This makes the countless models of real people hosted on CivitAI terrifying. Every major actor you can think of has a Stable Diffusion model on the site. So do countless Instagram influencers, YouTubers, adult film performers, and athletes.

“As these systems are deployed and it becomes the norm to generate and distribute pornographic images of ordinary people, the people who end up being negatively impacted are people at the bottom of society.”

404 Media has seen at least two Stable Diffusion models of Nicole Sanchez, a Twitch streamer and TikTok personality better known as Neekolul or the “OK boomer girl,” hosted on CivitAI, each of which was downloaded almost 500 times. While we didn’t see any non-consensual sexual images we could verify were created with those models, Sanchez told 404 Media that she has seen pornographic AI-generated images of herself online.

“I don't like it at all and it feels so gross knowing people with a clear mind are doing this to creators who likely wouldn't want this to be happening to them. Since this is all very new, I’m hoping that there will be clearer ethical guidelines around it and that websites will start implementing policies against NSFW content, at least while we learn to live alongside AI,” Sanchez said. “So until then, I hope that websites used to sell this content will put guidelines in place to protect people from being exploited because it can be extremely damaging to their mental health.”

Saftle, the CivitAI user who created Uber Realistic Porn Merge (URPM), one of the most popular models on the site that is also integrated with Mage, said that CivitAI is “thankfully” one of the only platforms actively trying to innovate and block out this type of content. “However it's probably a constant struggle due to people trying to outsmart their current algorithms and bots,” he said.

Li said that while these types of non-consensual sexual images are not new, there is still no good way for victims to combat them.

“At least in some states, they can sue the people who created AI-generated intimate images of them without their consent. (Even in states without these laws, there may be other legal methods to do it.) But it can be hard to find the makers of the images,” Li said. “They may be using these AI-generating sites anonymously. They may even have taken steps to shield their digital identity. Some sites will not give up user info without a warrant.”

“As these systems are deployed and it becomes the norm to generate and distribute pornographic images of ordinary people, the people who end up being negatively impacted are people at the bottom of society,” Abeba Birhane, a senior fellow in Trustworthy AI at Mozilla Foundation and lecturer at the School of Computer Science and Statistics at Trinity College Dublin, Ireland, told 404 Media. “It always ends up negatively impacting those that are not able to defend themselves or those who are disfranchised. And these are the points that are often left out in the debate of technicality.”

The Money

The creators of these models offer them for free, but accept donations. Saftle had 120 paying Patreon members to support his project before he “paused” his Patreon in May when he got a full time job. He told me that he made $1,500 a month from Patreon at its peak. He also said that while he has no formal relationship with Mage Space, he did join its “creators program,” which paid him $0.001 for every image that was generated on the site using URPM. He said he made about $2,000-$3,000 a month (equal to 2-3 million images) when he took part in the program, but has since opted out. Lee and Hunkins, Mage’s founders, told me that “many model creators earn well in excess of this,” but not all models on Mage specialize in sexual images.

The creator of “Rest on stomach, feet up (pose)” links to their Ko-Fi account, where people can send tips. One CivitAI user, who created dozens of models of real people and models that generate sexual images, shares their Bitcoin wallet address in their profile. Some creators will do all the work for you for a price on Fiverr.

Clicking the “Run Model” button at the top of every model page will bring up a window that sends users to a variety of sites and services that can generate images with that model, like Mage Space, or Dazzle.AI, which charges $0.10 per image.

CivitAI itself also collects donations, and offers a $5 a month membership that gives users early access to new features and unique badges for their usernames on the site and Discord.

“Civitai exists to democratize AI media creation, making it a shared, inclusive, and empowering journey,” CivitAI’s site says.

Justin Maier, CivitAI’s founder, did not respond to requests for comment sent via LinkedIn, Twitter, Discord, and email.

The Singularity

Since ChatGPT, DALL-E, and other generative AI tools became available on the internet, computer scientists, ethicists, and politicians have been increasingly discussing “the singularity,” a concept that until recently existed mostly in the realm of science fiction. It describes a hypothetical point in the future when AI becomes so advanced, it triggers an uncontrollable explosion of technological development that quickly surpasses and supersedes humanity.

As many experts and journalists have observed, there is no evidence that companies like OpenAI, Facebook, and Google have created anything even close to resembling an artificial general intelligence agent that could bring about this technological apocalypse, and promoting that alarmist speculation serves their financial interests because it makes their AI tools seem more powerful and valuable than they actually are.

However, it’s a good way to describe the massive changes that have already taken hold in the generative AI porn scene. An AI porn singularity has already occurred: an explosion of non-consensual sexual imagery that’s seeping out of every crack of internet infrastructure if you only care to look, and we’re all caught up in it, celebrities big and small and normal people alike. Images of our faces and bodies are fueling a new type of pornography in which humans are only a memory that’s copied and remixed to instantly generate whatever sexual image a user can describe with words.

Samantha Cole contributed reporting.

Permalink
September 3, 2023 at 5:39:54 PM GMT+2

Riots and social networks: the return of the ideal culprit – La Quadrature du Net
https://www.laquadrature.net/2023/07/28/revoltes-et-reseaux-sociaux-le-retour-du-coupable-ideal/

  • Social Network
  • global spying
  • Spying
  • Police State

Riots and social networks: the return of the ideal culprit

Posted on 28 July 2023

The riots that swept many French cities in reaction to the death of Nahel triggered a security-driven, authoritarian response from the State. These events also revived an old refrain: it would all be the fault of digital technology and social networks. One would happily mock this ridiculous rhetoric if only it did not originate in a political diversion tactic and result in an ever more dangerous extension of censorship and control over information.

“It’s the fault of social networks”

In the first days of the riots, Emmanuel Macron set the tone by announcing, as he left a crisis meeting, that “platforms and social networks are playing a considerable role in the movements of recent days.” No mention of the social and structural ills afflicting working-class neighbourhoods for decades, nothing about a population’s disgust with police violence. No: for the president, it is Snapchat, TikTok and the other social networks that supposedly contribute to “the organisation of violent gatherings” and to a “form of mimicry of violence” that leads to “a form of departure from reality.” According to him, some young people are “living out in the streets the video games that have intoxicated them.” Admittedly, we did not see this throwback about video games coming, so extensively has the claim already been analysed and debunked by numerous studies.

But while video games were quickly dropped, criticism continued to crystallise around social networks, on the right as on the left. Benoît Payan, mayor of Marseille, explained that social networks “are out of control and allow organised gangs that do whatever they want to be extremely mobile and to arrange meet-ups.” Éric Dupond-Moretti, the Minister of Justice, cast himself in a paternalistic, moralising role, depoliticising the events and essentialising a youth that would dare to use means of communication. According to him, “young people” use social networks and “take refuge behind their mobile phones,” believing themselves “free, just like that, to write whatever they want.” For Dupond-Moretti, the young person must “stay home” and parents must “keep hold of their kids.” And if the young person still wants to “post stuff on Snapchat,” then watch out: “we will bust the accounts.”

In essence, he is threatening to identify the people who posted videos of violence in order to track them down, even where such content is entirely lawful. Like a school monitor running after children with a stick, overtaken by events, the Minister of Justice clings to the only branch within his reach to assert his authority. Accounts of the fast-track trials that followed demonstrated the violence of the judicial response, deliberately firm and expeditious, confirming a determination to prove mastery of the situation.

The showstopper came from Emmanuel Macron himself on 4 July, before more than 200 mayors, when he floated the idea of “regulating or cutting off” social networks because “when it becomes an instrument for gathering, or for trying to kill, that is a real issue.” Fabien Roussel agreed, “when things are heating up in the country,” preferring “a state of emergency on social networks rather than on the population.” As a reminder, cutting off a population’s Internet access is a move favoured by authoritarian regimes. In 2023, the NGO Access Now recorded the use of such measures in India, Myanmar, Iran, Pakistan, Ethiopia, Russia, Jordan, Brazil, China, Cuba, Iraq, Guinea and Mauritania.

A few weeks later, the president announced his project: restoring a “digital public order,” reviving the old Sarkozy-era idea that the Internet is a “lawless zone.”

Censorship in the service of order

All these reactions reveal several of the government’s objectives. First, it attacks the means of communication, that is, the vectors, the broadcasters, the springboards of popular expression. This authoritarian reflex rests on a major misjudgement of the situation. As with the Internet at its creation, the State seems irritated that constantly evolving technical means beyond its grasp allow citizens to express themselves and to organise.

Since its founding, La Quadrature has observed and documented this: the reflex of blocking, censorship and surveillance betrays, between the lines, an inability to understand the technological mechanisms of communication, but above all it reveals a will to limit freedom of expression. In a democracy, only a judge has the authority and legitimacy to decide whether a statement or an image breaks the law. Only the judiciary may decide to remove speech from the public sphere through censorship.

On the Internet, however, this censorship is delegated to private entities in an extra-judicial framework, against the grain of that historic protection. For years, users’ expression and access to information have run up against the choices of private platforms that have appointed themselves regulators of public discourse. This mechanism of control over the means of expression tends to make activist, radical or alternative content disappear and to render the expression of minority communities invisible. While this model raises serious questions about freedom of expression in the democratic space, it is precisely the model the government is counting on to take down videos of violence and riots.

Second, this episode demonstrates the State’s unwillingness, or inability, to confront the complex and long-standing problems of working-class neighbourhoods and to give a political and social answer to a problem that is purely... political and social. Analysing the events in the banlieues is complex and difficult, and calls for attention to multiple factors such as the precedent of 2005, France’s colonial history, residents’ relationship with the police, racism, and urban policy. But here, the government summons social networks in order to sidestep the situation. As in every crisis, technology becomes the leaders’ favourite *usual suspect*: easy to blame, but also easy to control. It thus appears as a magic solution to any kind of problem.

Remember, for instance, the TousAntiCovid app, promoted as the embodiment of ultimate, saving progress in the face of the health crisis when it was nothing but an ineffective surveillance tool. What followed showed that only health measures were, quite obviously, capable of curbing an epidemic. More recently, the same manoeuvre was used for the Olympic Games, an exceptional moment in logistical, economic and social terms, where the political answer was to legalise one more degree of surveillance through technology: algorithmic video surveillance.

Here, the government is using both phenomena to reach its ends. Designating social networks as the ideal culprit not only diverts public opinion from the issues of racism, poverty and urban policy, but also exploits this “exceptional” episode to assert its will to control the Internet and social networks. Thus, on 30 June, the ministers of the Interior and of Digital Affairs summoned representatives of TikTok, Snapchat, Twitter and Meta to put them under “maximum pressure,” in the words of Digital Minister Jean-Noël Barrot, thereby tightening the political grip on and influence over these communication infrastructures.

State-platform collaboration at its peak

How, then, can the government actually bring social networks, powerful private companies, under its thumb? Legally, it has levers to demand the removal of content from platforms itself. Admittedly, this administrative censorship is in theory reserved today for child pornography and the glorification of terrorism, but the latter, a vague and legally undefined notion, has notably served to demand the blocking of the collaborative activist site Indymedia and the removal of a caricature of Macron made up as Pinochet. These powers were recently expanded by the entry into force of the “TERREG” regulation, which allows the police to demand the removal, within one hour, of content it deems “terrorist.” It does not appear, however, that the government used these provisions to demand the removal of riot videos. Indeed, and perhaps more seriously, it probably did not need to.

Aware of the legal limits of its power, the State can instead bet on the platforms’ cooperation. As we have noted, they control their users’ expression and can remove videos of riots even when there is absolutely nothing illegal about them, simply by invoking the broad powers granted by their terms of service.

On one side, the authorities can count on the zeal of social networks which, long accused of being bad pupils and promoters of online hate, are keen to buff up their image and appear as good republican soldiers, not hesitating to do more than the law requires. Twitter, for example, which long resisted and ignored requests from the authorities, has drastically changed its behaviour since Elon Musk’s arrival. According to the outlet Rest of World, which analysed the network’s data, Twitter now refuses almost no blocking injunction or identification-data requisition from governments anywhere in the world.

Regarding the recent events in France, Minister Barrot confirmed that the government’s “requests” had been “heard.” The Gendarmerie’s cyberspace command proudly reported that 512 removal requests had been sent to social network moderators, while Olivier Véran announced that “several thousand pieces of illegal content have been removed, several hundred accounts have been deleted, and several dozen requisitions have been answered by the platforms.”

And indeed, Snapchat made no secret of having done more than necessary. A spokesperson told AFP that it performs “proactive detection,” notably on the interactive map that lets users find content by location, “and more particularly content related to the riots,” which is removed if it “violates [our] guidelines.” The company’s head of public affairs told the National Assembly that it had worked with the Ministry of the Interior to filter content and leave online only posts showing people complaining about the violence. TikTok’s representatives, for their part, [announced](https://www.latribune.fr/technos-medias/internet/les-reseaux-sociaux-retirent-des-milliers-de-contenus-illicites-lies-aux-emeutes-968771.html): “We carry out automated moderation of illegal content, reinforced by human moderators. Given the urgent need in France, we have stepped up our moderation efforts.”

On the other side, if social networks refuse to play along with this diplomatic game, the government can threaten to tighten their legal obligations. Today, social networks and hosting providers benefit from a European principle of non-liability, created in the 2000s and resting in France on Article 6 of the law for confidence in the digital economy (LCEN). They are liable for content only if they are aware of its illegal nature and have failed to remove it “promptly.” Yet while riots were still flaring night after night in French cities, Senator Patrick Chaize took advantage of the examination of the “Espace Numérique” bill to propose an amendment that would have changed this general regime and required platforms to remove content “manifestly inciting violence” within two hours.

If this amendment was ultimately withdrawn, it was not because of any substantive disagreement. Immediately after the withdrawal, Jean-Noël Barrot announced the launch of an interparliamentary “working group” to consider a “legislative evolution” of the regulation of social networks. Under consideration so far are temporary restrictions of features such as geolocation, strengthened moderation measures, and the lifting of anonymity, that perennial hobbyhorse of parliamentarians. A constant demand of the French right for many years, this drive to tie online identity to civil identity is this time championed by the Renaissance MP Paul Midy. Enough to wave the red rag of future sanctions at platforms that balk at doing enough.

The dead end of censorship

Already passed by the Senate, the “Espace Numérique” bill is due to be debated in the Assembly when it reconvenes in the autumn. Beyond several problematic provisions to which we will return very soon, the text’s initial purpose is to bring the Digital Services Act (DSA) into French law. This European regulation, adopted in 2022, is supposed to renew the legal framework for the actors of online expression.

Contrary to what Thierry Breton, the European commissioner in charge of digital affairs among other things, asserted with aplomb, this text will in no way make it possible to “erase in the instant” riot videos, nor to ban from operation platforms that fail to carry out such injunctions. No: the text mainly imposes ex ante obligations on the very large platforms, that is, it requires efforts on their general functioning (algorithm transparency, cooperation on moderation with certified third parties, audits...) to prevent the systemic risks tied to their size and their influence on democracy. Mr Breton is thus ready to do anything to provide after-sales service for the regulation he got adopted last year, even if that means saying false things, prompting more than sixty organisations to react to his remarks, and then backpedalling in haste upon seeing the outcry his statement had triggered.

However, if the text does not allow the immediate censorship the French European commissioner dreams of, it does continue the existing dynamic of handing the keys to freedom of expression to private platforms, however limply it regulates them. The DSA legitimises the logic of extra-judicial censorship, thereby reinforcing the hegemony of the large platforms that have developed automated tools for recognising and censoring content.

From terrorist content to videos protected by copyright to radical opinions, a whole European legal arsenal now exists to ground censorship on the Internet. In practice, it chiefly gives the States that shape this legislation tools to control online expression. As we are seeing right now with the riot videos, these rules are mobilised to contain and manage political or troublesome protest. Control of the means of expression always ends up in the hands of security-driven, anti-democratic projects. Faced with this sad observation, and just as we protect ourselves at demonstrations or in our activist exchanges, we need to rethink our digital practices in order to protect ourselves and others from the risk of our posts being turned to repressive ends (deleting images, videos or messages, blurring faces, and so on).

Finally, this obsession with censorship above all prevents us from confronting the real issues of freedom of expression, which lie in the economic and technical models of these platforms. Through algorithms designed around financial logic, these mechanisms favour the spread of violent, discriminatory or conspiratorial posts, creating a dream springboard for the far right. With European-style platform regulation that never questions their dominant position, the platforms see their role and influence in society strengthened. The business model of commercial social networks, which rests on the violation of privacy and the monetisation of problematic content, is never actually challenged, merely framed.

We hope the debates on the “Espace Numérique” bill will at last be an opportunity to discuss alternative models and to think of decentralisation as a genuine solution for regulating online expression. This ideal is not utopian: it exists and grows by the day in an ecosystem founded on the interoperability of decentralised actors, one that rejects the logic of concentrating the powers of censorship and seeks to put users back at the heart of the communication tools they use.

Permalink
August 3, 2023 at 8:52:58 PM GMT+2

In France or elsewhere, cutting off access to social networks to cut riots short?
https://theconversation.com/en-france-ou-ailleurs-couper-lacces-aux-reseaux-sociaux-pour-couper-court-aux-emeutes-209583

  • global spying
  • Police State
  • Social Network
  • Politics

In France or elsewhere, cutting off access to social networks to cut riots short?

Published: 26 July 2023, 20:17 CEST

The death of Nahel, killed by a police officer during a traffic stop in Nanterre on 27 June 2023, triggered a series of violent protests in France that quickly spread across the country and even crossed national borders.

Social networks played a decisive role in this affair. It is therefore not surprising that these platforms became one of the French authorities’ targets, with Emmanuel Macron raising the possibility of cutting off access to social networks during periods of urban violence.

Reactions to these remarks quickly forced the government to backpedal, through its spokesperson Olivier Véran, who stated that restrictions could be limited to suspending certain features such as geolocation.

The debate also stirs international bodies such as the UN, which are examining the role of social networks and the moderation of content.

The role of social networks

That social networks constitute, as the UN Special Rapporteur on freedom of opinion and expression already stressed in 2011, “an essential means of communication through which individuals can exercise their right to freedom of expression, or [...] to receive and impart information” is undeniable. It was, moreover, a widely shared online video that made it possible to challenge the account of Nahel’s death initially given by the police officers involved.

But social networks were then heavily used to share videos, including of violent episodes, as well as to organise and geolocate mobilisations and the places targeted for damage or confrontation. Hence the reaction of the French government, which held a meeting with representatives of Meta, Snapchat, Twitter and TikTok to call on them to act responsibly regarding the dissemination of such content.

With platforms having become the “new governors” of freedom of expression, their moderation policies and the way they apply them are under close scrutiny. Yet the rules in force are vague and do not allow clear identification of prohibited content; moreover, the use of AI can foster discrimination, feed social inequalities and lead either to excessive removal of content or, conversely, to the non-removal of content that violates international human rights law.

Recent examples of the impact of opaque content moderation include Facebook’s role in Myanmar in spreading hate speech against the Rohingya, but also in the United States during the storming of the Capitol by Donald Trump’s supporters on 6 January 2021, following Joe Biden’s election.

Under the UN Guiding Principles on Business and Human Rights, social networks have a responsibility to ensure respect for human rights in the course of their activities. The French government’s call for responsibility in content moderation is therefore not, in itself, problematic.

The role of States

States may, in certain circumstances, adopt measures that restrict the exercise of fundamental rights such as freedom of expression, for example by imposing strict rules on social networks; but such restrictions must comply with their international obligations.

Since France has ratified the International Covenant on Civil and Political Rights, any restriction of the rights it enumerates must conform to the provisions laid down in that treaty.

The Covenant specifies that for a restriction on freedom of expression to be legitimate, it must satisfy three cumulative conditions: the restriction must be “provided by law”; it must protect exclusively the interests listed in Article 19 of the Covenant (the rights or reputations of others, national security or public order, public health or morals); and it must be necessary to effectively protect the legitimate interest identified, and proportionate to the aim pursued, meaning it must impair the exercise of the right as little as possible. The same conditions also apply to restrictions on the rights to peaceful assembly and free association.

Emmanuel Macron’s proposal can fall squarely within the framework of a restriction on freedom of expression, free association and the right to peaceful assembly. Although the idea is presented as protecting the legitimate interest of maintaining public order or even national security, such measures have repeatedly been judged by international organisations to be inconsistent with international law.

The UN Special Rapporteur on freedom of opinion and expression has dealt extensively with this subject. In 2017, he stressed that Internet shutdowns, whether complete or partial (that is, affecting only access to certain communication services such as social networks or messaging services), “may be expressly designed to prevent or disrupt the consultation or dissemination of information online, in violation [...] of human rights” and that, “in many cases, they are counterproductive.”

The Special Rapporteur on the rights to freedom of peaceful assembly and of association stated in 2019 that “network shutdowns are a flagrant violation of international law and can never be justified” and that, “although such measures are generally justified on grounds of public order and national security, they are disproportionate and, most of the time, ineffective means of achieving those legitimate objectives.”

In 2021, a resolution of the UN Human Rights Council, whose draft was notably sponsored by France, “strongly condemns the use of Internet shutdowns to intentionally and arbitrarily prevent or disrupt access to or the dissemination of information online.” The resolution also asked the Office of the UN High Commissioner for Human Rights (OHCHR) to present a study on the trend, observed in several countries around the world, of cutting off Internet access.

The OHCHR report, presented to the Council the following year, notes that “the vast majority of shutdowns are officially justified by concerns of public safety and national security, or by the need to restrict the flow of information deemed illegal or likely to cause harm.” This could be observed, among other examples, in Burkina Faso during the opposition protests of November 2021, which led first to an Internet shutdown and then to restricted access to Facebook in the name of national security, or in Sri Lanka in April 2022, when the government cut access to all platforms following protests against the imposition of a state of emergency.

While such restrictions generally occur in non-democratic countries, the justifications advanced by their governments match those now being put forward by the French government.

The report also notes that a significant number of Internet shutdowns were followed by spikes in violence, “which seems to demonstrate that these interventions often fail to achieve the officially invoked objectives of safety and security,” and that “national security cannot be invoked to justify an action when it is precisely human rights violations that are at the root of the deterioration of national security.”

Moreover, since the protests originate in police violence and racial profiling, measures aimed at restricting access to social networks by blaming them for the violence are “a way of depoliticising and delegitimising the revolt [and] of denying the rioters the right to revolt against police violence,” as the information science researcher Romain Badouard points out.

A question of balance?

States and social networks alike have a duty to protect and respect human rights but, as we have seen, both can also infringe those rights. The present case shows that these two centres of power, States and social networks, can, and ideally should, counterbalance each other in order to better protect the rights of individuals.

It is in this context that a human-rights-based approach to online content moderation becomes necessary. The UN Special Rapporteur on freedom of opinion and expression had already observed in 2018 that “some States [...] resort to censorship and criminalisation to shape the online regulatory framework,” putting in place “broadly worded restrictive laws on ‘extremism’, blasphemy, defamation, ‘offensive’ speech, false information and ‘propaganda’ [that] often serve as a pretext for demanding that companies remove legitimate speech.” On the other hand, while social networks present themselves as promoters of rights such as freedom of expression, the Special Rapporteur also noted that most of them do not ground their activities and content moderation policies in human rights principles.

The framework of international human rights law not only offers individuals the possibility of challenging measures taken by their governments; it also offers social networks a language for contesting unlawful demands from States and for "articulating their positions around the world in a manner that respects democratic norms." It remains for States and platforms alike to take full hold of these instruments.

Permalink
August 3, 2023 at 8:52:13 PM GMT+2

Prompt Armageddon: the third narrative. – affordance.infohttps://affordance.framasoft.org/2023/06/prompt-armaggedon-le-troisieme-recit/

  • Artificial intelligence
  • Fakeness
  • Datafication
  • Social Network
  • Societal Collapse

Prompt Armageddon: the third narrative.

Olivier Ertzscheid, 20 June 2023

In 2010, The Economist ran its cover story on the concept of the "Data Deluge." In thirteen years the deluge never ceased. It irrigated every corner of our lives, unfolded in an uninterrupted litany of applications and metrics, fed the whole range of public policies, became the visible face of some of them through Open Data, made the bed of every surveillance nightmare, and formed the unique and iniquitous horizon of so many dreams of mastery and anticipation.

Everything became "Data": data journalism, data visualization, data governance... So much data, so many "givens" (données) that are in fact only "obtaineds" (obtenues), as Bruno Latour explained, who also explained why, by refusing to see that they were only that, we proved so incapable of drawing anything more from them than a few oracles for improbable or opportunistic cenacles.

So much data, yet so few possibilities of manipulating it in the etymological sense of the term. What it lacked, in truth, was a made-to-measure interface. This is the interface that ChatGPT revealed at scale and in public, and thus to the "general public." The simplest of interfaces because, paradoxically, the most unfathomably complex in combinatorial terms, for a system that is nothing but statistical prediction. Riveted to the on-screen display of the scripts ChatGPT returns, as if before a casino slot machine, we nurture the probabilistic belief that something true, considered or sincere might be expressed there. And we watch language assemble itself before us. Most often to say nothing, but to say it well. Most often merely rewriting what has been read elsewhere, barely rearranging it, but arranging it well, and without ever citing the elsewheres from which the combinatorics drinks.
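The "statistical prediction" described above is, mechanically, next-token sampling. As a minimal illustration (a toy, hand-written distribution, not a real language model, and the candidate tokens are invented), this Python sketch shows the slot-machine mechanic the author evokes: a weighted random draw over possible continuations, with no notion of truth behind it.

    import random

    # Toy next-token distribution. In a real LLM these weights come from a
    # neural network conditioned on the whole prompt; here they are made up.
    next_token_probs = {
        "plausible": 0.40,
        "well-phrased": 0.35,
        "sincere": 0.20,
        "true": 0.05,
    }

    def sample_next_token(probs):
        """One pull of the slot machine: a probability-weighted draw."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Each call is an independent weighted draw: fluent, but indifferent
    # to whether the continuation is true or sincere.
    print([sample_next_token(next_token_probs) for _ in range(5)])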

The interface of language, of the prompt, of the script, already existed in search engines in the form of queries, including voice queries. But it did not construct the echo of a dialogue. It was a well rather than a mirror. And above all, it did not do its work before our eyes. We were like drivers pressing the accelerator and noting the increase in speed; with ChatGPT (and its kind) we are inside the engine itself, observing the acceleration at the same time as we feel it, and keeping the impression of a feeling of command and control.

The appearance of a playful, pagan miracle: that is what these language interfaces and generative artifacts are. Combined tomorrow with the other thaumaturgic interfaces that mobilize and equip our gaze and our gestures as well as our language, they will give us a power whose illusion will never have been so strong and so clear, and whose grip on the world will never have been so weak and so derealizing.

Data Storytelling and Prompt Clash.

Mirroring what has happened in the political sphere since the very start of the 2010s (Barack Obama was first elected in 2008), with the shift from an era of storytelling (based, among other things, on data analysis) to an era of the clash (resting on rhetorical mastery of media discourse), it is now the entire ecosystem of media discourse, along with an ever more significant share of our social interactions with information, that is leading us from a "data" society to a "prompt" and "script" society. A postmodern era: in the beginning was the command line, and now we place our commands online, for pizzas as for dialogues and social interactions.

The Data Deluge was at once a screen concept and a mobilizing concept. It settled us into a posture of impossible negotiation: since data was everywhere, supposedly at least, we had to accept that it governed our lives and political decision-making, with the full set of economic servitudes that entails.

After me, the deluge. And after the deluge?

Let us recap: before "Data" there was the "me," which prepared its advent. A web whose center of gravity was no longer documents but profiles, in which the human being was a document like any other, a web "social by default," where the ego trip became a power long before the charismatic capitalism of influencers turned into a catharsis of advertising barkers. The me, then the Data. After me, the deluge.

Then, after the data deluge, came the infocalypse. The apocalypse of the fake. Infocalypse Now.

And now, everywhere, "prompts" and "scripts" (for ChatGPT or others) that feed the machines as much as they exhaust language. What do these successive waves of media and informational saturation tell us, around "Data," then the "infocalypse," then the "prompt," and also the various "Metaverses"?

Prompt Armageddon.

In my previous article on the "Apple Vision Pro" headset, I explained that beyond the technological advances, the point was above all to provide a support function for a world that has become literally unbearable.

Grégory Chatonsky formulates the hypothesis that "AI allowed Silicon Valley to relaunch itself politically. In a context of planetary extinction, technologies were looking more and more problematic. It only had to metabolize extinction into AI itself, by claiming that this technology constituted the greatest danger, in order to continue its ideological production at all costs."

The technologies linked or associated with various forms of "artificial intelligence" promise us meta-control of worlds that are neither the world nor our world. They push the cursor all the way to acceptance of the idea that this meta-control relieves us of our power to act on the real, without inducing any deadly guilt, since it would mean joining a kind of technological élan vital (kisses, #Vivatech).

It has often been explained that two great technological narratives confront each other: that of emancipation through technology on the one hand (technological solutionism), and that of an Armageddon of killer robots or killer AIs on the other (roughly speaking). And that these narratives were mutually exclusive. It is interesting, in this light, to watch the programmatic emergence of ambiguous figures such as Elon Musk, go-betweens capable of asserting at once that AI is the greatest threat to humanity and of making it the core of the technological development of his various companies and products.

It now seems that these narratives not only intertwine but are, more fundamentally and more politically, in the service of a third narrative. This third narrative is that of a withdrawal from the world. To put it bluntly: since one hypothesis holds that technology will save us, since another holds that it will condemn us, and since there seems to be no reliable scenario in advance for balancing the decisions to be made, every technological development, independently of its nature as a "pharmakon" (it is at once remedy and poison), carries, displays, sells and declines within itself the possibility of a withdrawal from the world, whose acceptability is worked on through promises of singular power. The Metaverse, virtual or augmented reality headsets, but also these LLMs (Large Language Models), are so many of these promises of singular power. Yet the more Big Tech supplies us with thaumaturgic environments and interfaces, the more it works to settle us into a form of dispossession. The more things and alternative, virtual or mixed realities we "command," the more these commands pass through each of our senses (touch, sight, voice), the further research advances to push back that last possible interface constituted by our brain and its waves, and the less grip we have on the real.

The first great narrative of digital technologies consists in preaching a supremacist solutionism, one that crushes any horizon of contestation or debate, either on the grounds of a fantasized vision of "progress" forgetful of its negative externalities, or (most often) for purely economic reasons of preserving rent effects, which relegate any ethical consideration to the status of a clinical trial run by Didier Raoult.

The second great narrative of digital technologies is a renewal of Luddism crossed with collapse narratives, in which every piece of technology, here too in an essentially caricatural way, is destined only to turn against all or part of humanity or, more plausibly, would condemn it to over-exploiting the very cycle that is already leading past planetary limits.

The third great narrative of digital technologies is a meta-narrative. A sort of string theory (Wish version), in which the relation of power is inverted. After a time when digital technologies were supposed to lighten our cognitive frame and tasks by substituting for them (externalized memory); after a time when many of us, among the poorest, the most fragile and the least qualified, ended up becoming the low-cost auxiliaries of technologies initially presented as capable of emancipating them and which only alienated them (Digital Labor, in short); here we now are in a capitalist system in deadly equilibrium, where the invisibilization of extractive processes, of data as much as of subjects, is at its apogee. Its task now is to extend its extractivism to the whole set of existing and accessible alternative realities (augmented, virtual, mixed), putting us in a position to exhaust language itself through prompts that tend to manufacture an inexhaustible jumble of texts whose sole vocation is to feed the generators that produce new ones. Until exhaustion. These realities, augmented, virtual and mixed, are every bit as fossil as the energies of a world running dry. Perhaps even more so.

One More Thing: Human-Information Interfaces.

HCI, "human-computer interaction," is a research field in its own right, mobilizing notions from ergonomics, design, computer science and psychology. We need to think about constituting a field of knowledge around what might be called HII, the analysis of the interfaces (social, technical, cognitive) between the Human (multi-equipped or not) and Information (in its dissemination, its relays and its comprehension). A field in which the "information and communication sciences" have their full place but are not the only legitimate players. The new "digital humanities" are above all informational and cultural.

The digital constantly obliterates spaces of negotiation and recourse. The most structuring laws put in place are not those that tend to regulate it but those that legitimize it as an instance of control. The most structuring deployments carried by public authorities aim not to bring closer and to facilitate, but to distance and to suspect. The digital, in its deadly aggregation around private interests and toxic technical architectures, has failed to become a diplomacy of the common interest.

And if diplomacy is a theory of relations between States, then the field of HII, of "human-information interfaces," must become a diplomacy that is a theory of relations between the states of information and of its circulation, in the social field as in that of individual perception.

Permalink
July 4, 2023 at 10:37:26 PM GMT+2

faceted. - Reddit: Killing a Giant by Bill | living life in SFhttps://www.faceted.social/p/3169aedd-3f06-4db4-b51c-ed4690e9f5a6

  • Social Network
  • Big Corpo

Reddit: Killing a Giant

Bill | living life in SF • Jun 16

It's understandable to be defeatist about the recent Reddit protests. The subreddit blackouts weren't universally popular, and similar attempts to migrate away from Twitter have hit road bumps. The network effect of giant platforms seems insurmountable, but giants have fallen before and will continue to do so. Having some recent experience in the industry, I wanted to give a fact-based analysis and answer to: how do you kill a giant, and what does it look like when it falls?

If you're confused about what's happening on Reddit, check out this link.

To start, I want to talk about "migrations" from platforms, and how they often aren't what they seem. It's easy to think of these giants dying overnight, because to us they essentially do. One day, we visited MySpace or Digg for the last time, and then never looked back. However, the data says otherwise: killing a giant takes time.

In 2010, Digg introduced v4, which completely changed the UI and added a lot of features that users hated. In hindsight, this was the start of Digg's decline, but at the time, the story was a bit more complicated.

After the launch of v4, Digg traffic did drop dramatically. However, one year after launch, Digg still had 8.5 million monthly US visitors compared to Reddit's 13.7 million. According to traffic estimates, Reddit traffic didn't surpass Digg traffic until December 2011, more than a year after v4's launch.

Similarly, Facebook surpassed MySpace in unique visitors from the US in 2009, but MySpace still had 36 million users in 2013. The data says the same thing: giants take time to fall.

It's very likely that, during this period, people were visiting both Reddit and Digg. At the end of the day, the people who continued to visit Digg did not end up saving Digg. If and when Reddit begins falling, checking it out a few times a day won't save it either.

Also, when Digg's traffic fell, Reddit's traffic did not rise by the same amount. A lot of people who disliked Digg's change simply moved on altogether; they didn't move to Reddit (it was a bit of a hard sell for some). Ultimately, Reddit didn't succeed Digg by attracting all the old Digg users, but by building its own community with its own users.

So how do you attract your own users? Successful platforms build a beachhead with a niche. Reddit was the underdog focused on tech, and Facebook was the underdog for college students. The competitor needs to be "the" place for some community. A great example of this is Discord, where for certain niches "the" community exists only on Discord.

Eventually, once a large enough network effect is established and Reddit keeps implementing habit-breaking changes on its platform, people will start to choose and recommend the alternative over Reddit. Thus the giant dies.

One corollary to building a niche is that cargo-culting Reddit will likely not work. That is to say, simply recreating the same communities from Reddit in your new alternative and hoping the content creators come is not a winning strategy. Community needs to be grown organically; simply copying Reddit will not work.

Similarly, I don't think you need an exact feature match to kill Reddit. You see this a lot in discussions of Reddit alternatives: "Oh, I'd use it if it had a mobile app"; "oh, I'd use it if it looked more like Reddit." If you have the community, people will come, and they will get over the fact that you don't have a mobile app. If you don't have the community, no amount of cool features will get people to come over and stay.

Permalink
July 3, 2023 at 10:47:59 PM GMT+2

Black Mirror: narcissism in the digital age | Cairn.infohttps://www.cairn.info/revue-le-carnet-psy-2017-1-page-27.htm?ora.z_ref=li-92683689-pub

  • Societal Collapse
  • Social Network
  • Big Tech

Black Mirror: narcissism in the digital age

Lise Haddouk

In Le Carnet PSY 2017/1 (No. 204), pages 27 to 29

Cyberculture offers fascinating productions, both artistic and scientific. A term that appeared in the early 1990s, cyberculture usually designates a certain form of culture developing around the Internet. According to Wikipedia, cyberculture encompasses very diverse productions linked to ICT (Information and Communication Technologies), notably multimedia works mixing image, sound and programming. But the notion of cyberculture goes beyond a cultural genre. It is said to designate "a new relationship to knowledge, a profound transformation of the very notion of culture," even a collective intelligence, of which Wikipedia itself could serve as an example. This cultural revolution is also said to mark "the advent of world-culture," or of World philosophy.

In the era of post- and even hyper-modernity, screens have become utterly indispensable in our daily lives. Observing the uses we make of these screens often brings up the risks linked to excessive screen time, the violence of the images in circulation, and a certain dehumanization of computer-mediated, or digital, relationships. In certain respects, these uses point to a narcissistic side of the personality, described by Lasch (1979) at the societal level as "the culture of narcissism."

The series Black Mirror offers many examples of the possible abuses linked to toxic uses of screens. Note that the series format, which has become particularly popular and well suited to television and then computer screens, is a cultural production in its own right, and can therefore reveal the values of our society. Black Mirror (2011) is a British television series created by Charlie Brooker. Its episodes are linked by a common theme: the deployment of a dystopian technology. The creator explains that the series' title refers to technology regarded as a drug: "If it's a drug, then what are the side effects? It's in this zone between joy and unease that Black Mirror is set. The Black Mirror of the title is the one you'll find on every wall, on every desk and in every hand: the cold, shiny screen of a television or a smartphone. Each episode has a different cast, a different setting and a different reality, but they are all about the way we live now, and the way we might be living in 10 minutes' time if we're clumsy." By definition, a screen serves to project something and thus to attract the gaze. The retina is itself considered a screen. As projection surfaces, today's screens are often used as mirrors, and the series denounces the dehumanizing aspects of ICT and the extremely violent uses that could be made of it, in a society very close to our own. The black screen, a surface for potentially violent and archaic fantasy projections, seems able to stimulate the scopic drive of certain users and lead to abuses such as those depicted in the series Black Mirror.

The screen can thus be used as a mirror in the relationship it allows one to establish with "virtual" others. In that case, the narcissistic aspect risks dominating the relationship, with various possible consequences, such as dependence on this "screen-mirror," or the reinforcement of users' isolation in a form of self-satisfaction. One might then speak of the risks of the "non-objectal digital relationship," or "narcissistic digital relationship."

Beyond screens, however, one of the current issues linked to uses of ICT concerns the robot. The passage from screen to robot reveals a form of embodiment of the computer, which takes on a more or less human appearance. The robot has a body, which enriches the palette of possible sensory interactions, and thus probably a feeling of intersubjective presence, as against the more specular and narcissistic dimension of screen-mirrors. The feeling of presence is discussed in many works of cyberpsychology, and it represents one of the vectors by which one might assess the quality of a digital relationship, as more or less objectal. However, the sensory capacities of the humanoid robots currently available on the French market, such as Nao, remain rather limited. By contrast, Professor Ishiguro's robots are terribly human. On the subject of digital relationships between humans and robots, another series offers several scenarios in a fiction that is nonetheless very realistic.

Real Humans: 100% Human (2012) is a Swedish television drama series created by Lars Lundström. It is set in an alternative contemporary Sweden, where the use of androids is becoming ever more prevalent. These androids, called "hubots" in the series, have moved into homes and businesses to help with domestic and industrial tasks. The hubots, a portmanteau of human and robot, have a USB port at the nape of the neck so they can be programmed, a retractable power plug under the left armpit, and a micro SD slot nearby. They are used as servants, workers, companions and even sexual partners, although the country's legislation forbids it. But increasingly sophisticated pirated software has also allowed them to have feelings and thoughts. Some hubots are in fact clones of humans, to whom the humans' memory has been added. This installation allows them to become almost immortal in their robot bodies, and the police hunt them down to be studied and destroyed. While some people adopt this new technology, others are afraid and dread what could happen as humans are gradually replaced as workers, companions, parents and even lovers.

This fiction evokes Sherry Turkle's work on human-machine relationships, notably in her book Alone Together. According to the author, our uses of the Internet have prepared us for the current "robotic moment." Online, privilege is given to our capacity to share our ideas, but we easily forget the importance of listening, of silences, of the meaning of a hesitation. Thus, "the 'as if' satisfactions of the robotic moment" raise the question of whether, by becoming friends with robots, we would lose some of our humanity. The massive embrace of companion robots would lead us on a "voyage toward forgetting" the fundamental values of our humanity, because by its very essence the robot will never die. Thus, "the artificial makes it possible to create attachment without risk," and therefore distances us from what characterizes human relationships, which are fundamentally marked by lack, death and separation. While taking these reflections, particularly important today, into consideration, one may ask whether other, more humanist uses of ICT could be envisaged, by strengthening the intersubjective dimension of remote interactions.

Cyberpsychology is an emerging discipline that studies the possible links between psychology and digital technologies. In cybertherapy, several research and care protocols are already being carried out in different countries, notably the treatment of psychopathological disorders through exposure to virtual reality environments, psychotherapeutic mediation through video games, and the remote psychotherapeutic care of patients by videoconference. In North America, these new psychotherapeutic methods are governed by a practice guide published in 2013. Faced with the emergence of these uses of technology, one may reflect on the more or less intersubjective dimension of digital relationships, drawing in particular on the feeling of presence to feed that reflection.

The feeling of presence and immersion are two concepts of growing interest to researchers in "virtual reality." "Presence" often refers to a feeling associated with, and encouraged by, immersion in a "virtual reality" environment. A person's capacity to feel "enveloped" by, or "present" in, a "virtual environment" seems necessary, particularly in psychology, in order to deliver quality therapeutic services through "virtual reality." Presence is traditionally defined as the psychological perception of being "there," inside the virtual environment in which the person is immersed. But although researchers agree on this definition, each adds somewhat different nuances to it. Note that current technologies in cyberpsychology mostly involve human actors using machines, on the psychologists' side as on the patients'. However, research projects are emerging to develop psychologist avatars that could produce a psychopathological diagnosis. Taking this reflection a little further, one can already imagine a robot psychologist...

Faced with all these questions, and in order to limit the narcissistic aspect of the screen-mirror, the contribution of clinical psychology and its ethical dimension seems necessary in order to enrich the field of cyberpsychology. Clinical reflection in cyberpsychology can thus provide elements of an answer, so as to strengthen and preserve the intersubjective dimension in the interactions that technology offers.

To escape the dead end of narcissism, Lasch appealed to the theory of "transitional objects." Transitional objects help the child recognize the external world as something distinct from itself, though connected to it. But this transitional character is said to be missing in consumer societies, which rarely leave room for frustration and lack, factors that contribute to the elaboration of thought. This theory is very useful for analyzing current uses of ICT. Fostering the transitional character of the Internet would thus be quite distinct from a "narcissistic use" of this technology. To describe the constitution of the psychic subject, Winnicott discussed the mirror stage, giving it a meaning different from that of the specular mirror described by Lacan. The potential space created between the child and the mother's gaze as mirror constitutes a space of creation of the subject. This potential space is also an "area of separation," which makes it possible to go out and meet the "self." The first mirror, then, is the mother's face.

This subjectivating aspect of the gaze is found again in video consultation, as a remote relationship that can include a symbolically present human third party and take shape in an interactive, intersubjective exchange, which may be described as an "objectal digital relationship," or "intersubjective digital relationship." From the design of the device onward, the goal of the video consultation experiment was to foster the establishment of an object relationship at a distance, which the first results suggested was possible. This type of digital relationship appears to possess full reality, notably on the psychic level, and therefore cannot be described as "virtual."

Published online on Cairn.info on 30 January 2017

https://doi.org/10.3917/lcp.204.0027

Permalink
July 1, 2023 at 11:36:49 AM GMT+2

Influencer Caryn Marjorie is competing with her own AI to chat with fans - Los Angeles Timeshttps://www.latimes.com/entertainment-arts/business/story/2023-06-27/influencers-ai-chat-caryn-marjorie

  • Artificial intelligence
  • Attention Economy
  • Societal Collapse
  • Social Network

Thousands chatted with this AI ‘virtual girlfriend.’ Then things got even weirder

By Brian Contreras, Staff Writer. June 27, 2023, 5:30 AM PT

Last month, Caryn Marjorie went from a successful but niche social media star to a person of national interest: the subject of attention-grabbing headlines and, for many commentators, a template upon which to project their anxieties about rapidly advancing artificial intelligence.

The cause of the furor was a partnership Marjorie, 23, had launched with a technology startup promising to make a personalized AI “clone” of the Scottsdale, Ariz.-based lifestyle influencer. For a dollar a minute, fans she might never have otherwise had the time to meet could instead chat with Marjorie’s digital double.

CarynAI, as the audio chatbot has been dubbed, is explicitly framed as a romantic companion — one that aims to “cure loneliness” with software that supposedly incorporates aspects of cognitive behavioral therapy into its conversations. Marjorie said her fans have used the program to ask for life advice and roleplay a sunset date to the beach.

Marjorie was at one point tracking her subscriber growth in tweets about how many new “boyfriends” she had. “They feel like they’re finally getting to know me, even though they’re fully aware that it’s an AI,” she told The Times.

i have over 20,000 boyfriends now 💗

— Caryn Marjorie (@cutiecaryn) May 20, 2023

This HAL 9000 version of pillow talk has, predictably, triggered a backlash. Critics branded CarynAI as alternately demeaning women, enabling antisocial straight-male behavior or signaling impending societal collapse. Coming amid a period of uncertainty about what AI means for jobs, relationships and cultural institutions, Marjorie’s move toward self-automation seemed to perfectly encapsulate an increasingly bizarre present.

“We’re talking about an AI system [where] theoretically the goal is to keep people on as long as possible so that you continue earning money,” said Amy Webb, chief executive of the consulting firm Future Today Institute. “Which means that it’s likely going to start incentivizing behavior that we probably would not want in the real world.”

Webb suggested as an example a bot that’s too obedient — listening passively, for instance, as a user describes disturbing fantasies. Marjorie has addressed similar dynamics before (“If you are rude to CarynAI, it will dump you,” she tweeted at one point), but when asked about Webb’s perspective she instead emphasized her own concerns about addiction.

"I have seen my fans spend thousands of dollars in a matter of days chatting with CarynAI," Marjorie said; one fan, at the bot's encouragement, built a shrine-like photo wall of her. "This is why we have limited CarynAI to accepting only 500 new users per day."

As AI comes to play a growing role in the economy, and especially creative industries, the questions prompted by CarynAI will only become more widespread.

But Marjorie isn’t placing all her chips on the technology just yet. Within weeks of announcing her AI clone, she launched a second partnership with a different tech company. This one too would let fans talk with her, but instead it would be Marjorie herself on the other side of the screen.

She struck a deal with Fanfix, a Beverly Hills-based platform that helps social media creators put their premium content behind a paywall, and started using its messaging tools to chat directly with customers.

The result is essentially a two-tier business model where lonely guys looking for a 3 a.m. chat session can talk with Marjorie’s machine mimic, while die-hard fans willing to shell out a bit more can pay for the genuine article.

That within the span of a few weeks Marjorie launched two different, seemingly contradictory business ventures — both aimed at turning fan conversations into money — speaks to a central question of an AI-obsessed moment: With robots increasingly entangled in creative industries, what work should be asked of them and what should be left to us?

Marjorie’s hybrid model offers a preview of one possible path forward.

Users pay a minimum of $5 to send her a message on Fanfix, said co-founder Harry Gestetner. That pricing difference — $5 for one human-to-human text versus $1 for a minute of the AI voice-chatting — signals an approach to automation in which workers use machine learning not as a wholesale replacement but as a lower-end alternative for more frugal customers. (Think of an artisanal farmers market cheese versus a machine-made Kraft Single.)

“Messaging directly with a fan on Fanfix will always be a premium experience,” Gestetner said. “It’s important to view AI as the co-pilot, not the pilot.”

(According to Fanfix, Marjorie is making $10,000 a day after soft-launching on the platform and is projected to hit $5 to $10 million in total earnings by the end of the year.)

John Meyer, founder of Forever Voices, the Austin software company that developed Marjorie’s AI simulacrum, is naturally a bit more bullish on the benefits of punting fan interactions to the computer. In some cases, Meyer said, the bots can be more eloquent than the influencers they’re meant to replicate.

“One of the first feelings it brings up is the idea of, like, ‘Wow, should I be threatened by my own AI copy?’” Meyer said.

He lost his father when he was in his early 20s, and started working on the Forever Voices technology late last year as a means of reconnecting. After developing a voice replica of his dad — which he describes as “very realistic and healing” — Meyer expanded into voice clones of various celebrities and, more recently, web personalities. (One of the biggest names in online livestreaming, Kaitlyn “Amouranth” Siragusa, just signed up.)

The company has been inundated with requests from thousands of other influencers asking for their own AI clones, according to Meyer. “We really see this as a way to allow fans of influencers to connect with their favorite person in a really deep way: learn about them, grow with them and have memorable experiences with them,” he said.

The high demand is in part because maintaining a substantial online following can involve a lot of work — not all of it particularly interesting.

“On a daily basis, I see anywhere from 100,000 to half a million messages on Snapchat,” Marjorie said, explaining the workload that led her to embrace CarynAI. (She has 2 million followers on the messaging app; according to a recent Washington Post article, 98% of them are men.)

She added: “I see AI as a tool, and it’s a tool that helps creators create better content.”

Some of her industry peers are skeptical, however, including Valeria Fridegotto, a TikToker with 20,000 followers.

Fridegotto hasn’t written off the technology completely, though. Software that could lessen the workload of fan interaction would be great, she said, but the examples she’s seen released so far don’t seem lifelike enough to run without supervision. There still are too many errors and non sequiturs — what AI experts call “hallucinations.”

“It has to be developed to the point where we are very confident that this technology is going to act as good as us, or better,” Fridegotto said.

As the market floods with imitators, some influencers may even discover renewed demand for “old-school” human-made content.

“People will start leaning more heavily into authentic, personality-driven content,” said Jesse Shemen, the chief executive of Papercup, a startup that uses AI to automatically dub videos. “In the same way how there’s this fascination and big following behind organic food … I think we’ll see the same thing when it comes to content.”

There is a place for automation on social media, Shemen added, especially for people churning out loads of content on a short timeline — news reaction videos, for instance. But, he predicted, there will be a limited market for digital clones such as Marjorie’s.

Still, a space as frothy as AI is hard to ignore. Even Fanfix, the company helping (the real) Marjorie talk to her super fans, is interested. The company’s founders say they’re actively looking at how AI could help influencers.

Although the influencer economy still needs actual humans, the limits of what AI can do are receding, and many web personalities are getting more and more interested in using the technology to automate at least some of their workload.

Such questions are not confined to social media. Artificial intelligence is being rolled out across creative industries, with media outlets such as Buzzfeed incorporating it into their publications and film studios leveraging it for postproduction work. AI-based screenwriting has emerged as a key concern in the ongoing Writers Guild of America strike.

But social media is uniquely personality-driven, making the sector’s relationship with AI particularly fraught. The value of web personalities depends on their ability to win trust and affinity from their followers. That connection can be so powerful that some experts refer to it as a “parasocial relationship” — a strong but ultimately one-sided devotion to a public figure.

It’s a tricky dynamic to navigate, and one Marjorie finds herself in the midst of.

“In the world of AI, authenticity is more important than ever,” the influencer tweeted last month. “My tweets, [direct messages], direct replies, Snaps, stories and posts will always be me.”

CarynAI, she added, will be an “extension” of her consciousness; it “will never replace me.”

Permalink
June 29, 2023 at 10:20:12 AM GMT+2

Beyond the like and the anger. – affordance.infohttps://affordance.framasoft.org/2022/03/par-dela-like-colere/

  • Social Network
  • Attention Economy
  • Dark Pattern
  • Datafication
  • Big Data

Beyond the like and the anger.

Olivier Ertzscheid, 21 March 2022

Industries of alienation.

There is no doubt today that the tobacco industry was always aware of the dangerousness of the products it sold. Just as there is no doubt that this same industry, from the "torches of freedom" advertising campaign of the 1930s to the Marlboro Cowboy campaigns of the 1980s, always passed off as a vector of emancipation what was at once a poison and a vector of alienation.

There is no doubt today that the oil industry was always perfectly aware of the deleterious effects of its unbridled extractivism on the climate, and that here too it constantly sought to deny those effects by deploying political lobbying, media disinformation and financial corruption all at once.

In a decade or so, and perhaps even sooner at the current pace of the scandals following one upon another, there will be absolutely no doubt that the big "social media" technology firms were likewise perfectly aware of the deleterious effects of their "services" on democracy, and that they always outdid one another in blind, deadly cynicism, presenting as tools of emancipation what they organized solely to serve logics of alienation, in the service of a business model that is itself utterly unsustainable without its accursed share; an accursed share that rests on captures of value and of data as undue as they are disproportionate to the actual efficiency of the services provided.

Since its annus horribilis of 2018 (the Cambridge Analytica scandal, hacking and a massive leak of personal data, the hiring of a PR agency with mafia-like practices, etc.), controversies and scandals have not stopped piling up, and the life of Facebook's CEO is punctuated by summonses to appear and explain himself before the elected assemblies of a whole series of countries, starting with his own.

The most recent revelations are those of the whistleblower Frances Haugen, who demonstrates and documents several facts. First, a regime at once arbitrary and discretionary which, depending on whether you are powerful (a large, high-profile account) or wretched, will exempt you from certain of the moderation rules laid down in the firm's terms of service. Second, that the polarization for which the firm is so often reproached is conscious and instrumental, not the result of an algorithm conveniently presented as a sort of autonomous causality. If opinion-polarizing speech, divisive opinions and posts capable of triggering feelings of anger, indignation and sometimes hatred are so present on the platform, it is because the platform chose, chose, to assign different values to its six reaction emoji (inspired by Paul Ekman's six basic emotions): anger is thus worth 5 "points," while a like is worth only one.
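To make the mechanics concrete, here is a minimal sketch of reaction-weighted feed scoring. Only the 5-to-1 anger/like ratio comes from the leaked documents; the other weights, the post data and the scoring function are invented for illustration.

    # Reaction weights: anger counts five times as much as a like, per the
    # leaked documents. All other figures below are invented.
    REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "wow": 1, "sad": 1, "angry": 5}

    posts = {
        "calm, factual post": {"like": 900, "angry": 10},
        "rage-bait post":     {"like": 200, "angry": 300},
    }

    def engagement_score(reactions):
        """Weighted sum of reactions, used here as a feed-ranking signal."""
        return sum(REACTION_WEIGHTS[r] * n for r, n in reactions.items())

    for name, reactions in posts.items():
        print(name, engagement_score(reactions))
    # The rage-bait post scores 1700 against 950, despite collecting far
    # fewer reactions overall: weighting anger mechanically promotes it.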

Frances Haugen also shows that the firm concentrates its moderation efforts (algorithmic and human) mainly on the United States, to a much lesser extent on Europe, and that it neglects, in quantity (of moderators) as in quality (of language coverage), a whole set of countries where the social network is nonetheless very strongly established, many of which are in situations of near civil war or in the hands of governments that are, at the very least, highly authoritarian. In such contexts, the explosion of hate speech against minorities (religious, ethnic, sexual) causes far more than mere breaches of public order. There is also the matter of Instagram's impact on the mental health of fragile young people with eating disorders.

Beyond the like and the anger.

What Frances Haugen shows is not "just" these facts, but the fact that the platform knew. That Facebook, on moderation questions for instance, had not only deliberately set up these arbitrary and discretionary regimes but lied every time it was questioned on the subject. What Frances Haugen demonstrates is that all of this, all these deleterious and sometimes deadly or crime-breeding effects on public discourse and private expression, is not the doing of an algorithm gone mad or an artificial intelligence out of control, but the result of choices consciously made by the platform and its engineers to maximize its economic profitability to the detriment of everything else. What Frances Haugen demonstrates, finally, is that even when the firm's own employees flagged clear evidence of these problems, together with solutions for treating or correcting them, Zuckerberg refused to put them in place.

Lies, cynicism and concealment: such is the reverse side of the firm's motto during its years of full expansion, "Move fast and break things." The movement was indeed fast. And many things got broken.

Hit... sunk?

Whether Facebook will recover from all these accumulated scandals is unknown, but one may suppose so. Other monopolistic or oligopolistic firms have already faced similar reputational crises and recovered every time, from Microsoft to Google by way of Amazon or even Apple, to mention only the other GAFAM. Financial results keep coming in higher, including those that followed Frances Haugen's revelations, and no massive, or even significant, flight or exodus of the platform's users can be observed. Why change, then, when it is enough to hunker down, let the storm pass, and turn up contrite at hearings before the nations' elected representatives, swearing to fix it all by adding yet more "artificial intelligence and algorithms," when the problem comes neither from artificial intelligence nor from algorithms, which commit only the errors or faults their initial programming permits; an initial programming drawn up precisely to meet the attention and interaction profitability targets that keep the firm's cash machine running, with the most perfect contempt for the balance of public debate.

As with Frances Haugen's revelations, every time the problems posed by the platform's automation, through its algorithms or its "artificial intelligence" technologies, are demonstrated, Zuckerberg merely replies that he has understood, sometimes that he is sorry, and that he is therefore going to... add more algorithms and artificial intelligence.

Yet many solutions that only yesterday seemed utterly far-fetched are now installed in the field of public and political debate as ways to address these problems: nationalization (an occasion to recall that all these companies rest on a core of public technologies and infrastructure), breakup in the name of antitrust law, and far more coercive coordinated regulation (in Europe in particular), with Mark Zuckerberg himself asking states for more regulation... of the Internet.

But nothing will be possible or resolved until three even more essential points are settled once and for all. Here they are.

Open up, slow down, and verify.

First, we must open up, must place under public deliberation, the part of the algorithmic code that belongs to classic editorializing logic. And this can be done without ever infringing the firm's trade or industrial secrets. We know, for instance, the key principle of the main algorithm of the Google search engine (PageRank, whose formula is laid out in "The Anatomy of a Large-Scale Hypertextual Web Search Engine," published in 1998 by the engine's two founders). It is abnormal and worrying that it should be so difficult, and often impossible, to reverse-engineer how the social medium that partly conditions the sociability and information access of 2.8 billion human beings actually works. To take an image from another industry: while no one knows the precise recipe of Coca-Cola, everyone today knows the drink's sugar content thanks to independent analyses (no one imagines that only the Coca-Cola company could tell us its sugar content and that we would have to take its word for it). The sugar content of Coca-Cola is, in a way, the weight given to anger on Facebook: it is entirely abnormal and dangerous that it should take internal documents leaked by a whistleblower to discover that anger is worth 5 points and the other emotions less. And that is only one tiny example of the editorial stakes on which the firm's algorithmic architecture is founded.
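The PageRank principle invoked here is indeed public. The 1998 paper gives the recurrence PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where the Ti are the pages linking to A, C(Ti) their outbound link counts, and d a damping factor set to 0.85. Here is a minimal power-iteration sketch of that published formula, on a four-page link graph invented for the example:

    # PageRank by power iteration, following the Brin & Page (1998) formula.
    # The four-page link graph is made up for illustration.
    links = {            # page -> pages it links to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    d = 0.85                              # damping factor from the paper
    pr = {page: 1.0 for page in links}    # initial ranks

    for _ in range(50):                   # iterate until the ranks stabilize
        pr = {page: (1 - d) + d * sum(pr[q] / len(links[q])
                                      for q in links if page in links[q])
              for page in links}

    print({page: round(rank, 3) for page, rank in pr.items()})
    # "C", which all the other pages link to, ends up with the highest rank.

The author's point stands independently of the sketch: this level of disclosure (the principle, not the production code) is exactly what Facebook has never offered for its feed ranking.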

And this placing under deliberation must happen before trusted third parties (independent regulatory bodies), none of whose members can or should depend on Facebook in any way whatsoever, nor, of course, be chosen by the firm itself, as is currently the case with the pseudo "Oversight Board" Facebook created in 2018.

Next, we must break the chains of viral contamination that are at the root of most of the problems of harassment, disinformation and hate speech overall. Here again, the platforms' cynicism is as obvious as it is documented, since they have themselves demonstrated, several times over, that if, for example, they reduced the default number of people one can invite into WhatsApp groups, or the number of conversations and groups to which messages can be automatically forwarded, they also considerably reduced the speed at which fake news circulates, notably in election periods; that if they removed the visibility of the number of likes or other reactions on a post (so that only the post's creator could see them), they blunted the often deleterious effects of social conformity (and pressure) and made it possible to move toward far more virtuous sharing logics, essentially qualitative rather than purely quantitative; and that if they merely asked people whether they had actually read the article they were about to share in the heat of emotion, they again reduced the circulation of false information quite significantly. Only a few days ago, YouTube announced it was removing the public display of its "dislike" counter to "protect" creators, notably from forms of harassment, an effect it has known about and documented for many years.
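Why forwarding caps work is easy to see in a toy branching model. The sketch below is not WhatsApp's real dynamics, and every number in it is invented; it only shows that capping how many chats a message can be forwarded to caps the growth factor of each hop of spread.

    def reach_after_hops(forward_cap, hops, share_rate=0.1, group_size=10):
        """Toy branching model: at each hop, a fraction of current readers
        forwards the message to up to `forward_cap` groups of `group_size`."""
        readers = total = group_size
        for _ in range(hops):
            forwarders = readers * share_rate
            readers = int(forwarders * forward_cap * group_size)
            total += readers
        return total

    # Same message and same sharing behavior, different forwarding caps:
    for cap in (20, 5):
        print(f"cap={cap}: ~{reach_after_hops(cap, hops=5):,} people after 5 hops")
    # cap=20 reaches tens of millions; cap=5 reaches tens of thousands.
    # The cap bounds the per-hop growth factor, so the effect compounds.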

Finally, independent public researchers must be able to access and work without hindrance on the mechanisms by which data and information circulate within the platform. In August 2021, Facebook decided, in the name of privacy protection (sic), to cut off data access for a team of researchers at New York University who were working on the problem of political advertising on the platform, seeking to understand and document who paid for its distribution and above all (something Facebook has always refused to make public) on what criteria the people targeted by these ads were selected. There exists absolutely no independent scientific study (that is, one none of whose authors is affiliated with or directly employed by Facebook), built on the firm's anonymized and/or randomized data, on the core workings of a medium that reaches nearly 2.8 billion human beings every month. This is at once an obvious democratic aberration and perhaps the first of all the scandals touching this firm.

Let us take stock and summarize.

We must open up and place under public deliberation the part of the algorithmic code that belongs to classic editorializing logic, to permit and above all to guarantee a vital form of civic integrity.

We must break the chains of viral contamination that are at the root of most of the problems of harassment, disinformation and hate speech overall. It is the only way to limit the impact of artificial, toxic and unnecessary interactions and engagement.

We must allow independent public researchers to work without hindrance on the mechanisms by which data and information circulate within the platform. This is quite simply a question of ethics, notably as regards the mechanisms and the artificial intelligence technologies that structure this firm.

Civic integrity? Well, well: that is also the name of the Facebook team to which... Frances Haugen belonged. "Civic Integrity."

Artificial and toxic engagement? Well, well: that is also the name of the Facebook team to which Sophie Zhang belonged before being fired for discovering that networks of abusive political manipulation and of harassment of opposition parties were using Facebook in a coordinated way in some thirty countries, and for wanting to make that information public. "Fake Engagement."

Ethics and artificial intelligence? Well, well: that is also the name of the Google team to which Timnit Gebru belonged before being fired following the publication of a research paper in which she demonstrated the sexist and racist biases at the heart of the search engine's technologies. "Ethics in Artificial Intelligence."

To know what is really dysfunctional inside the GAFAM and how to fix it, one need only look at the names of the research teams from which came the whistleblowers these firms recently fired.

One More Thing.

What, really, is the nature of Facebook, that it can occupy the worrying place it holds today? In Les Chants de Maldoror, Isidore Ducasse, Comte de Lautréamont, wrote of a young man of sixteen years and four months who was "beautiful (...) like the chance encounter, on a dissecting table, of a sewing machine and an umbrella." When his platform was exactly the same age, Zuckerberg declared in February 2020: "Treat us like something between a Telco and a Newspaper."

Facebook is also that dissecting table of our moods and behaviors, that chance encounter between the sewing machine of the interactions that hold us together and hold us back separately, and an umbrella that sometimes shelters us and often isolates us, preventing us from seeing. And by dint of being neither quite a telecom operator nor fully a press title, Facebook would in the end like to be ungraspable, escaping both the commercial regulation of the former and the professional ethics of the latter.

Of course, Facebook alone neither sums up nor bounds the whole of the problems (or solutions) our democracies face today. But it is an entirely unprecedented social experiment, currently bearing on more than half of connected humanity. Unprecedented in its numbers, but unprecedented also, and perhaps above all, in the status of an experiment conducted at once in vivo, since there is no watertight boundary between what happens and is said on Facebook and outside it, and in vitro, since each message, each interaction and each of our data points feeds forms of structural control that they nourish in return, and since it is possible at any moment, for the firm and the firm alone, to isolate them from their usual environment, like so many components of a social or individual organism, for purposes of analysis and monetization. A social experiment in the image of Gibson's Cyberspace: "a consensual hallucination experienced daily by billions of legitimate operators, in every nation."

It is this social experiment, as much as this consensual hallucination, that must be kept, always and at all times, under the control of independent public expertise and supervision.


[Disclaimer: this commissioned article was published a little over three months ago, on 6 December 2021, in the magazine AOC Media. Its author (me) was paid for it in exchange for a three-month "embargo," it being understood that from its publication on AOC Media it was accessible free of charge in exchange for an email address (which entitles the reader to three free articles per month).]

Permalink
June 24, 2023 at 4:12:36 PM GMT+2