
Cyberpunk is the way - Lexicon
https://www.linelo.fr/shaarli/#0WAtsw

  • About This

Cyberpunk is the way - Lexicon

A few definitions of the tags used here.

Gig Worker

FR

A platform worker is a worker who is self-employed, salaried, operating under another status, or even without any status, and who sells a service via a digital platform.


ENG

Gig workers are independent contractors, online platform workers, contract firm workers, on-call workers, and temporary workers. Gig workers enter into formal agreements with on-demand companies to provide services to the company's clients.


Datafication

FR

Datafication is a technological trend that transforms many aspects of our lives into data, treated as information enabling the creation of a new form of value. The term was introduced in 2013 by Kenneth Cukier and Viktor Mayer-Schönberger in a review of "big data" processes in business and the social sciences, where the datafication of a phenomenon is presented as its quantified transcription so that it can be studied.


ENG

Datafication is a technological trend turning many aspects of our life into data, which is subsequently transformed into information realised as a new form of value. Kenneth Cukier and Viktor Mayer-Schönberger introduced the term datafication to the broader lexicon in 2013. Up until then, datafication had been associated with the analysis of representations of our lives captured through data, but not on the present scale. This change was primarily due to the impact of big data and the computational opportunities afforded to predictive analytics.


Enshittification

FR

The phenomenon of online platforms progressively degrading the quality of their services, often by promoting advertisements and sponsored content, in order to increase their profits.


ENG

The phenomenon of online platforms gradually degrading the quality of their services, often by promoting advertisements and sponsored content, in order to increase profits.


Transhumanism

FR

Transhumanism is an international cultural and intellectual movement advocating the use of science and technology to improve the human condition by augmenting human beings' physical and mental capacities and by eliminating aging and death.


ENG

Transhumanism is a philosophical and intellectual movement which advocates the enhancement of the human condition by developing and making widely available sophisticated technologies that can greatly enhance longevity and cognition.


Technopaganism

FR

Technopaganism is the merging of neopaganism and magical ritual with digital technologies. This may involve using technology merely as an aid, such as video conferencing, or it may be a worship of the technology itself.


ENG

Technopaganism is the merging of neopaganism and magical ritual with digital technologies. This may be through the use of technology merely as an aid, such as video conferencing, or it may be a worship of the technology itself.


Permalink
June 20, 2023 at 11:07:01 PM GMT+2

Cyberpunk is the way - About
https://fr.wikipedia.org/wiki/Cyberpunk

  • About This

FR


Cyberpunk (a combination of the words cybernetics and punk) is a genre of science fiction closely related to dystopia and hard science fiction. It often depicts a near future with a technologically advanced society (particularly in information technology and cybernetics).

According to Bruce Sterling, "cyberpunk comes from a world where the computer freak and the rocker come together, a culture broth where the writhings of genetic chains interlock."

Cyberpunk worlds are steeped in violence and pessimism; they are often grim, sometimes ironically sardonic; their characters are disillusioned antiheroes, cynical and greedy.

Cyberpunk has since spread its themes across many media, notably comics, film, music, video games, and role-playing games.

ENG


Cyberpunk is a subgenre of science fiction in a dystopian futuristic setting that tends to focus on a "combination of lowlife and high tech", featuring futuristic technological and scientific achievements, such as artificial intelligence and cybernetics, juxtaposed with societal collapse, dystopia or decay.

Much of cyberpunk is rooted in the New Wave science fiction movement of the 1960s and 1970s, when writers like Philip K. Dick, Michael Moorcock, Roger Zelazny, John Brunner, J. G. Ballard, Philip José Farmer and Harlan Ellison examined the impact of drug culture, technology, and the sexual revolution while avoiding the utopian tendencies of earlier science fiction.

Permalink
June 20, 2023 at 11:02:41 PM GMT+2

Did your car witness a crime? Police may be coming for your Tesla
https://www.sfchronicle.com/crime/article/tesla-sentry-mode-police-evidence-19731000.php

  • Spying
  • Police State

Did your car witness a crime? Bay Area police may be coming for your Tesla — and they might tow it

Tesla cameras show the multiple angles that are recorded around the vehicle. Teslas are now routinely deployed in police investigations if they were parked anywhere near a crime scene — even if the car wasn’t involved.

By Rachel Swan, Reporter Aug 31, 2024

A Canadian tourist was visiting Oakland recently when he had to talk someone out of taking his Tesla from a hotel parking lot.

This was no thief. It was the Oakland Police Department. Turns out, the Tesla may have witnessed a homicide.

In Oakland and beyond, police called to crime scenes are increasingly looking for more than shell casings and fingerprints. They’re scanning for Teslas parked nearby, hoping their unique outward-facing cameras captured key evidence. And, the Chronicle has found, they’re even resorting to obtaining warrants to tow the cars to ensure they don’t lose the video.

The trend offers a window into how mass surveillance — the expansion of cameras as well as license-plate scanners, security doorbells and precise cellphone tracking — is changing crime-fighting. While few cars have camera systems similar to Teslas, that could change rapidly, especially as the technology in vehicles continues to improve.

“We have all these mobile video devices floating around,” said Sgt. Ben Therriault, president of the Richmond Police Officers Association.

Therriault said he and other officers now frequently seek video from bystander Teslas, and usually get the owners’ consent to download it without having to serve a warrant. Still, he said, tows are sometimes necessary, if police can’t locate a Tesla owner and need the video “to pursue all leads.”

“It’s the most drastic thing you could do,” he acknowledged.

In at least three instances in July and August, Oakland police sought to tow a Tesla into evidence to obtain — via a second court order — its stored video. Officers cited the cars' "Sentry Mode" feature, a system of cameras and sensors that records noise and movement around the vehicle when it is empty and locked, storing it on a USB drive in the glove box.

The case involving the Canadian tourist happened July 1 outside the La Quinta Inn near the Oakland airport. When officers arrived at the parking lot shortly after midnight, they found a man in an RV suffering from gunshot and stab wounds. He was later pronounced dead at Highland Hospital.

Officers also noticed a gray Tesla parked in the stall opposite the RV.

“I know that Tesla vehicles contain external surveillance cameras in order to protect their drivers from theft and/or liability in accidents,” officer Kevin Godchaux wrote in the search warrant affidavit obtained by the Chronicle, noting that the vehicle was perfectly positioned to document what happened.

“Based on this information,” Godchaux wrote, “I respectfully request that a warrant is authorized to seize this vehicle from the La Quinta Inn parking lot so this vehicle’s surveillance footage may be searched via an additional search warrant at a secure location.”

Oakland police officials did not respond to a request for comment, nor did the Tesla owner back in Calgary. A source familiar with the investigation said the owner showed up as crews were loading his car onto a tow truck and intervened. When he volunteered the video, police released his vehicle.

There’s no guarantee that a Tesla will record a crime that occurs near it. That depends on factors including what mode the car is in and whether the system is triggered. But police who view Teslas as rolling surveillance security cameras aren’t taking chances.

“When you have these cars on the roads that are constantly capturing information, even when they’re parked, the police can look to them as a resource,” said Saira Hussain, a staff attorney at the Electronic Frontier Foundation who specializes in government surveillance. “That obviously puts third parties — people who are not involved at all — in the crosshairs of investigations.”

Similar issues have come up with self-driving cars now on the road in San Francisco and other cities, which are also equipped with sophisticated video capability, Hussain noted. But in those cases, police subpoena the tech company — typically Waymo — because it owns the cars and the data. Tesla drivers, by contrast, get served individually because they control their own camera footage.

In recent years, Tesla camera footage has played a variety of roles in police investigations, most commonly offering evidence after crashes but also documenting crimes perpetrated on a car’s owners or identifying a burglar who enters a car. The use of court orders related to crimes that occur near a Tesla appears to be a newer wrinkle.

On July 13 in Oakland, an argument between several people outside a beauty supply shop at 40th Street and Telegraph Avenue escalated when five of them drew guns and began shooting at each other, police said, killing a 27-year-old woman.

Oakland police officer Roland Aguilar obtained a search warrant to tow three vehicles, including a Tesla Model X with Kansas plates, writing in a court affidavit, “This video could provide valuable information relevant to the ongoing investigation.”

Weeks later, two men were charged with murder and a raft of other felonies in connection with the shooting. Probable cause declarations for their arrests referred to “high-definition quality surveillance footage” of the homicide, without specifically mentioning the Tesla. Police had also gathered video from a nearby market, the affidavit said.

Another search warrant affidavit from Oakland police described an incident on Aug. 12, in which the city’s gunfire detection system prompted officers to rush to 13th and Center streets in West Oakland. There, they found a man with a gunshot wound to the head in the back seat of his girlfriend’s Tesla. The girlfriend, also in the car, gave officers a “partial statement,” the affidavit said. Officers took a bloody cell phone she was carrying and allowed her to leave.

Though officers found no weapons inside the Tesla, they towed it as evidence, believing its cameras may have recorded the crime, according to the affidavit. Paramedics drove the victim to Highland Hospital, where he was listed in critical condition. No arrests have been made in the case.

Tesla video could be crucial as well in prosecuting a young man over a homicide that occurred in January in San Jose, said Sean Webby, a spokesperson for the Santa Clara County district attorney.

Webby said the Tesla was "not associated with either the suspect or the victim" but happened to be parked nearby when the driver of an Infiniti intentionally ran over an already wounded man and then kept going.

Permalink
September 9, 2024 at 7:33:32 PM GMT+2

How Russia influences elections in France with an army of bots
https://www.futura-sciences.com/tech/actualites/guerre-futur-russie-influence-elections-france-grace-armee-bots-114424/

  • War
  • Disinformation War
  • Media Manipulation
  • Social Network
  • PsyOps

How Russia influences elections in France with an army of bots

Dr. Chavalarias's study shows that influence campaigns on X seek to weaken the republican front to the benefit of the far right. © Login, Adobe Stock (AI-generated image)

A study by a CNRS researcher reveals the destabilization operations conducted by the Kremlin on social networks to boost the far right in France. Are the narratives pushed by Russia enough to manipulate voters' opinions?

By Sylvain Biget, July 5, 2024, for Futura-Sciences

"The drop of water hollows the stone not by force, but by falling often." This old KGB motto remains current with a President Putin elected for life and betting on the long term to influence, step by step, elections in Western democracies. The battlefield of Russian interference operations is social media, and this is nothing new. The driving idea is to weaken the European Union and NATO by manipulating the opinions of member countries' populations.

To achieve this, the Kremlin seeks to help elect leaders less hostile to the Russian regime. This is the case with the French far right, and notably the RN, which benefited from a Russian loan for its campaigns and has systematically opposed or abstained whenever it came to supporting the Ukrainian resistance or imposing sanctions on Russia following the invasion of Ukraine.

The tactic has been known for a long time: as early as the 2017 presidential elections, operations to hack the email accounts of Emmanuel Macron's campaign team were carried out in an attempt to discredit the candidate. The maneuver was backed by armies of bots running intensive astroturfing campaigns to amplify a narrative favorable to the election of Marine Le Pen. Today, a recent study by CNRS researcher David Chavalarias analyzes the destabilization techniques used by the Kremlin to manipulate voters into voting for the RN in the snap legislative elections.

The author explains that as early as 2023, during the campaigns for the European elections, targeted advertisements were bought on Facebook to push messages directed against governments or against Western countries' support for the Ukrainian resistance. These campaigns intensified as the elections approached. For example, fake recruitment ads for French army soldiers to go fight in Ukraine circulated widely. They were meant to reinforce Emmanuel Macron's remarks about sending troops to Ukraine, in order to cast him as a warmonger.

A graphical representation of the political communities. Parties on the left are shown in shades of red, those on the right in blue. The filaments represent the flows of exchanges and shared posts between X accounts. © CNRS

An overestimated capacity for harm?

But it is on the X network that the maneuvers of Russian bots and propagators of pro-Russian narratives are most abundant. Since 2016, an army of fake accounts has regularly spread concepts built on divisive keywords. This is how the term "islamo-leftist", which came out of nowhere, was propelled, picked up by ministers, and became the subject of debate.

More broadly, according to the researcher, the Kremlin has employed three strategies: pushing the normalization of the far right, ensuring that the parties of the republican front can no longer get along, and above all provoking voters' rejection of moderate parties so that they vote for the far right. To achieve this, Russian bots also ride the news cycle, notably the war in Gaza, by spreading horrific images of the Hamas attack of October 7.

This is a way to stoke Islamophobia, point the finger at the antisemitism of certain parties, and inflame radical rhetoric between the far right and the far left. To heighten the effect, bots promoting political Islamism were created. These maneuvers, which are still ongoing, aim to build support for the RN among as many voters as possible ahead of this Sunday's second round of the legislative elections.

While the researcher's analysis is solid, it remains to be seen whether this Kremlin strategy really weighs on voters' choices. There is as yet no conclusive study on the concrete effects of this kind of manipulation on the public. Without underestimating the power of these operations, the rise of the Bolloré group's network of conservative media alone can explain a great deal.

Permalink
July 7, 2024 at 4:05:53 PM GMT+2

Testimonies: Inside the hell of social media "cleaners"
https://www.asahi.com/articles/ASS4W4287S4WUTIL01YM.html?iref=pc_ss_date_article

  • Gig Worker
  • Artificial intelligence
  • Social Network
  • Digital Society
  • Censorship

Testimonies: Inside the hell of social media "cleaners"

As the tech giants try to tighten control over their platforms, "content moderators" are exposed to countless violent or hateful posts as part of their work. The Japanese daily Asahi Shimbun went to meet them.

Published June 27, 2024 at 5:00 a.m. Shiori Tabuchi, Azusa Ushio

Such videos proliferate on the web. Violence, threats, sexual acts… Yet the moderators have only two or three minutes to decide whether or not to delete them.

We are in a building in a city in Southeast Asia. In one room, sitting silently at their computers with headsets on, content moderators, nicknamed "social media cleaners", delete internet posts deemed inappropriate.

Among them, a Japanese man employed by a subcontractor of a tech giant that operates a video-sharing site agreed to answer our questions, on condition that neither his name nor his age be disclosed:

"I am forbidden to talk in detail about the content of my work."

He works in eight-hour shifts with teams organized by language, for a monthly salary of about 200,000 yen [1,200 euros]. Bound by strict confidentiality, he is not allowed to bring his smartphone into the room, or even a simple pen.

When he arrives at his station, he turns on his two screens. On one, a video plays at high speed. The other displays the many moderation rules to apply, a document that seems to run to a thousand pages. When he spots prohibited content, he classifies the video in a category such as "violence", "porn", "harassment" or "hate", then finds the rule it violates and copies it into the comments field. "The essential thing is to find it as fast as possible," he explains.

When he finishes checking one video, the next appears. Besides content flagged by users, "there are probably posts detected automatically by artificial intelligence (AI), but I don't know how they are chosen."

Cat and mouse

If a video shows a person beaten bloody or contains threats like "I'm going to kill him," he deletes it immediately. When in doubt, he sends the video to a specialized department. Of the roughly 80 videos he views per day, he deletes about three. There are also about ten he finds difficult to judge. He does not know how many departments there are in total, or who makes the final decisions. "I proceed mechanically," he says.

He remembers a spike in activity after the assassination by gunfire of former Prime Minister Shinzo Abe [in July 2022]. Images of the scene were posted again and again. "I was deleting the unblurred videos practically one after another."

The moderation rules are numerous and detailed, and changes are announced each week at morning meetings. A database of forbidden words is also provided. At the end of each workday, the moderators take a test assessing their knowledge of the latest rules: those who score poorly see their pay reduced.

Deleted videos are frequently reposted, and some content slips through the net. Our moderator is aware of the criticism:

"We do our best, but it's a game of cat and mouse. We can't delete every video. The ones that aren't flagged stay up."

The tech giant that runs this moderation service used to argue that it merely provided a "venue" for expression and was not responsible for the content posted there. But the proliferation of harmful posts forced it to react and strengthen its monitoring.

The Digital Services Act (DSA), adopted by the European Union (EU), now obliges large internet platforms to remove harmful posts, notably discriminatory content and false information. While many are removed automatically by AI, some require human intervention. According to the reports the European Commission asked the tech giants to submit last October, Facebook removed nearly 47 million pieces of rule-violating content in Europe over the five months following late April 2023. Of these, 2.83 million, or 6%, were removed by moderators.

"Soldiers of the networks"

Facebook employs about 15,000 moderators and X about 2,300. TikTok has about 40,000, tasked notably with checking popular videos that exceed a certain number of views and removing those that pose problems.

"Moderators are the soldiers working in the shadows of social media," says Kauna Malgwi, 30, who now lives in Abuja, the capital of Nigeria. Five years ago, a single mother in a precarious situation, she left to study in Kenya. There she accepted what was presented as an "interpreter position in a customer service department" using Hausa, one of the most widely spoken languages in West Africa. In reality, she found herself a moderator for Meta, which operates Facebook and Instagram. Alongside her graduate studies, for about four years until March 2023, she worked nine hours a day, five days a week, for the Kenyan branch of a subcontractor of the American tech giant.

A traumatic experience

The first video she viewed showed a man falling from the 15th floor of a building. At the appalling sight of the body hitting the ground, she jumped out of her chair. She had to fill in a pyramid-shaped questionnaire listing grounds for deletion from top to bottom. After answering no to the first question, "Are there naked bodies?", she checked the boxes "Are there visible organs?" and "Is there blood?"

Sexual assaults on toddlers, executions by extremist groups, suicides by gunshot… Every day she examined a thousand videos, detected by AI or flagged by users, with a maximum of fifty-five seconds per video to decide whether or not to delete each one.

She also deleted racist texts and other hate messages containing specific words.

"It wasn't just text. For example, a drawing showing an Asian man and a monkey side by side with the caption 'two brothers' had to be deleted."

She even deleted content posted in Southeast Asia, several thousand kilometers away.

She earned 60,000 Kenyan shillings (about 400 euros) per month, which corresponds to the average monthly income in Kenya. But she suffered from both insomnia and panic disorder, which sent her to the hospital several times.

Confidentiality agreements did not even allow her to confide in her family. Her colleagues, the only ones with whom she could share her feelings, smoked cannabis during breaks to escape reality. Some even admitted to contemplating suicide. "It is certainly important work, protecting the many users of these institutions that social networks have become, but still…" Even today, she sometimes cries when she thinks back to the images she saw.

Permalink
June 27, 2024 at 10:32:53 PM GMT+2

EU Council has withdrawn the vote on Chat Control
https://stackdiary.com/eu-council-has-withdrawn-the-vote-on-chat-control/

  • Police State
  • global spying
  • Spying
  • Politics

EU Council has withdrawn the vote on Chat Control

By Alex Ivanovs Published 20/06/2024

The EU Council and its participants have decided to withdraw the vote on the contentious Chat Control plan proposed by Belgium, which currently holds the EU Council presidency.

According to Netzpolitik (German), “The EU Council did not make a decision on chat control today, as the agenda item was removed due to the lack of a majority, confirmed by Council and member state spokespersons”.

Belgium’s draft law, which was supposed to be adopted as the Council’s negotiating position, was instead postponed indefinitely. Although the Committee of Permanent Representatives meets weekly, Belgium cannot currently present a proposal that would gain a majority. In July, the Council Presidency will transfer from Belgium to Hungary, which has stated its intention to advance negotiations on chat control as part of its work program.

At the start of 2022, the European Commission proposed monitoring all chat messages and other forms of digital communication among citizens. This initiative includes client-side scanning for end-to-end encrypted services, meaning all messages would be checked irrespective of suspicion.

The plan targets the detection of both known and unknown abusive material and grooming activities. Experts have cautioned that such measures are prone to generating numerous false positives, particularly when identifying unknown content, leading to innocent citizens being misidentified as senders of abusive material.

European legislation is formed through a trialogue process involving negotiations between the European Commission, the European Parliament, and the Council of Ministers. Initially, the European Parliament rejected the European Commission’s proposal and introduced its own, which, while still critical, excluded end-to-end encrypted services. However, Belgium’s new proposal reintroduced client-side scanning for these services, stipulating that users must consent to chat controls; otherwise, they would lose the ability to send photos, videos, and URLs.

This method, termed “upload moderation” by Belgium, has been criticized by opponents as merely a rebranding of the original concept.

Signal and other apps threaten to leave the EU if the proposal is enacted as law

Meredith Whittaker, president of the chat app Signal, has been vocal against these plans. She argues that implementing such measures within end-to-end encrypted communications fundamentally undermines encryption and introduces significant vulnerabilities in the digital infrastructure.

Whittaker emphasizes that these vulnerabilities have far-reaching global implications, not just within Europe. She has repeatedly highlighted the issue, stating, “There is no way to implement such proposals without fundamentally undermining encryption and introducing dangerous vulnerabilities.”

On June 17, Whittaker published an official position condemning the EU’s proposed “upload moderation” as a rebranding of client-side scanning that fundamentally undermines end-to-end encryption.

She emphasized that despite attempts to mask the dangers through marketing, these measures expose encrypted communications to mass surveillance, creating vulnerabilities exploitable by hackers and hostile nations. Whittaker urged a cessation of such rhetorical games, reiterating that any form of mandated mass scanning compromises encryption, thereby threatening global security and privacy at a critically unstable geopolitical moment.

The privacy messenger Threema published a blog post saying the EU’s proposed Chat Control bill represents a dangerous mass surveillance initiative that would undermine data security, violate privacy rights, and negatively impact professionals and minors.

Patrick Breyer, the outgoing MEP from the Pirate Party, raised concerns, noting that proponents of chat control have leveraged the period following the European elections, when attention is lower and the European Parliament is in transition, to advance their agenda. Breyer has called on European citizens to take action and urge their politicians to oppose the measures.

Edward Snowden, the NSA whistleblower, criticized the proposal, stating, “EU apparatchiks are trying to legislate a terrible mass surveillance measure, despite universal public opposition (no sane person wants this), by inventing a new word for it – upload moderation – and hoping no one finds out what it is until it’s too late.”

What happens next?

With the EU Council withdrawing the vote on the Chat Control proposal today, the legislative process faces new uncertainty. The proposal will return to the drawing board, as the European Commission[1] and the European Parliament continue to deliberate on the best way forward.

The discussions will resume after the summer, once the new Parliament is seated and Hungary assumes the Council presidency from Belgium in July. Hungary has already committed to developing a comprehensive legislative framework to prevent and combat online child sexual abuse and revising the directive against the sexual exploitation of children.

The forthcoming negotiations are anticipated to be highly contentious, especially since the European Parliament has firmly opposed any measures that would circumvent end-to-end encryption. The Member States and the Parliament have until April 2026 to agree. This deadline is crucial, as an existing exemption allowing social networks to self-moderate content will expire, potentially eliminating current safeguards against sharing sensitive images.

In the meantime, privacy advocates and digital rights organizations will likely continue to voice their concerns, urging EU citizens to remain vigilant and engaged in the debate over digital privacy and surveillance. The next steps will involve intense negotiations and potential revisions to address the complex issues at stake.

[footnote #1]: On June 20, at the European Data Protection Supervisor (EDPS) 20th anniversary summit, EU Commissioner for Justice Vera Jourová stated that the European Commission’s proposal for the Child Sexual Abuse Regulation (CSAR) would break encryption. This marks the first time the European Commission has publicly acknowledged that the CSAR proposal would compromise encryption, a significant departure from the stance maintained over the past three years by Home Affairs Commissioner Ylva Johansson, who consistently claimed that the proposal would not affect encryption.

Permalink
June 25, 2024 at 8:46:54 PM GMT+2

Deluge of ‘pink slime’ websites threaten to drown out truth with fake news in US election | US elections 2024 | The Guardian
https://www.theguardian.com/us-news/article/2024/jun/20/fake-news-websites-us-election

  • Politics
  • Artificial intelligence
  • Disinformation War
  • Fakeness

Deluge of ‘pink slime’ websites threaten to drown out truth with fake news in US election

US sites pushing misinformation are proliferating, aiming to look like reliable sources as local newspapers close down

Eric Berger Thu 20 Jun 2024 12.00 CEST

Political groups on the right and left are using fake news websites designed to look like reliable sources of information to fill the void left by the demise of local newspapers, raising fears of the impact that they might have during the United States’ bitterly fought 2024 election.

Some media experts are concerned that the so-called pink slime websites, often funded domestically, could prove at least as harmful to political discourse and voters’ faith in media and democracy as foreign disinformation efforts in the 2016 and 2020 presidential elections.

According to a recent report from NewsGuard, a company that aims to counter misinformation by studying and rating news websites, the websites are so prolific that “the odds are now better than 50-50 that if you see a news website purporting to cover local news, it’s fake.”

NewsGuard estimates that there are a staggering 1,265 such fake local news websites in the US – 4% more than the 1,213 daily newspapers still operating in the country.

“Actors on both sides of the political spectrum” feel “that what they are doing isn’t bad because all media is really biased against their side, or that they know actors on the other side are using these tactics and so they feel they need to,” said Matt Skibinski, general manager of NewsGuard, which determined that such sites now outnumber legitimate local news organizations. “It’s definitely contributed to partisanship and the erosion of trust in media; it’s also a symptom of those things.”

Pink slime websites, named after a meat byproduct, started at least as early as 2004 when Brian Timpone, a former television reporter who described himself as a “biased guy” and a Republican, started funding websites featuring names of cities, towns and regions like the Philly Leader and the South Alabama Times.

Timpone’s company, Metric Media, now operates more than 1,000 such websites and his private equity company receives funding from conservative political action committees, according to NewsGuard.

The Leader recently ran a story with the headline, “Rep Evans votes to count illegal aliens towards seats in Congress.”

In actuality, Representative Dwight Evans, a Democrat, did not vote to start counting undocumented immigrants in the 2030 census but rather against legislation that would have changed the way the country has conducted apportionment since 1790.

That sort of story is “standard practice for these outlets”, according to Tim Franklin, who leads Northwestern University’s Local News Initiative, which researches the industry.

“They will take something that maybe has just a morsel of truth to it and then twist it with their own partisan or ideological spin,” Franklin said. “They also tend to do it on issues like immigration or hot-button topics that they think will elicit an emotional response.”

A story published this month on the NW Arkansas News site had a headline on the front page that reported that the unemployment rate in 2021 in Madison county was 5.1% – even though there is much more recent data available. In April 2024, the local unemployment rate was 2.5%.

“Another tactic that we have seen across many of this category of sites is taking a news story that happened at some point and presenting it as if it just happened now, in a way that is misleading,” Skibinski said.

The left has also created websites designed to look like legitimate news organizations but actually shaped by Democratic supporters.

The liberal Courier Newsroom network operates websites in Arizona, Florida, Iowa, Michigan and Nevada, among other states, that – like the conservative pink slime sites – have innocuous-sounding names like the Copper Courier and Up North News. The Courier has run stories like “Gov Ducey Is Now the Most Unpopular Governor in America,” referring to Doug Ducey, the former Republican Arizona governor.

“In contrast, coverage of Democrats, including US President Joe Biden, Democratic Arizona Gov Katie Hobbs, and US Sen Mark Kelly of Arizona, is nearly always laudatory,” NewsGuard stated in a report about Courier coverage.

Tara McGowan, a Democratic strategist who founded the Courier Newsroom, has received funding from liberal donors like Reid Hoffman and George Soros, as well as groups associated with political action committees, according to NewsGuard.

“There are pink slime operations on both the right and the left. To me, the key is disclosure and transparency about ownership,” said Franklin.

In a statement, a spokesperson for the Courier said comparisons between its operations and rightwing pink slime groups were unfair and criticized NewsGuard’s methodology in comparing the two.

“Courier publishes award-winning, factual local news by talented journalists who live in the communities we cover, and our reporting is often cited by legacy media outlets. This is in stark contrast to the pink slime networks that pretend to have a local presence but crank out low-quality fake news with no bylines and no accountability. Courier is proudly transparent about our pro-democracy values, and we carry on the respected American tradition of advocacy journalism,” the spokesperson said.

While both the left and the right have invested in the pink slime websites, there are differences in the owners’ approaches, according to Skibinski.

The right-wing networks have created more sites “that are probably getting less attention per site, and on the left, there is a smaller number of sites, but they are more strategic about getting attention to those sites on Facebook and elsewhere”, Skibinski said. “I don’t know that we can quantify whether one is more impactful than the other.”

Artificial intelligence could also help site operators quickly generate stories and create fake images.

“The technology underlying artificial intelligence is now becoming more accessible to malign actors,” said Kathleen Hall Jamieson, a University of Pennsylvania communications professor and director of the Annenberg Public Policy Center, which publishes Factcheck.org. “The capacity to create false images is very high, but also there is a capacity to detect the images that is emerging very rapidly. The question is, will it emerge rapidly with enough capacity?”

Still, it’s not clear whether these websites are effective. Stanford University reported in a 2023 study that engagement with pink slime websites was “relatively low” and found little evidence that living “in a news desert made people more likely to consume pink slime”.

The Philly Leader and the NW Arkansas News both link only to Facebook accounts on their websites and have fewer than 450 followers each. Meanwhile, the Copper Courier and Up North News have accounts on all the major platforms and a total of about 150,000 followers on Facebook.

Franklin said he thinks that a lot of people don’t actually click links on social media posts to visit the website.

“The goal of some of these operators is not to get traffic directly to their site, but it’s to go viral on social media,” he said.

Republican lawmakers and leaders of the conservative news sites the Daily Wire and the Federalist have also filed a lawsuit and launched investigations accusing NewsGuard of helping the federal government censor right-leaning media. The defense department hired the company strictly to counter “disinformation efforts by Russian, Chinese and Iranian government-linked operations targeting Americans and our allies”, Gordon Crovitz, the former Wall Street Journal publisher who co-founded NewsGuard, told the Hill in response to a House oversight committee investigation. “We look forward to clarifying the misunderstanding by the committee about our work for the Defense Department.”

To counter the flood of misinformation, social media companies must take a more active role in monitoring such content, according to Franklin and Skibinski.

“The biggest solution to this kind of site would be for the social media platforms to take more responsibility in terms of showing context to the user about sources that could be their own context. It could be data from third parties, like what we do,” said Skibinski.

Franklin would like to see a national media literacy campaign. States around the country have passed laws requiring such education in schools.

Franklin also hopes that legitimate local news could rebound. The MacArthur Foundation and other donors last year pledged $500m to help local outlets.

“I actually have more optimism now than I had a few years ago,” Franklin said. “We’re in the midst of historic changes in how people consume news and how it’s produced and how it’s distributed and how it’s paid for, but I think there’s still demand for local news, and that’s kind of where it all starts.”

Permalink
June 25, 2024 at 8:43:55 PM GMT+2

Scientists Are Getting Eerily Good at Using WiFi to 'See' People Through Walls in Detail
https://www.vice.com/en/article/y3p7xj/scientists-are-getting-eerily-good-at-using-wifi-to-see-people-through-walls-in-detail

  • Artificial intelligence
  • global spying

Scientists Are Getting Eerily Good at Using WiFi to 'See' People Through Walls in Detail

The signals from WiFi can be used to map a human body, according to a new paper.

January 17, 2023, 7:50pm

Researchers at Carnegie Mellon University developed a method for detecting the three dimensional shape and movements of human bodies in a room, using only WiFi routers.

To do this, they used DensePose, a system for mapping all of the pixels on the surface of a human body in a photo. DensePose was developed by London-based researchers and Facebook’s AI researchers. From there, according to their preprint paper recently uploaded to arXiv, they developed a deep neural network that maps the phase and amplitude of WiFi signals sent and received by routers to coordinates on human bodies.

Researchers have been working on “seeing” people without using cameras or expensive LiDAR hardware for years. In 2013, a team of researchers at MIT found a way to use cell phone signals to see through walls; in 2018, another MIT team used WiFi to detect people in another room and translate their movements to walking stick-figures.

The Carnegie Mellon researchers wrote that they believe WiFi signals “can serve as a ubiquitous substitute” for normal RGB cameras, when it comes to “sensing” people in a room. Using WiFi, they wrote, overcomes obstacles like poor lighting and occlusion that regular camera lenses face.

Interestingly, they position this advancement as progress in privacy rights; “In addition, they protect individuals’ privacy and the required equipment can be bought at a reasonable price,” they wrote. “In fact, most households in developed countries already have WiFi at home, and this technology may be scaled to monitor the well-being of elder people or just identify suspicious behaviors at home.”

They don’t mention what “suspicious behaviors” might include, if this technology ever hits the mainstream market. But considering companies like Amazon are trying to put Ring camera drones inside our houses, it’s easy to imagine how widespread WiFi-enabled human-detection could be a force for good—or yet another exploitation of all of our privacy.

Permalink
June 23, 2024 at 2:39:14 PM GMT+2

DensePose From WiFi - 2301.00250v1.pdf
https://arxiv.org/pdf/2301.00250

  • Artificial intelligence
  • global spying

DensePose From WiFi

Jiaqi Geng, Dong Huang, Fernando De la Torre 31 Dec 2022

Abstract

Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns.

To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
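To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of the input/output mapping it describes: WiFi channel state information (CSI) is decomposed into amplitude and phase, which a network maps to (u, v) coordinates for 24 body regions. The antenna counts, subcarrier count, layer sizes, and untrained random weights are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical CSI tensor: 3 transmit x 3 receive antennas, 30 subcarriers.
# Real CSI is complex-valued; amplitude and phase are what the paper's
# network consumes as input.
rng = np.random.default_rng(0)
csi = rng.normal(size=(3, 3, 30)) + 1j * rng.normal(size=(3, 3, 30))

# Flatten amplitude and phase into one feature vector (2 * 3*3*30 = 540 values).
features = np.concatenate([np.abs(csi).ravel(), np.angle(csi).ravel()])

def tiny_mlp(x, n_regions=24, hidden=64, seed=1):
    """One-hidden-layer MLP mapping CSI features to a (u, v) pair per body
    region. Weights are random: an untrained stand-in for the real model."""
    w_rng = np.random.default_rng(seed)
    w1 = w_rng.normal(scale=0.1, size=(x.size, hidden))
    w2 = w_rng.normal(scale=0.1, size=(hidden, n_regions * 2))
    h = np.tanh(x @ w1)
    uv = 1.0 / (1.0 + np.exp(-(h @ w2)))  # sigmoid keeps UV in (0, 1)
    return uv.reshape(n_regions, 2)

uv = tiny_mlp(features)
print(uv.shape)  # (24, 2): one UV coordinate pair per body region
```

The actual paper uses convolutional encoders and a DensePose-style decoder rather than a plain MLP; the sketch only illustrates the shape of the problem, signal features in, per-region UV coordinates out.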

DensePose

Official website of DensePose

Permalink
June 23, 2024 at 2:35:58 PM GMT+2

McDonald's AI Drive-Thru debacle is a warning to us all | Creative Bloq
https://www.creativebloq.com/design/branding/mcdonalds-ai-drive-thru-debacle-is-a-warning-to-us-all

  • Artificial intelligence
  • Big Corpo
  • Stupid AI

McDonald's AI Drive-Thru debacle is a warning to us all

By Joe Foley published 5 hours ago

Did it not test this?


We've mentioned before the risks for brands jumping on the AI bandwagon too quickly. And that extends beyond using AI image generators to any kind of customer-facing application, as McDonald's may have learned from its AI Drive Thru fiasco.

AI technology is advancing rapidly but remains in a state of relative infancy, and in many cases it just isn't good enough yet to implement without causing significant friction. The world's biggest fast-food brand has sensibly decided not to extend the contract on an AI voice recognition service and has told franchisees to remove the tech, but did it not think it should at least test it before it became the subject of viral videos?

Developed by IBM, McDonald's AI ordering system was implemented in over 100 McDonald's locations in the US starting back in 2021. It was supposed to use voice recognition to process orders, but customers reported frequent frustrations, including quite spectacular order mix-ups, from bacon being added to ice cream to orders being hugely inflated.

In one video shared on TikTok with the caption "Fighting with McDonald's robot", the AI interpreted a woman's request for vanilla ice cream and a bottle of water to be an order for a caramel sundae and multiple sachets of ketchup and butter. In another, a customer inadvertently ordered 2,510 McNuggets Meals. That left a human attendant to have to reinput the order, rendering the AI a pointless frustration.

As reported by the trade publication Restaurant Business, McDonald's is removing the tech but remains determined to push forward with voice recognition technology to avoid having to employ humans to do the job of taking orders. The company said in a statement: "While there have been successes to date, we feel there is an opportunity to explore voice ordering solutions more broadly.

"After a thoughtful review, McDonald's has decided to end our current partnership with IBM on AOT (automated order taking) and the technology will be shut off in all restaurants currently testing it no later than 26 July, 2024."

This is far from the first case we've seen of experiments with AI resulting in a customer backlash. Lego is one of several brands to have apologised after using AI imagery. We've also seen branding agencies warn against AI washing, which is a tendency for companies to overstate their AI capabilities in order to make themselves look like part of the zeitgeist.

Permalink
June 21, 2024 at 11:06:45 PM GMT+2

Here lies the internet, murdered by generative AI
https://www.theintrinsicperspective.com/p/here-lies-the-internet-murdered-by

  • Artificial intelligence
  • Enshitification

Here lies the internet, murdered by generative AI

Corruption everywhere, even in YouTube's kids content

Erik Hoel Feb 27, 2024


Art for The Intrinsic Perspective is by Alexander Naughton

The amount of AI-generated content is beginning to overwhelm the internet. Or maybe a better term is pollute. Pollute its searches, its pages, its feeds, everywhere you look. I’ve been predicting that generative AI would have pernicious effects on our culture since 2019, but now everyone can feel it. Back then I called it the coming “semantic apocalypse.” Well, the semantic apocalypse is here, and you’re being affected by it, even if you don’t know it. A minor personal example: last year I published a nonfiction book, The World Behind the World, and now on Amazon I find this.


What, exactly, are these “workbooks” for my book? AI pollution. Synthetic trash heaps floating in the online ocean. The authors aren’t real people, some asshole just fed the manuscript into an AI and didn’t check when it spit out nonsensical summaries. But it doesn’t matter, does it? A poor sod will click on the $9.99 purchase one day, and that’s all that’s needed for this scam to be profitable since the process is now entirely automatable and costs only a few cents. Pretty much all published authors are affected by similar scams, or will be soon.

Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump. Google search? They often lead with fake AI-generated images amid the real things. Post on Twitter? Get replies from bots selling porn. But that’s just the obvious stuff. Look closely at the replies to any trending tweet and you’ll find dozens of AI-written summaries in response, cheery Wikipedia-style repeats of the original post, all just to farm engagement. AI models on Instagram accumulate hundreds of thousands of subscribers and people openly shill their services for creating them. AI musicians fill up YouTube and Spotify. Scientific papers are being AI-generated. AI images mix into historical research. This isn’t mentioning the personal impact too: from now on, every single woman who is a public figure will have to deal with the fact that deepfake porn of her is likely to be made. That’s insane.

And rather than this being pure skullduggery, people and institutions are willing to embrace low-quality AI-generated content, trying to shift the Overton window to make things like this acceptable:


That’s not hardball capitalism. That’s polluting our culture for your own minor profit. It’s not morally legitimate for the exact same reasons that polluting a river for a competitive edge is not legitimate. Yet name-brand media outlets are embracing generative AI just like SEO-spammers are, for the same reasons.

E.g., investigative work at Futurism caught Sports Illustrated red-handed using AI-generated articles written by fake writers. Meet Drew Ortiz.


He doesn’t exist. That face is an AI-generated portrait, which was previously listed for sale on a website. As Futurism describes:

Ortiz isn't the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content…

"At the bottom [of the page] there would be a photo of a person and some fake description of them like, 'oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.' Stuff like that," they continued. "It's just crazy."

This isn’t what everyone feared, which is AI replacing humans by being better—it’s replacing them because AI is so much cheaper. Sports Illustrated was not producing human-quality level content with these methods, but it was still profitable.

The AI authors' writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball "can be a little tricky to get into, especially without an actual ball to practice with."

Sports Illustrated, in a classy move, deleted all the evidence. Drew was replaced by Sora Tanaka, bearing a face also listed for sale on the same website with the description of a “joyful asian young-adult female with long brown hair and brown eyes.”


Given that even prestigious outlets like The Guardian refuse to put any clear limits on their use of AI, if you notice odd turns of phrase or low-quality articles, the likelihood that they’re written by an AI, or with AI-assistance, is now high.

Sadly, the people affected the most by generative AI are the ones who can’t defend themselves. Because they don’t even know what AI is. Yet we’ve abandoned them to swim in polluted information currents. I’m talking, unfortunately, about toddlers. Because let me introduce you to…

the hell that is AI-generated children’s YouTube content.

YouTube for kids is quickly becoming a stream of synthetic content. Much of it now consists of wooden digital characters interacting in short nonsensical clips without continuity or purpose. Toddlers are forced to sit and watch this runoff because no one is paying attention. And the toddlers themselves can’t discern that characters come and go, that the plots don’t make sense, and that it’s all just incoherent dream-slop. The titles don’t match the actual content, and the titles are all that parents likely check, because they grew up in a culture where if a YouTube video said BABY LEARNING VIDEOS and had a million views it was likely okay. Now, some of the nonsense AI-generated videos aimed at toddlers have tens of millions of views.

Here’s a behind-the-scenes video on a single channel that made 1.2 million dollars via AI-generated “educational content” aimed at toddlers.

As the video says:

These kids, when they watch these kind of videos, they watch them over and over and over again.

They aren’t confessing. They’re bragging. And the particular channel they focus on isn’t even the worst offender—at least that channel’s content mostly matches the subheadings and titles, even if the videos are jerky, strange, off-putting, repetitious, clearly inhuman. Other channels, which are also obviously AI-generated, get worse and worse. Here’s a “kid’s education” channel that is AI-generated (took about one minute to find) with 11.7 million subscribers.

They don’t use proper English, and after quickly going through some shapes like the initial video title promises (albeit doing it in a way that makes you feel like you’re going insane) the rest of the video devolves into randomly-generated rote tasks, eerie interactions, more incorrect grammar, and uncanny musical interludes of songs that serve no purpose but to pad the time. It is the creation of an alien mind.

Here’s an example of the next frontier: completely start-to-finish AI-generated music videos for toddlers. Below is a how-to video for these new techniques. The result? Nightmarish parrots with twisted double-beaks and four mutated eyes singing artificial howls from beyond. Click and behold (or don’t, if you want to sleep tonight).

All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume. There’s no other word but dystopian. Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc) that the human mind actually needs? We’re conducting this experiment live. For the first time in history developing brains are being fed choppy low-grade and cheaply-produced synthetic data created en masse by generative AI, instead of being fed with real human culture. No one knows the effects, and no one appears to care. Especially not the companies, because…

OpenAI has happily allowed pollution.

Why blame them, specifically? Well, first of all, their massive impact—e.g., most of the kids videos are built from scripts generated by ChatGPT. And more generally, what AI capabilities are considered okay to deploy has long been a standard set by OpenAI. Despite their supposed safety focus, OpenAI failed to foresee that its creations would thoroughly pollute the internet across all platforms and services. You can see this failure in how they assessed potential negative outcomes in the announcement of GPT-2 on their blog, back in 2019. While they did warn that these models could have serious longterm consequences for the information ecosystem, the specifics they were concerned with were things like:

Generate misleading news articles

Impersonate others online

Automate the production of abusive or faked content to post on social media

Automate the production of spam/phishing content

This may sound kind of in line with what’s happened, but if you read further, it becomes clear that what they meant by “faked content” was mainly malicious actors promoting misinformation, or the same shadowy malicious actors using AI to phish for passwords, etc.

These turned out to be only minor concerns compared to AI’s cultural pollution. OpenAI kept talking about “actors” when they should have been talking about “users.” Because it turns out, all AI-generated content is fake! Or it’s all kind of fake. AI-written websites, now sprouting up like an unstoppable invasive species, don’t necessarily have an intent to mislead; it’s just that AI content is low-effort banalities generated for pennies, so you can SEO spam and do all sorts of manipulative games around search to attract eyeballs and ad revenue.

That is, the OpenAI team didn’t stop to think that regular users just generating mounds of AI-generated content on the internet would have negative effects very similar to those of widespread malicious use by intentional bad actors. Because there’s no clear distinction! The fact that OpenAI was both honestly worried about negative effects, and at the same time didn’t predict the enshittification of the internet they spearheaded, should make us extremely worried they will continue to miss the negative downstream effects of their increasingly intelligent models. They failed to foresee the floating mounds of clickbait garbage, the synthetic info-trash cities, all to collect clicks and eyeballs—even from innocent children who don’t know any better. And they won’t do anything to stop it, because…

AI pollution is a tragedy of the commons.

This term, “tragedy of the commons,” originated in the rising environmentalism of the 20th century, and would lead to many of the regulations that keep our cities free of smog and our rivers clean. Garrett Hardin, an ecologist and biologist, coined it in an article in [Science](https://math.uchicago.edu/~shmuel/Modeling/Hardin,%20Tragedy%20of%20the%20Commons.pdf) in 1968. The article is still instructively relevant. Hardin wrote:

An implicit and almost universal assumption of discussions published in professional and semipopular scientific journals is that the problem under discussion has a technical solution…

He goes on to discuss several problems for which there are no technical solutions, since rational actors will drive the system toward destruction via competition:

The tragedy of the commons develops in this way. Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

One central example of Hardin’s became instrumental to the environmental movement.

… the tragedy of the commons reappears in problems of pollution. Here it is not a question of taking something out of the commons, but of putting something in—sewage, or chemical, radioactive, and heat wastes into water; noxious and dangerous fumes into the air; and distracting and unpleasant advertising signs into the line of sight. The calculations of utility are much the same as before. The rational man finds that his share of the cost of the wastes he discharges into the commons is less than the cost of purifying his wastes before releasing them. Since this is true for everyone, we are locked into a system of "fouling our own nest," so long as we behave only as independent, rational, free-enterprisers.

We are currently fouling our own nests. Since the internet economy runs on eyeballs and clicks, the new ability of anyone, anywhere, to easily generate infinite low-quality content via AI is now remorselessly generating tragedy.

The solution, as Hardin noted, isn’t technical. You can’t detect AI outputs reliably anyway (another initial promise that OpenAI abandoned). The companies won’t self regulate, given their massive financial incentives. We need the equivalent of a Clean Air Act: a Clean Internet Act. We can’t just sit by and let human culture end up buried.

Luckily we’re on the cusp of all that incredibly futuristic technology promised by AI. Any day now, our GDP will start to rocket forward. In fact, soon we’ll cure all disease, even aging itself, and have robot butlers and Universal Basic Income and high-definition personalized entertainment. Who cares if toddlers had to watch inhuman runoff for a few billion years of viewing-time to make the future happen? It was all worth it. Right? Let’s wait a little bit longer. If we wait just a little longer utopia will surely come.

Permalink
June 20, 2024 at 11:26:04 PM GMT+2

Why the Internet Isn’t Fun Anymore | The New Yorker
https://www.newyorker.com/culture/infinite-scroll/why-the-internet-isnt-fun-anymore

  • Enshitification
  • Social Network

Why the Internet Isn’t Fun Anymore

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over.

By Kyle Chayka October 9, 2023

https://media.newyorker.com/photos/652058958ddd02a889a30583/master/w_2240,c_limit/Chayka_Internet_still.jpg

Lately on X, the platform formerly known as Twitter, my timeline is filled with vapid posts orbiting the same few topics like water whirlpooling down a drain. Last week, for instance, the chatter was dominated by talk of Taylor Swift’s romance with the football player Travis Kelce. If you tried to talk about anything else, the platform’s algorithmic feed seemed to sweep you into irrelevance. Users who pay for Elon Musk’s blue-check verification system now dominate the platform, often with far-right-wing commentary and outright disinformation; Musk rewards these users monetarily based on the engagement that their posts drive, regardless of their veracity. The decay of the system is apparent in the spread of fake news and mislabelled videos related to Hamas’s attack on Israel.

Elsewhere online, things are similarly bleak. Instagram’s feed pushes months-old posts and product ads instead of photos from friends. Google search is cluttered with junky results, and S.E.O. hackers have ruined the trick of adding “Reddit” to searches to find human-generated answers. Meanwhile, Facebook’s parent company, Meta, in its latest bid for relevance, is reportedly developing artificial-intelligence chatbots with various “sassy” personalities that will be added to its apps, including a role-playing D. & D. Dungeon Master based on Snoop Dogg. The prospect of interacting with such a character sounds about as appealing as texting with one of those spam bots that asks you if they have the right number.

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over. The precipitous decline of X is the bellwether for a new era of the Internet that simply feels less fun than it used to be. Remember having fun online? It meant stumbling onto a Web site you’d never imagined existed, receiving a meme you hadn’t already seen regurgitated a dozen times, and maybe even playing a little video game in your browser. These experiences don’t seem as readily available now as they were a decade ago. In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems. When those platforms decay, as Twitter has under Elon Musk, there is no other comparable platform in the ecosystem to replace them. A few alternative sites, including Bluesky and Discord, have sought to absorb disaffected Twitter users. But like sproutlings on the rain-forest floor, blocked by the canopy, online spaces that offer fresh experiences lack much room to grow.

One Twitter friend told me, of the platform’s current condition, “I’ve actually experienced quite a lot of grief over it.” It may seem strange to feel such wistfulness about a site that users habitually referred to as a “hellsite.” But I’ve heard the same from many others who once considered Twitter, for all its shortcomings, a vital social landscape. Some of them still tweet regularly, but their messages are less likely to surface in my Swift-heavy feed. Musk recently tweeted that the company’s algorithm “tries to optimize time spent on X” by, say, boosting reply chains and downplaying links that might send people away from the platform. The new paradigm benefits tech-industry “thread guys,” prompt posts in the “what’s your favorite Marvel movie” vein, and single-topic commentators like Derek Guy, who tweets endlessly about menswear. Algorithmic recommendations make already popular accounts and subjects even more so, shutting out the smaller, more magpie-ish voices that made the old version of Twitter such a lively destination. (Guy, meanwhile, has received so much algorithmic promotion under Musk that he accumulated more than half a million followers.)

The Internet today feels emptier, like an echoing hallway, even as it is filled with more content than ever. It also feels less casually informative. Twitter in its heyday was a source of real-time information, the first place to catch wind of developments that only later were reported in the press. Blog posts and TV news channels aggregated tweets to demonstrate prevailing cultural trends or debates. Today, they do the same with TikTok posts—see the many local-news reports of dangerous and possibly fake “TikTok trends”—but the TikTok feed actively dampens news and political content, in part because its parent company is beholden to the Chinese government’s censorship policies. Instead, the app pushes us to scroll through another dozen videos of cooking demonstrations or funny animals. In the guise of fostering social community and user-generated creativity, it impedes direct interaction and discovery.

According to Eleanor Stern, a TikTok video essayist with nearly a hundred thousand followers, part of the problem is that social media is more hierarchical than it used to be. “There’s this divide that wasn’t there before, between audiences and creators,” Stern said. The platforms that have the most traction with young users today—YouTube, TikTok, and Twitch—function like broadcast stations, with one creator posting a video for her millions of followers; what the followers have to say to one another doesn’t matter the way it did on the old Facebook or Twitter. Social media “used to be more of a place for conversation and reciprocity,” Stern said. Now conversation isn’t strictly necessary, only watching and listening.

Posting on social media might be a less casual act these days, as well, because we’ve seen the ramifications of blurring the border between physical and digital lives. Instagram ushered in the age of self-commodification online—it was the platform of the selfie—but TikTok and Twitch have turbocharged it. Selfies are no longer enough; video-based platforms showcase your body, your speech and mannerisms, and the room you’re in, perhaps even in real time. Everyone is forced to perform the role of an influencer. The barrier to entry is higher and the pressure to conform stronger. It’s no surprise, in this environment, that fewer people take the risk of posting and more settle into roles as passive consumers.

The patterns of life offscreen affect the makeup of the digital world, too. Having fun online was something that we used to do while idling in office jobs: stuck in front of computers all day, we had to find something on our screens to fill the down time. An earlier generation of blogs such as the Awl and Gawker seemed designed for aimless Internet surfing, delivering intermittent gossip, amusing videos, and personal essays curated by editors with quirky and individuated tastes. (When the Awl closed, in 2017, Jia Tolentino lamented the demise of “online freedom and fun.”) Now, in the aftermath of the pandemic, amid ongoing work-from-home policies, office workers are less tethered to their computers, and perhaps thus less inclined to chase likes on social media. They can walk away from their desks and take care of their children, walk their dog, or put their laundry in. This might have a salutary effect on individuals, but it means that fewer Internet-obsessed people are furiously creating posts for the rest of us to consume. The user growth rate of social platforms over all has slowed over the past several years; according to one estimate, it is down to 2.4 per cent in 2023.

That earlier generation of blogs once performed the task of aggregating news and stories from across the Internet. For a while, it seemed as though social-media feeds could fulfill that same function. Now it’s clear that the tech companies have little interest in directing users to material outside of their feeds. According to Axios, the top news and media sites have seen “organic referrals” from social media drop by more than half over the past three years. As of last week, X no longer displays the headlines for articles that users link to. The decline in referral traffic disrupts media business models, further degrading the quality of original content online. The proliferation of cheap, instant A.I.-generated content promises to make the problem worse.

Choire Sicha, the co-founder of the Awl and now an editor at New York, told me that he traces the seeds of social media’s degradation back a decade. “If I had a time machine I’d go back and assassinate 2014,” he said. That was the year of viral phenomena such as Gamergate, when a digital mob of disaffected video-game fans targeted journalists and game developers on social media; Ellen DeGeneres’s selfie with a gaggle of celebrities at the Oscars, which got retweeted millions of times; and the brief, wondrous fame of Alex, a random teen retail worker from Texas who won attention for his boy-next-door appearance. In those events, we can see some of the nascent forces that would solidify in subsequent years: the tyranny of the loudest voices; the entrenchment of traditional fame on new platforms; the looming emptiness of the content that gets most furiously shared and promoted. But at that point they still seemed like exceptions rather than the rule.

I have been trying to recall the times I’ve had fun online unencumbered by anonymous trolling, automated recommendations, or runaway monetization schemes. It was a long time ago, before social networks became the dominant highways of the Internet. What comes to mind is a Web site called Orisinal that hosted games made with Flash, the late interactive animation software that formed a significant part of the kitschy Internet of the two-thousands, before everyone began posting into the same platform content holes. The games on the site were cartoonish, cute, and pastel-colored, involving activities like controlling a rabbit jumping on stars into the sky or helping mice make a cup of tea. Orisinal was there for anyone to stumble upon, without the distraction of follower counts or sponsored content. You could e-mail the site to a friend, but otherwise there was nothing to share. That old version of the Internet is still there, but it’s been eclipsed by the modes of engagement that the social networks have incentivized. Through Reddit, I recently dug up an emulator of all the Orisinal games and quickly got absorbed into one involving assisting deer leaping across a woodland gap. My only reward was a personal high score. But it was more satisfying, and less lonely, than the experience these days on X. ♦

Permalink
June 20, 2024 at 11:16:09 PM GMT+2

L’effondrement de l’information ? | Hubert Guillaudhttps://hubertguillaud.wordpress.com/2024/01/11/leffondrement-de-linformation/

  • Social Network
  • Digital Society
  • Press
  • Enshitification

The Collapse of Information?


Since Cambridge Analytica, Trump, Brexit, and Covid, news has become a problem for social networks. Pressed by the authorities to arbitrate the truth, most of them now seem to be retreating from news altogether, becoming venues for self-fulfillment that are averse to politics. That is surely what explains the decline of news in users' feeds, as Charlie Warzel astutely analyzes for The Atlantic. As the New York Times recently put it: "The major online platforms are breaking up with news."

Social-media platforms have long shaped the distribution of news, for instance by pushing media outlets toward video, as Facebook did in 2015 when it deliberately overstated the average time users spent watching videos in order to push outlets into producing video content. Today, the platforms are turning away from news in favor of entertainment and advertising. But they are not alone: readers themselves seem to be hitting an informational ceiling that drives them away from the news, reports the Pew Research Center. News consumption, which is particularly anxiety-inducing, has plunged since 2020. Many have turned to easier content, such as that produced by influencers. "Consumer trust doesn't necessarily rest on the quality of the reporting or on the prestige and history of the brand, but on strong parasocial relationships," Warzel observes. In 2014, the heyday of social news, 75 percent of American adults surveyed by Pew said that the Internet and social media helped them feel more informed. That is no longer the case.

With the algorithmic acceleration of news on social networks, news cycles sped up: Twitter effectively became the editor-in-chief for the hottest topics the media had to cover, in a feedback loop that amplified popular subjects, as with Donald Trump's tweets, which every outlet commented on. From 2013 to 2017, news became the fuel that kept social networks running, gradually turning information into a battlefield. Many users then turned away. New social networks exploded, TikTok among them, and the older networks adapted, Facebook in particular. A recent Morning Consult survey found that "people like Facebook more now that there is less news on it."

Commentary on the news, like the news itself, will not disappear entirely, Warzel reckons, but the media have just lost some of their cultural influence. For John Herrman in the New Yorker, the 2024 US presidential campaign may be the first without media to shape the big political narratives. "Social media brought out the worst in the news business, and the news, in turn, brought out the worst in many social networks." The alliance between social networks and news has run its course. It remains to be seen what the world of influence will produce, in a world where the power of the written word and the structuring of information seem to be fading, because the recommendation machines are no longer built for them.

The end of a common world
In a second article, Warzel returns to this disappearance of news. For him, the Internet is now fragmented by social recommendations, such that we share little of what others consume. "The very notion of popularity is up for debate": no one really knows anymore whether a given trend is as viral as advertised. Metrics that are hard to compare, opaque recommendations, news sites sealed off behind paywalls, the collapsing relevance of news on social media, and an invasion of advertising: we no longer understand what is happening online. You have probably never seen TikTok's most popular videos of the year, any more than Facebook's most-viewed content! And hardly anyone was talking about Netflix's most popular show, The Night Agent! On the one hand, popular content is more viral than ever; on the other, those popularities are more siloed than ever. Audience comparisons across content and platforms are becoming particularly hard to decode. The recent controversy over the supposed popularity of a Bin Laden speech among young Americans, for instance, was not as viral as many claimed, as the Washington Post and Ryan Broderick demonstrated. It is as if we had entered a moment of great confusion about virality, comparing view counts from one platform to another even though their audiences and self-reinforcing mechanisms are very different. The fact that the platforms are shutting off access to their metrics and to researchers doesn't help, of course. Without a scale of comparison, without the means to see what circulates and how, we become blind to every phenomenon, including one in particular: the manipulation of information by foreign powers.


These transformations are not yet complete or digested, and another is already looming, argued James Vincent for The Verge: "the old web is dying and the new web struggles to be born." The production of synthetic text, images, video, and audio is parasitizing this ecosystem in the midst of its recomposition. Accessible directly from search engines, AI output is replacing the traffic that used to lead to the news. "AI aims to produce cheap content from the work of others." Bing AI or Google's Bard could end up killing the very ecosystem that gave search engines their value, by offering their own "artificial abundance." Granted, this would not be the first time the information ecosystem has shifted: Wikipedia did, after all, kill the Encyclopaedia Britannica. But for James Vincent, while the web has from the beginning structured the great battle over information by changing the producers, the modes of access, and the business models, this emerging configuration offers no guarantee that the system to come will be better than the one we had.

"The Internet isn't fun anymore," lamented Kyle Chayka in the New Yorker. Through successive algorithmic adjustments, the social networks have become utterly boring, explained Marie Turcan of Numerama, denouncing the web of boredom. The burying of external links, and of writing in general in favor of video, seems to be finishing off what quality remained, as David-Julien Rahmil reports for L'ADN. In another article, Rahmil notes that direct exchanges have overtaken public ones: "Omnipresent advertising, the exacerbation of political tensions, a culture of perpetual clashes, and a feeling of informational burnout have no doubt hastened the fall of the big social platforms." From now on, each platform works only for itself. On an Internet more fragmented than ever, each platform will produce its own professionals and its own influencers, and it is quite likely that they will no longer overlap from one platform to the next.


As for the social networks, they have devalued themselves, as shown by Twitter, which long embodied the real-time news feed, the central venue of an influential and somewhat elitist conversation, explains Nilay Patel for The Verge. It was "the context collapse that made Twitter so dangerous and so reductive, but it was also what made it exciting." The platform made its users faster and more agile, but also too reactive. Brands moved away from the media to manage their own social presence. "Stepping back now, you can see exactly how destructive this situation has been for journalism: journalists around the world supplied Twitter with real-time news and commentary for free, increasingly learning to shape stories for the algorithm rather than for their actual readers. Meanwhile, the media companies they worked for faced an exodus of their biggest advertising clients to social platforms offering better, more integrated ad products, a direct connection to the audience, and no constraining editorial ethics. The news got smaller and smaller, even as the stories got bigger." Everyone there was a journalist, even as the news business itself was drying up. "Twitter was founded in 2006. Since that year, newspaper employment has fallen by 70 percent, and the residents of more than half of US counties have little or no local news." With the pandemic, Trump, and Black Lives Matter, Twitter reached a tipping point, collapsing under its own power. The audience began to ebb away under its toxicity. For Patel, Musk's takeover of the platform is a reaction to the waning power of celebrities and tech people.
By amping up its virality and its toxicity, the platform keeps declining. The challengers (Bluesky, Threads, Mastodon...) are to Twitter "what methadone is to heroin." The audience is more fragmented than ever, like those users who still run from one platform to another to message their contacts, or those readers disoriented at no longer finding anything to read.

Generational change or enjunkification?
The age of conversation that opened the twenty-first-century web is over! And what remains of our conversations will be taken over by conversational agents, which will be far more effective political and ideological agents than our fellow humans, as Olivier Ertzscheid explains. In time, chatbots even point toward a still more personal relationship to information, each of us talking with our own, without really having any ties to shared content anymore.


For Max Read, in the New York Times, perhaps these ongoing changes should be read differently. These transformations also have economic origins, he notes, though too briefly. "The end of the low-interest-rate era upended startup economics, putting an end to fast-growth practices like blitzscaling and reducing the number of new Internet companies vying for our attention; companies like Alphabet and Facebook are now mature, dominant firms instead of disruptive newcomers." Yet rather than digging into this economic explanation, Max Read settles on another one. If the Internet is dying, it is first of all because we are getting older. The form and culture of the Internet were shaped by the preferences of the generations that took part in it. Today's Internet is no longer that of social media (2000-2010), nor that of social networks (2010-2020). "According to the consumer-research firm GWI, millennials' screen time has been declining steadily for years. Only 42 percent of 30-to-49-year-olds say they are online 'almost constantly,' compared with 49 percent of 18-to-29-year-olds. We are not even the early adopters anymore: 18-to-29-year-olds are more likely to have used ChatGPT than 30-to-49-year-olds, though perhaps only because we no longer have homework to do."

"The most engaged American audience on the Internet is no longer millennials but our successors in Generation Z. If the Internet is no longer fun for millennials, it may simply be because it is no longer our Internet. It now belongs to the zoomers."

The formats, the celebrities, the very language of this generation are totally different, Read explains. "The zoomers, and the Gen Alpha teens nipping at their generational heels, still seem to be having fun online. Even if I find it all impenetrable and a little irritating, the creative expression and exuberant sociality that made the Internet so much fun for me a decade ago are flourishing among twentysomethings on TikTok, Instagram, Discord, Twitch, and even X. Skibidi Toilet, the Fanum tax, the rizzler: I won't demean myself by pretending to know what these memes are or what their appeal is, but I know that the zoomers seem to love them. Or at least I can verify that they love using them to confuse and alienate middle-aged millennials like me."

Granted, they are being captured and exploited by a small handful of powerful platforms, but others before them sought to arbitrate and commodify our online activity. "Engagement-driven platforms have always cultivated influencers, abuse, and disinformation. When you dig deeper, what seems to have changed on the web in recent years is not the structural dynamics but the cultural signifiers."

"In other words, enjunkification has always been underway on the commercial web, whose largely ad-based business model seems to impose an ever-shifting race to the bottom. Perhaps what frustrated, alienated, aging Internet users like me are experiencing here is not only the fruit of an enjunkified Internet, but also the loss of the cognitive elasticity, the sense of humor, and the abundance of free time needed to navigate all this bewildering junk with agility and good cheer."

But that is a very pessimistic view of the current transformations. In Rolling Stone, Anil Dash is enthusiastic: with its fragmentation, the Internet is becoming weird again, the way it was at the start! The disappearance of central apps (even if that is not quite entirely the case) promises a return of strange services and unexpected propositions, like Neta Bomani's school of poetic computation, or those of the bot builder Stephan Bohacek, or those of the designer Elan Kiderman Ullendorff, who amuses himself by making proposals for "escaping the algorithms," or the small subversions of the artist and programmer Darius Kazemi, who invited people to create their own autonomous micro social networks on Mastodon.

It is not clear that these subversions ever stopped; above all, they were rendered invisible by the big social platforms. Nor is it clear that the coming influence audience and synthetic audience will give them any more room than they had yesterday. Still, Anil Dash is right about one thing: the only certainty is that the strangest content will keep trying to reach us. Like the videos that colonized young people's feeds via a few keywords, which James Bridle denounced in his excellent book, New Dark Age. Elan Kiderman Ullendorff amused himself by creating a TikTok account of the most repellent videos he was offered, skipping everything that interested him and keeping only the worst: videos that seem to compose a portrait of Dorian Gray for each of us. The addictive web is the mirror of the repulsive web; the web we hate, the mirror of the web of our dreams. The only certainty, yes: tomorrow's web is likely to be far stranger and more disturbing than today's! With the algorithmic adjustments having cut away the most interesting material, we will probably be confronted with the worst more than ever.

Hubert Guillaud

Permalink
June 20, 2024 at 10:36:54 PM GMT+2

Underage Workers Are Training AI | WIREDhttps://www.wired.com/story/artificial-intelligence-data-labeling-children/

  • Artificial intelligence
  • Gig Worker
  • Big Corpo

Underage Workers Are Training AI

Companies that provide Big Tech with AI data-labeling services are inadvertently hiring young teens to work on their platforms, often exposing them to traumatic content.


Like most kids his age, 15-year-old Hassan spent a lot of time online. Before the pandemic, he liked playing football with local kids in his hometown of Burewala in the Punjab region of Pakistan. But Covid lockdowns made him something of a recluse, attached to his mobile phone. “I just got out of my room when I had to eat something,” says Hassan, now 18, who asked to be identified under a pseudonym because he was afraid of legal action. But unlike most teenagers, he wasn’t scrolling TikTok or gaming. From his childhood bedroom, the high schooler was working in the global artificial intelligence supply chain, uploading and labeling data to train algorithms for some of the world’s largest AI companies.

The raw data used to train machine-learning algorithms is first labeled by humans, and human verification is also needed to evaluate their accuracy. This data-labeling ranges from the simple—identifying images of street lamps, say, or comparing similar ecommerce products—to the deeply complex, such as content moderation, where workers classify harmful content within data scraped from all corners of the internet. These tasks are often outsourced to gig workers, via online crowdsourcing platforms such as Toloka, which was where Hassan started his career.

A friend put him on to the site, which promised work anytime, from anywhere. He found that an hour’s labor would earn him around $1 to $2, he says, more than the national minimum wage, which was about $0.26 at the time. His mother is a homemaker, and his dad is a mechanical laborer. “You can say I belong to a poor family,” he says. When the pandemic hit, he needed work more than ever. Confined to his home, online and restless, he did some digging, and found that Toloka was just the tip of the iceberg.

“AI is presented as a magical box that can do everything,” says Saiph Savage, director of Northeastern University’s Civic AI Lab. “People just simply don’t know that there are human workers behind the scenes.”

At least some of those human workers are children. Platforms require that workers be over 18, but Hassan simply entered a relative’s details and used a corresponding payment method to bypass the checks—and he wasn’t alone in doing so. WIRED spoke to three other workers in Pakistan and Kenya who said they had also joined platforms as minors, and found evidence that the practice is widespread.

“When I was still in secondary school, so many teens discussed online jobs and how they joined using their parents' ID,” says one worker who joined Appen at 16 in Kenya, who asked to remain anonymous. After school, he and his friends would log on to complete annotation tasks late into the night, often for eight hours or more.

Appen declined to give an attributable comment.

“If we suspect a user has violated the User Agreement, Toloka will perform an identity check and request a photo ID and a photo of the user holding the ID,” Geo Dzhikaev, head of Toloka operations, says.

Driven by a global rush into AI, the global data labeling and collection industry is expected to grow to over $17.1 billion by 2030, according to Grand View Research, a market research and consulting company. Crowdsourcing platforms such as Toloka, Appen, Clickworker, Teemwork.AI, and OneForma connect millions of remote gig workers in the global south to tech companies located in Silicon Valley. Platforms post micro-tasks from their tech clients, which have included Amazon, Microsoft Azure, Salesforce, Google, Nvidia, Boeing, and Adobe. Many platforms also partner with Microsoft’s own data services platform, the Universal Human Relevance System (UHRS).

These workers are predominantly based in East Africa, Venezuela, Pakistan, India, and the Philippines—though there are even workers in refugee camps, who label, evaluate, and generate data. Workers are paid per task, with remuneration ranging from a cent to a few dollars—although the upper end is considered something of a rare gem, workers say. “The nature of the work often feels like digital servitude—but it's a necessity for earning a livelihood,” says Hassan, who also now works for Clickworker and Appen.

Sometimes, workers are asked to upload audio, images, and videos, which contribute to the data sets used to train AI. Workers typically don’t know exactly how their submissions will be processed, but these can be pretty personal: On Clickworker’s worker jobs tab, one task states: “Show us you baby/child! Help to teach AI by taking 5 photos of your baby/child!” for €2 ($2.15). The next says: “Let your minor (aged 13-17) take part in an interesting selfie project!”

Some tasks involve content moderation—helping AI distinguish between innocent content and that which contains violence, hate speech, or adult imagery. Hassan shared screen recordings of tasks available the day he spoke with WIRED. One UHRS task asked him to identify “fuck,” “c**t,” “dick,” and “bitch” from a body of text. For Toloka, he was shown pages upon pages of partially naked bodies, including sexualized images, lingerie ads, an exposed sculpture, and even a nude body from a Renaissance-style painting. The task? Decipher the adult from the benign, to help the algorithm distinguish between salacious and permissible torsos.

Hassan recalls moderating content while under 18 on UHRS that, he says, continues to weigh on his mental health. He says the content was explicit: accounts of rape incidents, lifted from articles quoting court records; hate speech from social media posts; descriptions of murders from articles; sexualized images of minors; naked images of adult women; adult videos of women and girls from YouTube and TikTok.

Many of the remote workers in Pakistan are underage, Hassan says. He conducted a survey of 96 respondents on a Telegram group chat with almost 10,000 UHRS workers, on behalf of WIRED. About a fifth said they were under 18.

Awais, 20, from Lahore, who spoke on condition that his first name not be published, began working for UHRS via Clickworker at 16, after he promised his girlfriend a birthday trip to the turquoise lakes and snow-capped mountains of Pakistan’s northern region. His parents couldn’t help him with the money, so he turned to data work, joining using a friend’s ID card. “It was easy,” he says.

He worked on the site daily, primarily completing Microsoft’s “Generic Scenario Testing Extension” task. This involved testing homepage and search engine accuracy. In other words, did selecting “car deals” on the MSN homepage bring up photos of cars? Did searching “cat” on Bing show feline images? He was earning $1 to $3 each day, but he found the work both monotonous and infuriating. At times he found himself working 10 hours for $1, because he had to do unpaid training to access certain tasks. Even when he passed the training, there might be no task to complete; or if he breached the time limit, they would suspend his account, he says. Then seemingly out of nowhere, he got banned from performing his most lucrative task—something workers say happens regularly. Bans can occur for a host of reasons, such as giving incorrect answers, answering too fast, or giving answers that deviate from the average pattern of other workers. He’d earned $70 in total. It was almost enough to take his high school sweetheart on the trip, so Awais logged off for good.

Clickworker did not respond to requests for comment. Microsoft declined to comment.

“In some instances, once a user finishes the training, the quota of responses has already been met for that project and the task is no longer available,” Dzhikaev said. “However, should other similar tasks become available, they will be able to participate without further training.”

Researchers say they’ve found evidence of underage workers in the AI industry elsewhere in the world. Julian Posada, assistant professor of American Studies at Yale University, who studies human labor and data production in the AI industry, says that he’s met workers in Venezuela who joined platforms as minors.

Bypassing age checks can be pretty simple. The most lenient platforms, like Clickworker and Toloka, simply ask workers to state they are over 18; the most secure, such as Remotasks, employ face recognition technology to match workers to their photo ID. But even that is fallible, says Posada, citing one worker who says he simply held the phone to his grandmother’s face to pass the checks. The sharing of a single account within family units is another way minors access the work, says Posada. He found that in some Venezuelan homes, when parents cook or run errands, children log on to complete tasks. He says that one family of six he met, with children as young as 13, all claimed to share one account. They ran their home like a factory, Posada says, so that two family members were at the computers working on data labeling at any given point. “Their backs would hurt because they have been sitting for so long. So they would take a break, and then the kids would fill in,” he says.

The physical distances between the workers training AI and the tech giants at the other end of the supply chain—“the deterritorialization of the internet,” Posada calls it—creates a situation where whole workforces are essentially invisible, governed by a different set of rules, or by none.

The lack of worker oversight can even prevent clients from knowing if workers are keeping their income. One Clickworker user in India, who requested anonymity to avoid being banned from the site, told WIRED he “employs” 17 UHRS workers in one office, providing them with a computer, mobile, and internet, in exchange for half their income. While his workers are aged between 18 and 20, due to Clickworker’s lack of age certification requirements, he knows of teenagers using the platform.

In the more shadowy corners of the crowdsourcing industry, the use of child workers is overt.

Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart) solving services, where crowdsourcing platforms pay humans to solve captchas, are a less understood part of the AI ecosystem. Captchas are designed to distinguish a bot from a human—the most notable example being Google’s reCaptcha, which asks users to identify objects in images to enter a website. The exact purpose of services that pay people to solve them remains a mystery to academics, says Posada. “But what I can confirm is that many companies, including Google's reCaptcha, use these services to train AI models,” he says. “Thus, these workers indirectly contribute to AI advancements.”

Google did not respond to a request for comment in time for publication.

There are at least 152 active services, mostly based in China, with more than half a million people working in the underground reCaptcha market, according to a 2019 study by researchers from Zhejiang University in Hangzhou.

“Stable job for everyone. Everywhere,” one service, Kolotibablo, states on its website. The company has a promotional website dedicated to showcasing its worker testimonials, which includes images of young children from across the world. In one, a smiling Indonesian boy shows his 11th birthday cake to the camera. “I am very happy to be able to increase my savings for the future,” writes another, no older than 7 or 8. A 14-year-old girl in a long Hello Kitty dress shares a photo of her workstation: a laptop on a pink, Barbie-themed desk.

Not every worker WIRED interviewed felt frustrated with the platforms. At 17, while most of his friends were waiting tables, Younis Hamdeen opted instead to join UHRS via Appen, using the platform for three or four hours a day alongside high school and earning up to $100 a month. Comparing products listed on Amazon was the most profitable task he encountered. “I love working for this platform,” Hamdeen, now 18, says, because he is paid in US dollars—which is rare in Pakistan—and so benefits from favorable exchange rates.

But the fact that the pay for this work is incredibly low compared to the wages of in-house employees of the tech companies, and that the benefits of the work flow one way—from the global south to the global north—leads to uncomfortable parallels. “We do have to consider the type of colonialism that is being promoted with this type of work,” says the Civic AI Lab’s Savage.

Hassan recently got accepted to a bachelor’s program in medical lab technology. The apps remain his sole income; he works an 8 am to 6 pm shift, followed by 2 am to 6 am. However, his earnings have fallen to just $100 per month as demand for tasks has outstripped supply, with more workers having joined since the pandemic.

He laments that UHRS tasks can pay as little as 1 cent. Even on higher-paid jobs, such as occasional social media tasks on Appen, the amount of time he needs to spend doing unpaid research means he needs to work five or six hours to complete an hour of real-time work, all to earn $2, he says.

“It’s digital slavery,” says Hassan.

June 20, 2024 at 10:13:53 PM GMT+2

We Need To Rewild The Internet
https://www.noemamag.com/we-need-to-rewild-the-internet/

  • Digital Society
  • Social Network

We Need To Rewild The Internet

The internet has become an extractive and fragile monoculture. But we can revitalize it using lessons learned by ecologists.

By Maria Farrell and Robin Berjon April 16, 2024

“The word for world is forest” — Ursula K. Le Guin

In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.

So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.

Information and decision-making power now flowed straight to the top. Decades later when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know [what happened](https://files.libcom.org/files/Seeing%20Like%20a%20State%20-%20James%20C.%20Scott.pdf) next.

It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered.

The story of German scientific forestry transmits a timeless truth: When we simplify complex systems, we destroy them, and the devastating consequences sometimes aren’t obvious until it’s too late.

That impulse to scour away the messiness that makes life resilient is what many conservation biologists call the “pathology of command and control.” Today, the same drive to centralize, control and extract has driven the internet to the same fate as the ravaged forests.

The internet’s 2010s, its boom years, may have been the first glorious harvest that exhausted a one-time bonanza of diversity. The complex web of human interactions that thrived on the internet’s initial technological diversity is now corralled into globe-spanning data-extraction engines making huge fortunes for a tiny few.

Our online spaces are not ecosystems, though tech firms love that word. They’re plantations; highly concentrated and controlled environments, closer kin to the industrial farming of the cattle feedlot or battery chicken farms that madden the creatures trapped within.

We all know this. We see it each time we reach for our phones. But what most people have missed is how this concentration reaches deep into the internet’s infrastructure — the pipes and protocols, cables and networks, search engines and browsers. These structures determine how we build and use the internet, now and in the future.

They’ve concentrated into a series of near-planetary duopolies. For example, as of April 2024, Google and Apple’s internet browsers have captured almost 85% of the world market share, Microsoft and Apple’s two desktop operating systems over 80%. Google runs 84% of global search and Microsoft 3%. Slightly more than half of all phones come from Apple and Samsung, while over 99% of mobile operating systems run on Google or Apple software. Two cloud computing providers, Amazon Web Services and Microsoft’s Azure, [make up](https://www.hava.io/blog/2024-cloud-market-share-analysis-decoding-industry-leaders-and-trends) over 50% of the global market. Apple and Google’s email clients manage nearly 90% of global email. Google and Cloudflare serve around 50% of global domain name system requests.

Two kinds of everything may be enough to fill a fictional ark and repopulate a ruined world, but can’t run an open, global “network of networks” where everyone has the same chance to innovate and compete. No wonder internet engineer Leslie Daigle termed the concentration and consolidation of the internet’s technical architecture “‘climate change’ of the Internet ecosystem.”

Walled Gardens Have Deep Roots

The internet made the tech giants possible. Their services have scaled globally, via its open, interoperable core. But for the past decade, they’ve also worked to enclose the varied, competing and often open-source or collectively provided services the internet is built on into their proprietary domains. Although this improves their operational efficiency, it also ensures that the flourishing conditions of their own emergence aren’t repeated by potential competitors. For tech giants, the long period of open internet evolution is over. Their internet is not an ecosystem. It’s a zoo.

Google, Amazon, Microsoft and Meta are consolidating their control deep into the underlying infrastructure through acquisitions, vertical integration, building proprietary networks, creating chokepoints and concentrating functions from different technical layers into a single silo of top-down control. They can afford to, using the vast wealth reaped in their one-off harvest of collective, global wealth.

​ “That impulse to scour away the messiness that makes life resilient is what many conservation biologists call the ‘pathology of command and control.’”

Taken together, the enclosure of infrastructure and imposition of technology monoculture forecloses our futures. Internet people like to talk about “the stack,” or the layered architecture of protocols, software and hardware, operated by different service providers that collectively delivers the daily miracle of connection. It’s a complicated, dynamic system with a basic value baked into the core design: Key functions are kept separate to ensure resilience and generality, and to create room for innovation.

Initially funded by the U.S. military and designed by academic researchers to function in wartime, the internet evolved to work anywhere, in any condition, operated by anyone who wanted to connect. But what was a dynamic, ever-evolving game of Tetris with distinct “players” and “layers” is today hardening into a continent-spanning system of compacted tectonic plates. Infrastructure is not just what we see on the surface; it’s the forces below, that make mountains and power tsunamis. Whoever controls infrastructure determines the future. If you doubt that, consider that in Europe we’re still using roads and living in towns and cities the Roman Empire mapped out 2,000 years ago.

In 2019, some internet engineers in the global standards-setting body, the Internet Engineering Task Force, raised the alarm. Daigle, a respected engineer who had previously chaired its oversight committee and internet architecture board, wrote in a policy brief that consolidation meant network structures were ossifying throughout the stack, making incumbents harder to dislodge and violating a core principle of the internet: that it does not create “permanent favorites.” Consolidation doesn’t just squeeze out competition. It narrows the kinds of relationships possible between operators of different services.

As Daigle put it: “The more proprietary solutions are built and deployed instead of collaborative open standards-based ones, the less the internet survives as a platform for future innovation.” Consolidation kills collaboration between service providers through the stack by rearranging an array of different relationships — competitive, collaborative — into a single predatory one.

Since then, standards development organizations have started several initiatives to name and tackle infrastructure consolidation, but these foundered. Bogged down in technical minutiae, unable to separate themselves from their employers’ interests and deeply held professional values of simplification and control, most internet engineers simply couldn’t see the forest for the trees.

Up close, internet concentration seems too intricate to untangle; from far away, it seems too difficult to deal with. But what if we thought of the internet not as a doomsday “hyperobject,” but as a damaged and struggling ecosystem facing destruction? What if we looked at it not with helpless horror at the eldritch encroachment of its current controllers, but with compassion, constructiveness and hope?

Technologists are great at incremental fixes, but to regenerate entire habitats, we need to learn from ecologists who take a whole-systems view. Ecologists also know how to keep going when others first ignore you and then say it’s too late, how to mobilize and work collectively, and how to build pockets of diversity and resilience that will outlast them, creating possibilities for an abundant future they can imagine but never control. We don’t need to repair the internet’s infrastructure. We need to rewild it.

What Is Rewilding?

Rewilding “aims to restore healthy ecosystems by creating wild, biodiverse spaces,” according to the International Union for Conservation of Nature. More ambitious and risk-tolerant than traditional conservation, it targets entire ecosystems to make space for complex food webs and the emergence of unexpected interspecies relations. It’s less interested in saving specific endangered species. Individual species are just ecosystem components, and focusing on components loses sight of the whole. Ecosystems flourish through multiple points of contact between their many elements, just like computer networks. And like in computer networks, ecosystem interactions are multifaceted and generative.

Rewilding has much to offer people who care about the internet. As Paul Jepson and Cain Blythe wrote in their book “Rewilding: The Radical New Science of Ecological Recovery,” rewilding pays attention “to the emergent properties of interactions between ‘things’ in ecosystems … a move from linear to systems thinking.”

It’s a fundamentally cheerful and workmanlike approach to what can seem insoluble. It doesn’t micromanage. It creates room for “ecological processes [that] foster complex and self-organizing ecosystems.” Rewilding puts into practice what every good manager knows: Hire the best people you can, provide what they need to thrive, then get out of the way. It’s the opposite of command and control.

​ “The complex web of human interactions that thrived on the internet’s initial technological diversity is now corralled into globe-spanning data-extraction engines making huge fortunes for a tiny few.”

Rewilding the internet is more than a metaphor. It’s a framework and plan. It gives us fresh eyes for the wicked problem of extraction and control, and new means and allies to fix it. It recognizes that ending internet monopolies isn’t just an intellectual problem. It’s an emotional one. It answers questions like: How do we keep going when the monopolies have more money and power? How do we act collectively when they suborn our community spaces, funding and networks? And how do we communicate to our allies what fixing it will look and feel like?

Rewilding is a positive vision for the networks we want to live inside, and a shared story for how we get there. It grafts a new tree onto technology’s tired old stock.

What Ecology Knows

Ecology knows plenty about complex systems that technologists can benefit from. First, it knows that shifting baselines are real.

If you were born around the 1970s, you probably remember many more dead insects on the windscreen of your parents’ car than on your own. Global land-dwelling insect populations are dropping about 9% a decade. If you’re a geek, you probably programmed your own computer to make basic games. You certainly remember a web with more to read than the same five websites. You may have even written your own blog.

But many people born after 2000 probably think a world with few insects, little ambient noise from birdcalls, where you regularly use only a few social media and messaging apps (rather than a whole web) is normal. As Jepson and Blythe wrote, shifting baselines are “where each generation assumes the nature they experienced in their youth to be normal and unwittingly accepts the declines and damage of the generations before.” Damage is already baked in. It even seems natural.

Ecology knows that shifting baselines dampen collective urgency and deepen generational divides. People who care about internet monoculture and control are often told they’re nostalgists harkening back to a pioneer era. It’s fiendishly hard to regenerate an open and competitive infrastructure for younger generations who’ve been raised to assume that two or three platforms, two app stores, two operating systems, two browsers, one cloud/mega-store and a single search engine for the world comprise the internet. If the internet for you is the massive sky-scraping silo you happen to live inside and the only thing you can see outside is the single, other massive sky-scraping silo, then how can you imagine anything else?

Concentrated digital power produces the same symptoms that command and control produces in biological ecosystems: acute distress punctuated by sudden collapses once tipping points are reached. What scale is needed for rewilding to succeed? It’s one thing to reintroduce wolves to the 3,472 square miles of Yellowstone, and quite another to cordon off about 20 square miles of a polder (land reclaimed from a body of water) known as Oostvaardersplassen near Amsterdam. Large and diverse Yellowstone is likely complex enough to adapt to change, but Oostvaardersplassen has struggled.

​ “Our online spaces are not ecosystems, though tech firms love that word. They’re plantations; highly concentrated and controlled environments … that madden the creatures trapped within.”

In the 1980s, the Dutch government attempted to regenerate a section of the overgrown Oostvaardersplassen. An independent-minded government ecologist, Frans Vera, said reeds and scrub would dominate unless now-extinct herbivores grazed them. In place of ancient aurochs, the state forest management agency introduced the famously bad-tempered German Heck cattle and in place of an extinct steppe pony, a Polish semi-feral breed.

Some 30 years on, with no natural predators, and after plans for a wildlife corridor to another reserve came to nothing, there were many more animals than the limited winter vegetation could sustain. People were horrified by starving cows and ponies, and beginning in 2018, government agencies instituted animal welfare checks and culling.

Just turning the clock back was insufficient. The segment of Oostvaardersplassen was too small and too disconnected to be rewilded. Because the animals had nowhere else to go, overgrazing and collapse was inevitable, an embarrassing but necessary lesson. Rewilding is a work in progress. It’s not about trying to revert ecosystems to a mythical Eden. Instead, rewilders seek to rebuild resilience by restoring autonomous natural processes and letting them operate at scale to generate complexity. But rewilding, itself a human intervention, can take several turns to get right.

Whatever we do, the internet isn’t returning to old-school then-common interfaces like FTP and Gopher, or organizations operating their own mail servers again instead of off-the-shelf solutions like G-Suite. But some of what we need is already here, especially on the web. Look at the resurgence of RSS feeds, email newsletters and blogs, as we discover (yet again) that relying on one app to host global conversations creates a single point of failure and control. New systems are growing, like the Fediverse with its federated islands, or Bluesky with algorithmic choice and composable moderation.

We don’t know what the future holds. Our job is to keep open as much opportunity as we can, trusting that those who come later will use it. Instead of setting purity tests for which kind of internet is most like the original, we can test changes against the values of the original design. Do new standards protect the network’s “generality,” i.e. its ability to support multiple uses, or is functionality limited to optimize efficiency for the biggest tech firms?

As early as 1985, plant ecologists Steward T.A. Pickett and Peter S. White wrote in “The Ecology of Natural Disturbance and Patch Dynamics” that an “essential paradox of wilderness conservation is that we seek to preserve what must change.” Some internet engineers know this. David Clark, a Massachusetts Institute of Technology professor who worked on some of the internet’s earliest protocols, wrote an entire book about other network architectures that might have been built if different values, like security or centralized management, had been prioritized by the internet’s creators.

But our internet took off because it was designed as a general-purpose network, built to connect anyone.

Our internet was built to be complex and unbiddable, to do things we cannot yet imagine. When we interviewed Clark, he told us that “‘complex’ implies a system in which you have emergent behavior, a system in which you can’t model the outcomes. Your intuitions may be wrong. But a system that’s too simple means lost opportunities.” Everything we collectively make that’s worthwhile is complex and thereby a little messier. The cracks are where new people and ideas get in.

Internet infrastructure is a degraded ecosystem, but it’s also a built environment, like a city. Its unpredictability makes it generative, worthwhile and deeply human. In 1961, Jane Jacobs, an American-Canadian activist and author of “The Death and Life of Great American Cities,” argued that mixed-use neighborhoods were safer, happier, more prosperous, and more livable than the sterile, highly controlling designs of urban planners like New York’s Robert Moses.

​ “As a top-down, built environment, the internet has become something that is done to us, not something we collectively remake every day.”

Just like the crime-ridden, Corbusier-like towers Moses crammed people into when he demolished mixed-use neighborhoods and built highways through them, today’s top-down, concentrated internet is, for many, an unpleasant and harmful place. Its owners are hard to remove, and their interests do not align with ours.

As Jacobs wrote: “As in all Utopias, the right to have plans of any significance belonged only to the planners in charge.” As a top-down, built environment, the internet has become something that is done to us, not something we collectively remake every day.

Ecosystems endure because species serve as checks and balances on each other. They have different modes of interaction, not just extraction, but mutualism, commensalism, competition and predation. In flourishing ecosystems, predators are subject to limits. They’re just one part of a complex web that passes calories around, not a one-way ticket to the end of evolution.

Ecologists know that diversity is resilience.

On July 18, 2001, 11 carriages of a 60-car freight train derailed in the Howard Street Tunnel under Mid-Town Belvedere, a neighborhood just north of downtown Baltimore. Within minutes, one carriage containing a highly flammable chemical was punctured. The escaping chemical ignited, and soon, adjacent carriages were alight in a fire that took about five days to put out. The disaster multiplied and spread. Thick, brick tunnel walls acted like an oven, and temperatures rose to nearly 2,000 degrees Fahrenheit. A more than three-foot-wide water main above the tunnels burst, flooding the tunnel with millions of gallons within hours. It only cooled a little. Three weeks later, an explosion linked to the combustible chemical blew out manhole covers located as far as two miles away.

WorldCom, then the second largest long-distance phone company in the U.S., had fiber-optic cables in the tunnel carrying high volumes of phone and internet traffic. However, according to Clark, the MIT professor, WorldCom’s resilience planning meant traffic was spread over different fiber networks in anticipation of just this kind of event.

On paper, WorldCom had network redundancy. But almost immediately, U.S. internet traffic slowed, and WorldCom’s East Coast and transatlantic phone lines went down. The region’s narrow physical topography had concentrated all those different fiber networks into a single chokepoint, the Howard Street Tunnel. WorldCom’s resilience was, quite literally, incinerated. It had technological redundancy, but not diversity. Sometimes we don’t notice concentration until it’s too late.

Clark tells the story of the Howard Street Tunnel fire to show that bottlenecks aren’t always obvious, especially at the operational level, and huge systems that seem secure, due to their size and resources, can unexpectedly crumble.

In today’s internet, much traffic passes through tech firms’ private networks, for example, Google and Meta’s own undersea cables. Much internet traffic is served from a few dominant content distribution networks, like Cloudflare and Akamai, who run their own networks of proxy servers and data centers. Similarly, that traffic goes through an increasingly small number of domain name system (DNS) resolvers, which work like phone books for the internet, linking website names to their numeric address.
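To make the phone-book analogy concrete, here is a minimal, illustrative sketch of the lookup a DNS resolver performs on every page load: turning a human-readable name into the numeric address computers route to. It uses only Python's standard library, which hands the query to whichever resolver the operating system is configured to use—increasingly, one run by a handful of large providers. The `resolve` helper is our own name, not a standard API.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system's configured DNS resolver for a hostname's IPv4 addresses."""
    # getaddrinfo returns tuples of (family, type, proto, canonname, sockaddr);
    # for AF_INET, sockaddr is (ip_address, port).
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # "localhost" resolves locally, without touching the network.
    print(resolve("localhost"))
```

Whoever answers that query decides what address you get back, which is why concentration among resolvers matters: a misconfiguration, outage, or policy change at one provider ripples out to every name it answers for.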

All of this improves network speed and efficiency but creates new and non-obvious bottlenecks like the Howard Street Tunnel. Centralized service providers say they’re better resourced and more skilled at handling attacks and failures, but they are also large, attractive targets for attackers and possible single points of system failure.

On Oct. 21, 2016, dozens of major U.S. websites suddenly stopped working. Domain names belonging to Airbnb, Amazon, PayPal, CNN and The New York Times simply didn’t resolve. All were clients of the commercial DNS service provider, Dyn, which had been hit by a cyberattack. Hackers infected [tens of thousands](https://coverlink.com/case-study/mirai-ddos-attack-on-dyn/) of internet-enabled devices with malicious software, creating a network of hijacked devices, or a botnet, that they used to bombard Dyn with queries until it collapsed. America’s biggest internet brands were brought down by nothing more than a network of baby monitors, security webcams and other consumer devices. Although they all likely had resilience planning and redundancies, they went down because a single chokepoint — in one crucial layer of infrastructure — failed.

​ “Crashes, fires and floods may simply be entropy in action, but systemically concentrated and risky infrastructures are choices made manifest — and we can make better ones.”

Widespread outages due to centralized chokepoints have become so common that investors even use them to identify opportunities. When a failure by cloud provider Fastly took high-profile websites offline in 2021, its share price surged. Investors were delighted by headlines that informed them of an obscure technical service provider with an apparent lock on an essential service. To investors, this critical infrastructure failure doesn’t look like fragility but like a chance to profit.

The result of infrastructural narrowness is baked-in fragility that we only notice after a breakdown. But monoculture is also highly visible in our search and browser tools. Search, browsing and social media are how we find and share knowledge and how we communicate. They’re a critical, global epistemic and democratic infrastructure, controlled by just a few U.S. companies. Crashes, fires and floods may simply be entropy in action, but systemically concentrated and risky infrastructures are choices made manifest — and we can make better ones.

The Look & Feel Of A Rewilded Internet

A rewilded internet will have many more service choices. Some services like search and social media will be broken up, as AT&T eventually was. Instead of tech firms extracting and selling people’s personal data, different payment models will fund the infrastructure we need. Right now, there is little explicit provision for public goods like internet protocols and browsers, essential to making the internet work. The biggest tech firms subsidize and profoundly influence them.

Part of rewilding means taking what’s been pulled into the big tech stack back out of it, and paying for the true costs of connectivity. Some things like basic connectivity we will continue to pay for directly, and others, like browsers, we will support indirectly but transparently, as described below. The rewilded internet will have an abundance of ways to connect and relate to each other. There won’t be just one or two numbers to call if leaders of a political coup decide to shut the internet down in the middle of the night, as has happened in places like Egypt and Myanmar. No one entity will permanently be on top. A rewilded internet will be a more interesting, usable, stable and enjoyable place to be.

Through extensive research, Nobel-winning economist Elinor Ostrom found that “when individuals are well informed about the problem they face and about who else is involved, and can build settings where trust and reciprocity can emerge, grow, and be sustained over time, costly and positive actions are frequently taken without waiting for an external authority to impose rules, monitor compliance, and assess penalties.” Ostrom found people spontaneously organizing to manage natural resources — from water company cooperation in California to Maine lobster fishermen organizing to prevent overfishing.

Self-organization also exists as part of a key internet function: traffic coordination. Internet exchange points (IXPs) are an example of common-pool resource management, where internet service providers (ISPs) collectively agree to carry each other’s data for low or no cost. Network operators of all kinds — telecoms companies, large tech firms, universities, governments and broadcasters — all need to send large amounts of data through other ISPs’ networks so that it gets to its destination.

If they managed this separately through individual contracts, they’d spend much more time and money. Instead, they often form IXPs, typically as independent, not-for-profit associations. As well as managing traffic, IXPs have, in many — and especially developing — countries, formed the backbone of a flourishing technical community that further drives economic development.
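The savings from a shared exchange point are easy to see with back-of-the-envelope arithmetic: full pairwise interconnection requires a bilateral agreement between every pair of networks, which grows quadratically, while an IXP needs only one connection per network. A small illustrative sketch (the function names are ours, purely for illustration):

```python
def pairwise_contracts(n: int) -> int:
    """Bilateral agreements needed if every network contracts
    directly with every other network: n choose 2."""
    return n * (n - 1) // 2

def ixp_connections(n: int) -> int:
    """Connections needed when all networks peer through a single
    shared exchange point: one per network."""
    return n

# With 100 participating networks, pairwise peering would need
# 4,950 separate agreements; an IXP needs just 100 connections.
for n in (10, 100, 500):
    print(n, pairwise_contracts(n), ixp_connections(n))
```

The gap widens fast as membership grows, which is why even competing operators find it cheaper to cooperate through a not-for-profit exchange than to negotiate contracts one by one.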

Both between people and on the internet, connections are generative. From technical standards to common-pool resource management and even to more localized broadband networks known as “altnets,” internet rewilding already has a deep toolbox of collective action ready to be deployed.

The New Drive For Antitrust & Competition

The list of infrastructures to be diversified is long. As well as pipes and protocols, there are operating systems, browsers, search engines, the Domain Name System, social media, advertising, cloud providers, app stores, AI companies and more. And these technologies are also intertwined.

But showing what can be done in one area creates opportunities in others. First, let’s start with regulation.

You don’t always need a big new idea like rewilding to frame and motivate major structural change. Sometimes reviving an old idea will do. President Biden’s 2021 “Executive Order on Promoting Competition in the American Economy” revived the original, pro-worker, trust-busting scope and urgency of the early 20th-century legal activist and Supreme Court Justice Louis D. Brandeis, along with rules and framings that date back to before the 1930s New Deal.

“Rewilding an already built environment isn’t just sitting back and seeing what tender, living thing can force its way through the concrete. It’s razing to the ground the structures that block out light for everyone not rich enough to live on the top floor.”

U.S. antitrust law was created to break the power of oligarchs in oil, steel and railroads who threatened America’s young democracy. It gave workers basic protections and saw equal economic opportunity as essential to freedom. This view of competition as essential was whittled away by Chicago School economic policies in the 1970s and Reagan-era judges’ court rulings over the decades. They believed intervention should only be permitted when monopoly power causes consumer prices to rise. The intellectual monoculture of that consumer-harm threshold has since spread globally.

It’s why governments just stood aside as 21st-century tech firms romped to oligopoly. If a regulator’s sole criterion for action is to make sure consumers don’t pay a penny more, then the free or data-subsidized services of tech platforms don’t even register. (Of course, consumers pay in other ways, as these tech giants exploit their personal information for profit.) This laissez-faire approach allowed the biggest firms to choke off competition by acquiring their competitors and vertically integrating service providers, creating the problems we have today.

Regulators and enforcers in Washington and Brussels now say they have learned that lesson and won’t allow AI dominance to happen as internet concentration did. Federal Trade Commission Chair Lina Khan and U.S. Department of Justice antitrust enforcer, Jonathan Kanter, are identifying chokepoints in the AI “stack” — concentration in control of processing chips, datasets, computing capacity, algorithm innovation, distribution platforms and user interfaces — and analyzing them to see if they affect systemic competition. This is potentially good news for people who want to prevent the current dominance of tech giants being grandfathered into our AI future.

In his 2021 signing of the executive order on competition, President Biden said: “Capitalism without competition isn’t capitalism; it’s exploitation.” Biden’s enforcers are changing the kinds of cases they take up and widening the applicable legal theories on harm that they bring to judges. Instead of the traditionally narrow focus on consumer prices, today’s cases argue that the economic harms perpetrated by dominant firms include those suffered by their workers, small companies and the market as a whole.

Khan and Kanter have jettisoned narrow and abstruse models of market behavior for real-world experiences of healthcare workers, farmers and writers. They get that shutting off economic opportunity fuels far-right extremism. They’ve made antitrust enforcement and competition policy explicitly about coercion versus choice, power versus democracy. Kanter told a recent conference in Brussels that “excessive concentration of power is a threat … it’s not just about prices or output but it’s about freedom, liberty and opportunity.”

Enforcers in Washington and Brussels are starting to preemptively block tech firms from using dominance in one realm to take over another. After scrutiny by the U.S. FTC and European Commission, Amazon recently abandoned its plan to acquire the home appliance manufacturer, iRobot. Regulators on both sides of the Atlantic have also moved to stop Apple from using its iPhone platform dominance to squeeze app store competition and dominate future markets through, for example, pushing the usage of CarPlay on automakers and limiting access to its tap-to-pay digital wallet in the financial services sector.

Still, so far, their enforcement actions have focused on the consumer-facing, highly visible parts of the tech giants’ exploitative and proprietary internet. The few, narrow measures of the 2021 executive order that aim to reduce infrastructure-based monopolies only prevent future abuses like radio spectrum-hogging, not those already locked in. Sure, the best way to deal with monopolies is to stop them from happening in the first place. But unless regulators and enforcers eradicate the existing dominance of these giants now, we’ll be living in today’s infrastructure monopoly for decades, perhaps even a century.

Even activist regulators have shied away from applying the toughest remedies for concentration in long-consolidated markets, such as non-discrimination requirements, functional interoperability and structural separations, i.e. breaking companies up. And declaring that search and social media monopolies are actually public utilities — and forcing them to act as common carriers open to all — is still too extreme for most.

But rewilding a built environment isn’t just sitting back and seeing what tender, living thing can force its way through the concrete. It’s razing to the ground the structures that block out light for everyone not rich enough to live on the top floor.

“Ecologists have reoriented their field as a ‘crisis discipline,’ a field of study that’s not just about learning things but about saving them. We technologists need to do the same.”

When the writer and activist Cory Doctorow wrote about how to free ourselves from the clutches of Big Tech, he said that though breaking up big companies will likely take decades, providing strong and mandatory interoperability would open up innovative space and slow the flow of money to the largest firms — money they would otherwise use to deepen their moats.

Doctorow describes “comcom,” or competitive compatibility, as a kind of “guerrilla interoperability, achieved through reverse engineering, bots, scraping and other permissionless tactics.” Before a thicket of invasive laws sprang up to strangle it, comcom was how people figured out how to fix cars and tractors or rewrite software. Comcom drives the try-every-tactic-until-one-works behavior you see in a flourishing ecosystem.

In an ecosystem, diversity of species is another way of saying “diversity of tactics,” as each successful new tactic creates a new niche to occupy. Whether it’s an octopus camouflaging itself as a sea snake, a cuckoo smuggling her chicks into another bird’s nest, orchids producing flowers that look just like a female bee, or parasites influencing rodent hosts to take life-ending risks, each evolutionary micro-niche is created by a successful tactic. Comcom is simply tactical diversity; it’s how organisms interact in complex, dynamic systems. And humans have demonstrated the epitome of short-term thinking by enabling the oligarchs who are trying to end it.

Efforts are underway. The EU already has several years of experience with interoperability mandates and precious insight into how determined firms work to circumvent such laws. The U.S., however, is still in its early days of ensuring software interoperability, for example, for videoconferencing.

Perhaps one way to motivate and encourage regulators and enforcers everywhere is to explain that the subterranean architecture of the internet has become a shadowland where evolution has all but stopped. Regulators’ efforts to make the visible internet competitive will achieve little unless they also tackle the devastation that lies beneath.

Next Steps

Much of what we need is already here. Beyond regulators digging deep for courage, vision and bold new litigation strategies, we need vigorous, pro-competitive government policies around procurement, investments and physical infrastructure. Universities must reject research funding from tech firms because it always comes with conditions, both spoken and unspoken.

Instead, we need more publicly funded tech research with publicly released findings. Such research should investigate power concentration in the internet ecosystem and practical alternatives to it. We need to recognize that much of the internet’s infrastructure is a de facto utility that we must regain control of.

We must ensure regulatory and financial incentives and support for alternatives including common-pool resource management, community networks, and the myriad other collaborative mechanisms people have used to provide essential public goods like roads, defense and clean water.

All this takes money. Governments are starved of tax revenue by the once-in-history windfalls seized by today’s tech giants, so it’s clear where the money is. We need to get it back.

We know all this, but still find it so hard to collectively act. Why?

Herded into rigid tech plantations rather than functioning, diverse ecosystems, it’s tough to imagine alternatives. Even those who can see clearly may feel helpless and alone. Rewilding unites everything we know we need to do and brings with it a whole new toolbox and vision.

Ecologists face the same systems of exploitation and are organizing urgently, at scale and across domains. They see clearly that the issues aren’t isolated but are instances of the same pathology of command and control, extraction and domination that political anthropologist Scott first noticed in scientific forestry. The solutions are the same in ecology and technology: aggressively use the rule of law to level out unequal capital and power, then rush in to fill the gaps with better ways of doing things.

Keep The Internet, The Internet

Susan Leigh Star, a sociologist and theorist of infrastructure and networks, wrote in her influential 1999 paper, “The Ethnography of Infrastructure”:

“Study a city and neglect its sewers and power supplies (as many have), and you miss essential aspects of distributional justice and planning power. Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change.”

The technical protocols and standards that underlie the internet’s infrastructure are ostensibly developed in open, collaborative standards development organizations (SDOs), but are also increasingly under the control of a few companies. What appear to be “voluntary” standards are often the business choices of the biggest firms.

The dominance of SDOs by big firms also shapes what does not get standardized — for example, search, which is effectively a global monopoly. While efforts to directly address internet consolidation have been raised repeatedly within SDOs, little progress has been made. This is damaging SDOs’ credibility, especially outside the U.S. SDOs must radically change or they will lose their implicit global mandate to steward the future of the internet.

We need internet standards to be global, open and generative. They’re the wire models that give the internet its planetary form, the gossamer-thin but steely-strong threads holding together its interoperability against fragmentation and permanent dominance.

Make Laws & Standards Work Together

In 2018, a small group of Californians maneuvered the Legislature into passing the [California Consumer Privacy Act](https://oag.ca.gov/privacy/ccpa). Nested in the statute was an unassuming provision, the “right to opt out of sale or sharing” of your personal information via a “user-enabled global privacy control,” or GPC, signal that would create an automated method for doing so. The law didn’t define how GPC would work. Because a technical standard was required for browsers, businesses and providers to speak the same language, the signal’s details were delegated to a group of experts.

In July 2021, California’s attorney general mandated that all businesses honor the newly created GPC signal for California-based consumers visiting their websites. The group of experts is now shepherding the technical specification through global web standards development at the World Wide Web Consortium. For California residents, GPC automates the request to “accept” or “reject” sales of their data, such as cookie-based tracking, on the websites they visit. However, it isn’t yet supported by major default browsers like Chrome and Safari. Broad adoption will take time, but it’s a small step in changing real-world outcomes by driving antimonopoly practices deep into the standards stack — and it’s already being [adopted](https://usercentrics.com/knowledge-hub/what-is-global-privacy-control/) elsewhere.
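Technically, the GPC signal is tiny: a participating browser attaches a `Sec-GPC: 1` HTTP header to its requests (and exposes a `navigator.globalPrivacyControl` property to page scripts). A minimal server-side check might look like this sketch in Python; the function name is our own illustration, not part of any standard API:

```python
def honors_gpc(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control signal.

    Per the GPC proposal, a participating browser sends the header
    `Sec-GPC: 1` with its requests. HTTP header names are compared
    case-insensitively, so normalize before checking.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# Example: a request from a GPC-enabled browser.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
if honors_gpc(request_headers):
    # Treat this visitor as having opted out of sale/sharing of their data.
    print("GPC opt-out signal received")
```

The simplicity is the point: one header, sent automatically, replaces millions of individual clicks through cookie banners.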

GPC is not the first legally mandated open standard, but it was deliberately designed from day one to bridge policymaking and standards-setting. The idea is gaining ground. A recent United Nations Human Rights Council report recommends that states delegate “regulatory functions to standard-setting organizations.”

Make Service-Providers — Not Users — Transparent

Today’s internet offers minimal transparency of key internet infrastructure providers. For example, browsers are highly complex pieces of infrastructure that determine how billions of people use the web, yet they are provided for free. That’s because the most commonly used search engines enter into opaque financial deals with browsers, paying them to be set as the default. Since few people change their default search engine, browsers like Safari and Firefox make money by defaulting the search bar to Google, locking in its dominance even as the search engine’s quality of output declines.

This creates a quandary. If antitrust enforcers were to impose competition, browsers would lose their main source of income. Infrastructure requires money, but the planetary nature of the internet challenges our public funding model, leaving the door open to private capture. However, if we see the current opaque system as what it is, a kind of non-state taxation, then we can craft an alternative.

Search engines are a logical place for governments to mandate the collection of a levy that supports browsers and other key internet infrastructure, which could be financed transparently under open, transnational, multistakeholder oversight.

Make Space To Grow

We need to stop thinking of internet infrastructure as too hard to fix. It’s the underlying system we use for nearly everything we do. The former prime minister of Sweden, Carl Bildt, and former Canadian deputy foreign minister, Gordon Smith, wrote in 2016 that the internet was becoming “the infrastructure of all infrastructure.” It’s how we organize, connect and build knowledge, even — perhaps — planetary intelligence. Right now, it’s concentrated, fragile and utterly toxic.

Ecologists have reoriented their field as a “crisis discipline,” a field of study that’s not just about learning things but about saving them. We technologists need to do the same. Rewilding the internet connects and grows what people are doing across regulation, standards-setting and new ways of organizing and building infrastructure, to tell a shared story of where we want to go. It’s a shared vision with many strategies. The instruments we need to shift away from extractive technological monocultures are at hand or ready to be built.

Permalink
June 19, 2024 at 8:58:56 PM GMT+2

Jonathan Haidt Talks His New Book ‘The Anxious Generation’
https://www.noemamag.com/social-media-messed-up-our-kids-now-it-is-making-us-ungovernable/

  • Social Network
  • Societal Collapse
  • Attention Economy
  • Psychology

Social Media Messed Up Our Kids. Now It Is Making Us Ungovernable.

Jonathan Haidt talks to Noema about “The Anxious Generation” and how technology is upending democracy.

InterviewDigital Society June 13, 2024

In a conversation with Noema editor-in-chief Nathan Gardels, the social psychologist Jonathan Haidt discusses the impact of social media on truth in politics, the mental health crisis of today’s youth, and what to do about it.

Nathan Gardels: For those who haven’t read your book, “The Anxious Generation,” can you summarize the main thesis?

Jonathan Haidt: It all begins with a mystery: Why is it that mental health statistics for American teenagers were pretty flat, with no sign of any problem, from the late ’90s through 2010 or 2011? That is true whether we look at depression, anxiety or self-harm. And then, all of a sudden, in 2012, it’s as though someone flipped a switch, and the girls began getting much more anxious, depressed and self-harming. It was true of boys too, but not as suddenly. It was more gradual in the early 2010s.

We first discovered this on college campuses because the students who entered universities from 2014 to 2015 were very different from our stereotype of college students who want to have fun, who want to drink and party.

The students arriving in 2014 and 2015 were much more anxious. And they were especially triggered by words or jokes, speakers or books. It was that observation that led Greg Lukianoff to propose the hypothesis that college is doing something to kids to make them think in this distorted way. That was the basis of our book “The Coddling of the American Mind.”

But now it’s becoming clearer that what we saw and wrote about in that book wasn’t just happening to college students, but actually to all teenagers born after 1995. And it was not only observable in the U.S., Britain and Canada but a lot of other countries as well. What happened? Why was it so sudden? So that’s the mystery.

Was it some chemical dropped in the water supply all over North America and Northern Europe, along with the South Pacific? Or was it the massive change in the technological environment of childhood in all these countries simultaneously? This seemed the obvious hypothesis.

So, the first chapter of “The Anxious Generation” discusses what actually happened to teen mental health. And then the rest of the book seeks to unravel the mystery. It’s not just about “social media is destroying everybody.” It’s a more subtle and interesting story about the transformation of childhood — a tragedy that occurred in three acts.

Act I, which I only hinted at in the book, was the loss of community. So, if you look at America, especially in the years just after World War II, social capital was very high. The best way to make people trust each other is to have someone attack them from the outside — come together, fight a war and win. Social capital was very high in the U.S. in the 1940s and 1950s, and then it begins to drop over succeeding decades for many reasons.

Robert Putnam talked about this in “Bowling Alone.” You have smaller family sizes; people retreat inside because now they have air conditioning and TV and they’re not out in the front yard socializing as much. So, for a lot of reasons, we begin to lose trust in each other. We begin to lose social capital. That’s Act I of the tragedy.

Because of that, Act II happens, which is when we take away play-based childhood. Children used to always play together. It didn’t matter if it was raining or snowing, if there was a crime wave or drunk drivers, kids went out to play. Like all mammals, we evolved to play, in order to wire up our relatively large brains.

But in the ’90s, we decided it was too dangerous for kids to be out and about. They’ll get kidnapped or sexually abused, we thought, because we no longer trusted our neighbors. So, we locked our kids up out of fear of each other. In other words, over protection. This is the coddling part.

Then, after losing strong communities and play-based childhoods, we’re ready for the third act in the tragedy: the massive, sudden transformation of childhood between 2010 and 2015 into a phone-based childhood.

In 2010, the vast majority of teens across the developed world had cell phones. But they were flip phones or basic phones, with no internet browser. All you could do with them was text and call. That was pretty much it, aside from some games. It wasn’t for constant communication. And that’s good. Kids could text their friends and say, “Let’s meet up at 3 p.m.” It was a simple tool. There was very little high-speed internet then and no front-facing camera. There was Facebook, but no Instagram. That’s the way things were in 2010.

“All of a sudden, in 2012, it’s as though someone flipped a switch, and the girls began getting much more anxious, depressed and self-harming.”

In 2010, kids in the U.S. and other Anglo countries still had a recognizably human childhood. They would meet up in person, even if they now had less freedom to roam. By 2015, that all changed when about 80% of those kids had a smartphone with a front-facing camera and a bunch of social media apps. So now we have the selfie culture. Almost everyone now has high-speed internet and now everyone can display video.

In short, by 2015 we have what I call “the great rewiring of childhood.” And that’s why 2012, which is, incidentally, the year Facebook bought Instagram, is when online life changed, especially for girls, who flocked onto Instagram. And it was right after that when we first noticed the widespread upsurge in anxiety, depression and self-harm.

Gardels: The main criticism of your thesis is that you are mistaking correlation for cause and being too technologically determinist. How do you respond to that?

Haidt: First of all, my story is not just about technology, it is sociological. It’s a cultural psychology story. It’s about the change of childhood and human development.

To those who argue these changes could have been caused by any number of factors, I say a couple of things. First, whatever other factor you might think was more determinative, did that happen in New Zealand and Iceland and Australia all at the same time? No one can identify such a factor. Nobody has proposed an alternative theory that works internationally.

Second, it is true that the data is mostly correlational. If you have 300 correlational studies and 25 experimental studies, I would say the data is mostly correlational. The scientific debate has been focused on a very, very narrow question: Do the hours spent on social media tell you anything about the level of mental illness, especially depression and anxiety? There’s a clear correlation in these studies.

But we also have experimental studies, which I cite in the book. I go into great detail about the difference between correlation and causation. Every week, every month, we have more experiments indicating the causality of anxiety-inducing technology.

There are so many causal pathways by which a phone-based childhood harms different kids in different ways. Let me just take the example of sextortion, a very common crime online. There are international sextortion gangs that display avatars of beautiful, sexy young women. An avatar flirts with a boy that she finds, usually on Instagram. And then she convinces him to swap nude images. Boom. Then the sextortionist reveals himself, not as a sexy girl but as a man who now has all the content he needs to ruin you: “I’m going to show this picture of you and your penis to everyone, because I have all your contacts, unless you pay me $500 in two hours.”

The boys panic, and some of them have killed themselves because of the shame. The FBI has identified 20 suicides that were direct results of sextortion, which means there are probably hundreds of cases they didn’t catch, and far more kids who were traumatized by the experience and the shame. Now, is that just a correlation? Would these boys have killed themselves anyway, even if they had not been sextorted? I don’t think so.

Gardels: What are the specific remedies you propose for parents to protect their kids?

Haidt: The key to the whole book is understanding collective action problems, which are sometimes referred to as “the tragedy of the commons,” where each person acting in their own interest ends up bringing about an outcome that’s bad for everyone. If you’re the only one who doesn’t put your sheep out to graze, if you’re the only one who doesn’t fish in the pond, you suffer while everyone else continues to do what they’re doing.

One of the main reasons that we all are giving our kids phones now at age nine or 10 — it gets younger all the time — is because the kid comes home from school and says, “Mom, everyone else has an iPhone, I have to have an iPhone, or I’ll be left out.”

This is a collective action problem because any parent who does the right thing and says, “No, you’re not going to get one until you’re mostly done with puberty,” is imposing a cost on their child. All over the developed world now, family life has devolved into a struggle over screen time and phones. This is terrible. So, the trick is to realize we’re in this problem because everybody else is in this problem.

“All over the developed world now, family life has devolved into a struggle over screen time and phones.”

We’re so deep into this that it is very hard for any family to get out of it by themselves. Some parents are tough and just say “no,” but the status environment doesn’t change for the kids.

What I’m trying to do with the book is to say, if we team up with a few other families, if a small group of parents can get the whole school or school district to say “no,” then they escape and we can change the situation very, very quickly.

What we need is the adoption of four norms that can break the back of the collective action problem.

One: No smartphone before high school. Just keep it out of middle school. Let the kids at least get through early puberty, which is the most sensitive period. You can give them a flip phone if you absolutely need to text. I understand the need to coordinate.

Two: No social media before the age of 16. Social media is entirely inappropriate for children, it cannot be made appropriate because what you’re basically doing is saying, “How about we let the entire world get in touch with you? Let’s let all the companies try to sell things to you, let men all over the world who want to have sex with you contact you, and try to trick you into sending photos.” There’s no way to make this safe. So just recognize that social media is a tool for adults. Eleven-year-olds don’t need to network with strangers.

Third: Schools need to be phone-free. Imagine if, when I was a kid growing up in the ’70s, we had been allowed to bring in our television sets and our radios, along with all sorts of toys and games, and put them on our desks and use them during class. That’s what teachers are facing today. Disgusted and frustrated that they can’t get through to students, teachers are quitting.

Also, global test scores have been dropping since 2012. This did not begin with Covid. It began around 2012. The result is a massive destruction of human capital. So, it’s just kind of obvious. You can’t have kids carry the greatest distraction device ever invented in their pockets while they’re in class. Kids feel they must check their phones during the day. If others are texting, they have to be texting back. So, just lock up the phones in the morning and give them back at the end of the day.

Four: We need to restore a play-based childhood. Kids need more independence, free play and responsibility in the real world. If you roll back the phone but don’t restore play, a child has no childhood. So, roll it back and instead give them adventure and fun with other kids.

We parents need to overcome our own fears and let our children learn how to play with each other. Kids playing in groups are very safe. That’s how they learn to get along. That’s how they’re going to resolve disputes in life.

If we do these four things I’m pretty confident that rates of mental illness will come down within two years. Experience so far shows that phone-free schools get great results within a month. In various childhood independence projects, you get results within a month. If any community does all four of these, I believe they’re going to see pretty big drops in depression, anxiety, self-harm and other problems in short order.

Gardels: Do you worry that more prosperous parents with the means and time to be attentive to their kids will follow your advice, while the less well-off, busy working two jobs with less time for their kids, won’t? That this will just create a greater gap in society?

Haidt: Yes, I do expect that it will begin this way, with the most educated and wealthy families. But I think it will spread quickly as parents begin to see and hear about the benefits. Also, I should note that the most educated families apply the most limits, whereas children in low socioeconomic status, single-parent, or Black or Hispanic families have one to two hours more screen time per day, so going phone-free will disproportionately help them.

Gardels: Implicit in your remarks is you don’t have any faith in the Instagrams or TikToks of the world to be able to regulate themselves so they do less harm?

“What we need is the adoption of four norms that can break the back of the collective action problem.”

Haidt: Right now, as long as you’re old enough to lie about your age, you can go to Pornhub. You can open 20 Instagram accounts, you can open TikTok accounts. The law says you have to be 13 to sign a contract with a company to give away your data without your parents’ knowledge. But the law is written in such a way that there’s no responsibility for the companies if they don’t know your real age. As long as they don’t know your real age, they can’t be held liable for serving you eating disorder content or sex and violence.

We’re talking about five to 10 companies here that own our children’s childhood. They have a lot more influence over our kids than we do in some ways. And they have no responsibility. They are literally protected from lawsuits by Section 230 of the Communications Decency Act, which shields them from liability for the content on their platforms.

This is a completely insane situation. And they’re making huge amounts of money. So no, I don’t expect them to do anything until they’re forced by legislation, or by enormous losses in court.

Gardels: Your book has obviously hit a chord with parents and with school authorities. Do you have any sense of how the TikTok crowd or kids themselves see it?

Haidt: When you survey kids who’ve been through this, it’s really hard to find members of Gen Z who are opposed to what I’m saying. In fact, I actually haven’t found any. They almost always say, “Yeah, you know, you’re right. This really messed us up. But, you know, what are you going to do? This is just the way things are, and I can’t quit because everyone else is on.” There’s just an extraordinary sense of fatalism. We don’t find any young people organizing to protect their rights to have these things. The older kids generally say, if we could get everyone off, we should do that.

Gardels: The Chinese cyberspace authorities have no qualms about imposing limits on social media. Here are the rules:

  • Children under 8: Can only use smart devices for 40 minutes per day and can only consume content about “elementary education, hobbies and interests, and liberal arts education”
  • Children aged 8 to 15: Can use their phone for no more than one hour per day
  • Children aged 16 to 17: Can use a handset for a maximum of two hours per day
  • Minor mode: Requires mobile devices, apps and app stores to have a built-in mode that would bar users under 18 from accessing the internet on mobile devices from 10 p.m. to 6 a.m.

Perhaps they will produce more mentally healthy kids?

Haidt: China is engaged in a battle with the United States for cultural and economic supremacy. Since our young people are giving away all of their available attention, there’s a good chance that they will be less creative and less productive. They don’t have any spare attention to actually do anything. I imagine that makes the Chinese government happy.

The worst single product for American children is TikTok. It sucks up more of their time, energy and attention than any other product. And it harms them. It doesn’t do anything good for them. TikTok has more influence over our kids than any other organization on the planet. So, there are many reasons to think that that is a danger not only to our kids, but to our country.

It seems the Chinese are doing the right thing by using their authoritarian system to reduce the damage to their own children.

Of course, authoritarian solutions are not right for us, but we can do similar things through democratic solutions, through community and civil society. One thing Tocqueville praised Americans about is that when something needs doing, say the townspeople need to build a bridge, they just do it. They don’t wait for the state like in France. They don’t wait for the King like in Britain. Americans come together as citizens, elect a leader, raise money and then they do it.

So, I’m hopeful that my book presents norms that we adopt ourselves, even if we never get any help from Congress or lawmakers. Doing it ourselves — in groups of parents organized around schools — is a very American solution to what I think is one of the largest problems facing America today.

“TikTok has more influence over our kids than any other organization on the planet.”

Gardels: To go back to the coddled generation argument. What do you make of all these kids in college today putting up barricades, occupying administration buildings protesting the war in Gaza?

Haidt: Most of the activism of the college kids has moved online. That tends to be very ineffective and creates a culture that is bad for activists. I put some research in the book showing that before 2010, being politically active was actually associated with better mental health. You were engaged, you were part of a group, you were energized. After 2010, activists, especially progressive activists, are the least happy people in the country. They are marinating in beliefs about oppressor versus victim and embracing the untruths of the coddled. That was certainly true until very recently.

Now it’s true these protests are in person. That’s at least better psychologically for them. They are physically present and interacting with others on campus.

Even so, I think there are signs that it’s different from previous generations. One is that the present protestors expect accommodation, often asking not to be punished for missing classes and seeking delayed exams. In other words, they expect a low cost to themselves. In previous periods of activism, civil disobedience meant that if you broke the law, you paid the consequences to show how committed you were to the cause.

To be sure, today’s actions are communal, which is always very exciting. It’s not as though Gen Z is incapable of acting in person; though, I would point out, it’s overwhelmingly at the elite schools that this is happening.

Gardels: One of the reasons that we have such a paralyzed and polarized society is that the public square has virtually disappeared. Until social media turbocharged fragmentation, there was a common space where competing ideas could be contested in the full gaze of the body politic.

As the philosopher Byung-Chul Han has observed, the peer-to-peer connectivity of social media redirects the flow of communication. Information is spread without forming a public sphere. It is produced in private spaces and distributed to private spaces. The web does not create a public.

The possibility of arriving at a governing consensus through negotiation and compromise is being shattered by a cacophony of niche propagandists egging on their own siloed tribe of the faithful to engage in an endless partisan battle. Indeed, Renée DiResta at Stanford calls the niche ideologues “the new media goliaths” who have supplanted mainstream platforms in terms of influence.

In short, the digital media ecosystem is disempowering the public sphere.

In this sense, social media is not only messing up our kids but undermining the basis of democratic discourse.

Do you agree with that?

Haidt: Absolutely. In an article for the Atlantic in 2019, I made the case, basically along the lines of Han, that massive changes in information flows and the way we connect people change the fundamental ground within which our democratic institutions are operating. And it’s quite possible that we are now so far outside the operating range of these institutions that they will fail.

I’m extremely alarmed about the future of this country. If you read Federalist #10, the Founding Fathers, who were excellent social psychologists, were very afraid of the passions of the people. They didn’t want us to have a direct democracy. They wanted cooling mechanisms of deliberation through reason. The system of governance they devised, with its checks and balances, is really like a complicated clock that they thought could last a very long time precisely because it was realistic about human frailties. And they were right.

Then all of a sudden in the later post-war era — first with television, then the internet and, especially, now peer-to-peer media, it is all going awry. With television, at least there were editors. Jonathan Rauch wrote an amazing book called “The Constitution of Knowledge,” both about the Constitution and how knowledge is constituted.

He discussed how we make knowledge in universities and science and medicine. But he also discussed the U.S. Constitution and how the community of knowledge makers are governed by certain rules and checks and balances. We developed editors, filters and other mechanisms to vet truth.

All that’s going away now. Or at least the institutions are so weakened as to be feeble. I’m very alarmed. And, at the same time, what’s replacing them are the sorts of peer-to-peer networks that you’re talking about.

“Until social media turbocharged fragmentation, there was a common space where competing ideas could be contested in the full gaze of the body politic.”

In the history of humanity, when you connect people, there could be disruptions. But in the long run, that’s good. It increases the flow of knowledge and increases creativity. You get more value when you connect people. So, the telephone was great, the postal system was great.

Social media is not like those earlier innovations. I think the best metaphor here is to imagine a public square in which people talk to each other. They debate ideas or put forth ideas that may not always be brilliant. They may not always be civil, but people can speak while others listen. Sometimes people are moved by persuasion or dissuasion.

I think the Founding Fathers assumed that’s about the best we can hope for. Imagine one day, and I’ll call it 2009, that all changes. There’s no more public square. Everything takes place in the center of the Roman Colosseum. The stands are full of people who are there to see blood. That’s what they came for. They don’t want to see the lion and the Christian making nice; they want the one to kill the other. That’s what Twitter is often like.

It all becomes performative and comes at a superfast pace. Just as television changed the way we are and made us into passive consumers, the central act in social media is posting, judging, criticizing and joining mobs. Donald Trump is the quintessential person who thrives in that environment. If not for Twitter, Trump never could have been president. So, when our politics moved into the Roman Colosseum, I think the Founding Fathers would have said, “Let’s just give up. There’s no way we can build a democracy in this environment.”

Gardels: Just as republics have historically created institutional checks and balances when too much power is concentrated in one place, so too don’t we need to foster checks and balances for an age when power is so distributed that the public sphere is disempowered?

What I have in mind are the citizens’ assemblies indicative of the public as a whole, which deliberate issues in a non-partisan environment and, outside the electoral sphere where partisans vie for power by any means necessary, are able to come to a consensus through pragmatic, common sense solutions?

Haidt: It’s possible to create these small artificial communities where you lock citizens away together for a week and have them discuss something. They work pretty well from what I know, and they come up with solutions. But it’s not clear to me how you could use that to run a country. The way people feel about let’s say, Donald Trump, has very little to do with some ascertainment of fact.

If you use the word power, then I’m a little bit confused. But I think I see what you’re getting at. If we change the word to authority, it is clearer to me. When I wrote “The Righteous Mind,” I was on the left then and really tried to understand conservatives. Reading conservative writings, especially Edmund Burke and Thomas Sowell, was really clarifying on the idea that we need institutions. We need religion, we need gods, even if it is not true. We need moral order and constraint.

The progressive impulse is to tear things down and make things new. The conservative impulse is to protect authority structures because we need them. Without them, we have chaos. Of course, there are times to tear things down. But I think during the 2010s everything has been torn down, to some extent. This is a time we need to build.

I am very concerned that there is no longer any source of authority. There is no trusted authority, there is no way to find consensus on truth. It seems that the truth-seeking mechanisms, including the courts, came up with the answer that the last presidential election in the U.S. was not stolen. But there’s no real way to spread that around to the large portion of society that believes that it was.

With AI coming in, the problem of the loss of authority is going to be magnified tenfold or even a hundredfold when anyone can create a video of anyone saying anything in that person’s voice. It’s going to be almost impossible to know what’s true. We’re in for a wild ride if we’re going to try to run a democratic republic with no real authority. My fear is that we will simply become ungovernable. I hope not, I hope we find a way to adapt to living in our world after the fall of the tower of Babel, the fall of common understandings and common language.

This interview was edited for brevity and clarity.

Permalink
June 18, 2024 at 8:56:27 PM GMT+2

Pop Culture Has Become an Oligopoly - by Adam Mastroianni https://www.experimental-history.com/p/pop-culture-has-become-an-oligopoly

  • Cultural Downfall
  • Cultural Uniformism
  • Big Corpo

Pop Culture Has Become an Oligopoly

A cartel of superstars has conquered culture. How did it happen, and what should we do about it?

Adam Mastroianni May 02, 2022

You may have noticed that every popular movie these days is a remake, reboot, sequel, spinoff, or cinematic universe expansion. In 2021, only one of the ten top-grossing films––the Ryan Reynolds vehicle Free Guy––was an original. There were only two originals in 2020’s top 10, and none at all in 2019.

People blame this trend on greedy movie studios or dumb moviegoers or competition from Netflix or humanity running out of ideas. Some say it’s a sign of the end of movies. Others claim there’s nothing new about this at all.

Some of these explanations are flat-out wrong; others may contain a nugget of truth. But all of them are incomplete, because this isn’t just happening in movies. In every corner of pop culture––movies, TV, music, books, and video games––a smaller and smaller cartel of superstars is claiming a larger and larger share of the market. What used to be winners-take-some has grown into winners-take-most and is now verging on winners-take-all. The (very silly) word for this is oligopoly: like a monopoly, but with a few players instead of just one.

I’m inherently skeptical of big claims about historical shifts. I recently published a paper showing that people overestimate how much public opinion has changed over the past 50 years, so naturally I’m on the lookout for similar biases here. But this shift is not an illusion. It’s big, it’s been going on for decades, and it’s happening everywhere you look. So let’s get to the bottom of it.

(Data and code available here.)

Movies

At the top of the box office charts, original films have gone extinct.

I looked at the 20 top-grossing movies going all the way back to 1977 (source), and I coded whether each was part of what film scholars call a “multiplicity”—sequels, prequels, franchises, spin-offs, cinematic universe expansions, etc. This required some judgment calls. Lots of movies are based on books and TV shows, but I only counted them as multiplicities if they were related to a previous movie. So 1990’s Teenage Mutant Ninja Turtles doesn’t get coded as a multiplicity, but 1991’s Teenage Mutant Ninja Turtles II: The Secret of the Ooze does, and so does the 2014 Teenage Mutant Ninja Turtles remake. I also probably missed a few multiplicities, especially in earlier decades, since sometimes it’s not obvious that a movie has some connection to an earlier movie.

Regardless, the shift is gigantic. Until the year 2000, about 25% of top-grossing movies were prequels, sequels, spinoffs, remakes, reboots, or cinematic universe expansions. Since 2010, it’s been over 50% every year. In recent years, it’s been close to 100%.

img

Original movies just aren’t popular anymore, if they even get made in the first place.

Top movies have also recently started taking a larger chunk of the market. I extracted the revenue of the top 20 movies and divided it by the total revenue of the top 200 movies, going all the way back to 1986 (source). The top 20 movies captured about 40% of all revenue until 2015, when they started gobbling up even more.
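The revenue-share measure is just a ratio; a minimal sketch of the calculation with made-up numbers (the author’s actual data and code are linked at the top of the post):

```python
# Sketch of the measure described above: the gross of the top 20 movies
# divided by the total gross of the top 200 movies, for one year.
# (Hypothetical figures, not the post's real data.)

def top_share(grosses, k=20, n=200):
    """Fraction of the top-n box-office gross captured by the top k movies."""
    top = sorted(grosses, reverse=True)[:n]
    return sum(top[:k]) / sum(top)

# A stylized year: 20 blockbusters at $100M each, 180 smaller films at $10M.
year = [100.0] * 20 + [10.0] * 180
share = top_share(year)  # 2000 / 3800, about 0.53
```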

img

Television

Thanks to cable and streaming, there's way more stuff on TV today than there was 50 years ago. So it would make sense if a few shows ruled the early decades of TV, and now new shows constantly displace each other at the top of the viewership charts.

Instead, the opposite has happened. I pulled the top 30 most-viewed TV shows from 1950 to 2019 (source) and found that fewer and fewer franchises rule a larger and larger share of the airwaves. In fact, since 2000, about a third of the top 30 most-viewed shows are either spinoffs of other shows in the top 30 (e.g., CSI and CSI: Miami) or multiple broadcasts of the same show (e.g., American Idol on Monday and American Idol on Wednesday).

img

Two caveats to this data. First, I’m probably slightly undercounting multiplicities from earlier decades, where the connections between shows might be harder for a modern viewer like me to understand––maybe one guy hosted multiple different shows, for example. And second, the Nielsen ratings I’m using only recently started accurately measuring viewership on streaming platforms. But even in 2019, only 14% of viewing time was spent on streaming, so this data isn’t missing much.

Music

It used to be that a few hitmakers ruled the charts––The Beatles, The Eagles, Michael Jackson––while today it’s a free-for-all, right?

Nope. A data scientist named Azhad Syed has done the analysis, and he finds that the number of artists on the Billboard Hot 100 has been decreasing for decades.

img

Chart by Azhad Syed

And since 2000, the number of hits per artist on the Hot 100 has been increasing.

img

Chart by Azhad Syed

(Azhad says he’s looking for a job––you should hire him!)

A smaller group of artists tops the charts, and they produce more of the chart-toppers. Music, too, has become an oligopoly.

Books

Literature feels like a different world than movies, TV, and music, and yet the trend is the same.

Using LiteraryHub's list of the top 10 bestselling books for every year from 1919 to 2017, I found that the oligopoly has come to book publishing as well. There are a couple ways we can look at this. First, we can look at the percentage of repeat authors in the top 10––that is, the number of books in the top 10 that were written by an author with another book in the top 10.

img

It used to be pretty rare for one author to have multiple books in the top 10 in the same year. Since 1990, it’s happened almost every year. No author ever had three top 10 books in one year until Danielle Steel did it in 1998. In 2011, John Grisham, Kathryn Stockett, and Stieg Larsson all had two chart-topping books each.
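The repeat-author measure can be sketched the same way; a minimal illustration with an invented year, not the author’s actual code:

```python
from collections import Counter

# Sketch of the repeat-author measure: the fraction of a year's top-10
# books written by an author with more than one book in that top 10.
# (Hypothetical example; the post's real data and code are linked above.)

def repeat_author_share(top10_authors):
    counts = Counter(top10_authors)
    return sum(1 for a in top10_authors if counts[a] > 1) / len(top10_authors)

# A 1998-style year: one author with three books in the top 10.
year = ["Steel", "Steel", "Steel", "A", "B", "C", "D", "E", "F", "G"]
share = repeat_author_share(year)  # 3/10 = 0.3
```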

We can also look at the percentage of authors in the top 10 who were already famous––say, they had a top 10 book within the past 10 years. That has increased over time, too.

img

In the 1950s, a little over half of the authors in the top 10 had been there before. These days, it’s closer to 75%.

Video games

I tracked down the top 20 bestselling video games for each year from 1995 to 2021 (sources: 1, 2, 3, 4, 5, 6, 7) and coded whether each belongs to a preexisting video game franchise. (Some games, like Harry Potter and the Sorcerer’s Stone, belong to franchises outside of video games. For these, I coded the first installment as originals and any subsequent installments as franchise games.)

The oligopoly rules video games too:

img

In the late 1990s, 75% or less of bestselling video games were franchise installments. Since 2005, it’s been above 75% every year, and sometimes it’s 100%. At the top of the charts, it’s all Mario, Zelda, Call of Duty, and Grand Theft Auto.

Why is this happening?

Any explanation for the rise of the pop oligopoly has to answer two questions: why have producers started producing more of the same thing, and why are consumers consuming it? I think the answers to the first question are invasion, consolidation, and innovation. I think the answer to the second question is proliferation.

Invasion

Software and the internet have made it easier than ever to create and publish content. Most of the stuff that random amateurs make is crap and nobody looks at it, but a tiny proportion gets really successful. This might make media giants choose to produce and promote stuff that independent weirdos never could, like an Avengers movie. This can’t explain why oligopolization started decades ago––YouTube only launched in 2005, for example, and most Americans didn’t have broadband until 2007––but it might explain why it’s accelerated and stuck around.

Consolidation

Big things like to eat, defeat, and outcompete smaller things. So over time, big things should get bigger and small things should die off. Indeed, movie studios, music labels, TV stations, and publishers of books and video games have all consolidated. Maybe it’s inevitable that major producers of culture will suck up or destroy everybody else, leaving nothing but superstars and blockbusters. Indeed, maybe cultural oligopoly is merely a transition state before we reach cultural monopoly.

Innovation

You may think there’s nothing left to discover in art forms as old as literature and music, and that they simply iterate as fashions change. But it took humans [thousands of years](http://www.essentialvermeer.com/technique/perspective/history.html#:~:text=In its mathematical form%2C linear,De pictura [On Painting]) to figure out how to create the illusion of depth in paintings. Novelists used to think that sentences had to be long and complicated until Hemingway came along, wrote some snappy prose, and changed everything. Even very old art forms, then, may have secrets left to discover. Maybe the biggest players in culture discovered some innovations that won them a permanent, first-mover chunk of market share. I can think of a few:

  • In books: lightning-quick plots and chapter-ending cliffhangers. Nobody thinks The Da Vinci Code is high literature, but it’s a book that really really wants you to read it. And a lot of people did!
  • In music: sampling. Musicians [seem to sample more often these days](https://www.hypebot.com/hypebot/2019/03/2019-the-state-of-sampling-draft.html#:~:text=1 in 5 Songs on,usually between 20-25%). Now we not only remake songs; we franchise them too.
  • In movies, TV, and video games: cinematic universes. Studios have finally figured out that once audiences fall in love with fictional worlds, they want to spend lots of time in them. Marvel, DC, and Star Wars are the most famous, but there are also smaller universe expansions like Better Call Saul and El Camino from Breaking Bad and The Many Saints of Newark from The Sopranos. Video game developers have understood this for even longer, which is why Mario does everything from playing tennis to driving go-karts to, you know, being a piece of paper.

Proliferation

Invasion, consolidation, and innovation can, I think, explain the pop oligopoly from the supply side. But all three require a willing audience. So why might people be more open to experiencing the same thing over and over again?

As options multiply, choosing gets harder. You can’t possibly evaluate everything, so you start relying on cues like “this movie has Tom Hanks in it” or “I liked Red Dead Redemption, so I’ll probably like Red Dead Redemption II,” which makes you less and less likely to pick something unfamiliar.

Another way to think about it: more opportunities means higher opportunity costs, which could lead to lower risk tolerance. When the only way to watch a movie is to go pick one of the seven playing at your local AMC, you might take a chance on something new. But when you’ve got a million movies to pick from, picking a safe, familiar option seems more sensible than gambling on an original.

This could be happening across all of culture at once. Movies don’t just compete with other movies. They compete with every other way of spending your time, and those ways are both infinite and increasing. There are now [60,000](https://www.gutenberg.org/#:~:text=Project Gutenberg is a library of over 60%2C000 free eBooks) free books on Project Gutenberg, Spotify [says](https://newsroom.spotify.com/company-info/#:~:text=Discover%2C manage and share over,ad-free music listening experience) it has 78 million songs and 4 million podcast episodes, and humanity uploads 500 hours of video to YouTube [every minute](https://www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/#:~:text=As of February 2020%2C more,for online video has grown). So uh, yeah, the Tom Hanks movie sounds good.

What do we do about it?

Some may think that the rise of the pop oligopoly means the decline of quality. But the oligopoly can still make art: Red Dead Redemption II is a terrific game, “Blinding Lights” is a great song, and Toy Story 4 is a pretty good movie. And when you look back at popular stuff from a generation ago, there was plenty of dreck. We’ve forgotten the pulpy Westerns and insipid romances that made the bestseller lists while books like The Great Gatsby, Brave New World, and Animal Farm did not. American Idol is not so different from the televised talent shows of the 1950s. Popular culture has always been a mix of the brilliant and the banal, and nothing I’ve shown you suggests that the ratio has changed.

The problem isn’t that the mean has decreased. It’s that the variance has shrunk. Movies, TV, music, books, and video games should expand our consciousness, jumpstart our imaginations, and introduce us to new worlds and stories and feelings. They should alienate us sometimes, or make us mad, or make us think. But they can’t do any of that if they only feed us sequels and spinoffs. It’s like eating macaroni and cheese every single night forever: it may be comfortable, but eventually you’re going to get scurvy.

We haven’t fully reckoned with what the cultural oligopoly might be doing to us. How much does it stunt our imaginations to play the same video games we were playing 30 years ago? What message does it send that one of the most popular songs in the 2010s was about how a 1970s rock star was really cool? How much does it dull our ambitions to watch 2021’s The Matrix: Resurrections, where the most interesting scene is just Neo watching the original Matrix from 1999? How inspiring is it to watch tiny variations on the same police procedurals and reality shows year after year? My parents grew up with the first Star Wars movie, which had the audacity to create an entire universe. My niece and nephews are growing up with the ninth Star Wars movie, which aspires to move merchandise. Subsisting entirely on cultural comfort food cannot make us thoughtful, creative, or courageous.

Fortunately, there’s a cure for our cultural anemia. While the top of the charts has been oligopolized, the bottom remains a vibrant anarchy. There are weird books and funky movies and bangers from across the sea. Two of the most interesting video games of the past decade put you in the role of an immigration officer and an insurance claims adjuster. Every strange thing, wonderful and terrible, is available to you, but they’ll die out if you don’t nourish them with your attention. Finding them takes some foraging and digging, and then you’ll have to stomach some very odd, unfamiliar flavors. That’s good. Learning to like unfamiliar things is one of the noblest human pursuits; it builds our empathy for unfamiliar people. And it kindles that delicate, precious fire inside us––without it, we might as well be algorithms. Humankind does not live on bread alone, nor can our spirits long survive on a diet of reruns.

Permalink
June 18, 2024 at 8:29:58 PM GMT+2

Réseaux sociaux : la fabrique de l’hostilité politique ? https://theconversation.com/reseaux-sociaux-la-fabrique-de-lhostilite-politique-230458

  • Social Network
  • Disinformation War
  • Police State
  • Fakeness

Social media: manufacturing political hostility?

Published: June 17, 2024, 15:21 CEST

For several years now, social networks like Facebook and X (formerly Twitter) have been the target of numerous accusations: vectors for spreading “fake news” at scale, instruments used by Russia and China to destabilize democracies, machines for capturing our attention and selling it to merchants of every kind, arenas of ever more personalized and manipulative ad targeting, and so on. Witness the success of documentaries and essays on the human cost, judged considerable, of social networks, such as The Social Dilemma on Netflix.

One of these narratives in particular holds digital platforms and their algorithms responsible for amplifying online hostility and political polarization in society. With anonymous online discussions, some claim, anyone could become a troll (an aggressive, cynical person devoid of compassion) or become “radicalized.”

Recent work in quantitative social science and scientific psychology, however, offers some correctives to this excessively pessimistic narrative.

The importance of sociopolitical context and psychology

To begin with, several studies suggest that while individuals regularly experience political discussions that turn conflictual, this incivility is partly tied to psychological and socioeconomic factors that predate digital platforms.

In a large cross-cultural study, we surveyed more than 15,000 people through representative panels in thirty very diverse nations (France, Iraq, Thailand, Pakistan, etc.) about their experiences of online conversations. Our first finding is that it is in the most economically unequal and least democratic countries that individuals are most often the target of hostile invective from their fellow citizens on social networks (as in Turkey or Brazil). This phenomenon evidently stems from the frustrations generated by these societies, which are more repressive of individual aspirations.

Our study further shows that the individuals who engage most in online hostility are also those most inclined to seek social status through risk-taking. This personality trait corresponds to an orientation toward dominance, that is, seeking to bend others to one’s will (including through intimidation). In our cross-cultural data, we observe that individuals with such dominant traits are numerous in unequal and undemocratic countries. Independent analyses show, moreover, that dominance is a key element of the psychology of political conflict, since it also predicts more sharing of “fake news” mocking or insulting political opponents online, and a greater appetite for political conflict offline, among other things.

Replicating an earlier study, we also find that these individuals motivated by status-seeking through risk-taking, who most readily admit to behaving hostilely online, are also more likely to interact aggressively or toxically in face-to-face discussions (the correlation between online and offline hostility is strong, on the order of β = 0.77).

In short, online political hostility appears largely to be the product of particular personalities, made aggressive by the frustrations engendered by unequal social contexts, which activate our tendency to see the world in terms of “us” versus “them.” Politically, reducing wealth disparities between groups and making our institutions more democratic are probably unavoidable goals if we want to bring about a more harmonious Internet (and civil society).

Social networks: prisms that exaggerate ambient hostility

While our study places online political hostility in a broader context, it does not thereby deny the platforms any role in producing political polarization.

Social networks allow a piece of content to be distributed identically to millions of people (unlike verbal communication, where distortion is inevitable). As such, they can misinform or anger millions of people at very low cost. This is true whether the false or toxic information is created intentionally to generate clicks, or is the unintended product of the political biases of a given political group.


If exchanges on social networks often lack civility, it is also because of the opportunity they offer to interact with anonymous, depersonalized strangers. This experience, unique to the Internet era, reduces the sense of personal responsibility, as well as empathy toward interlocutors whom we no longer see as people but as interchangeable members of political "tribes".

Recent analyses also remind us that social networks (like journalism, in many respects) operate less as a mirror of the diversity of opinions in society than as a distorting prism.

Indignant and potentially insulting political posts are often the work of people who are more determined to speak out, and more radical, than average, whether to signal their commitments, vent anger, proselytize, and so on. Even when they represent a fairly small share of what is written on the networks, these posts are promoted by algorithms programmed to surface content capable of attracting attention and triggering responses, a category that includes divisive messages.

Conversely, the majority of users, more moderate and less peremptory, are reluctant to wade into political discussions that rarely reward good-faith argument and often degenerate into "shitstorms" (i.e., outpourings of hate).

These selection and perception biases produce the misleading impression that radical and hostile convictions are both more widespread and more morally tolerated than they actually are.

When exposure to difference irritates

That said, social media use does seem able to contribute to increased political hostility and radicalism through at least one mechanism: exposure to caricatured, aggressive versions of opposing political positions, which grate.

Contrary to popular belief, most of our online connections do not typically take the form of "echo chambers" isolating us in pockets of entirely homogeneous political ideas.

Although some networks are indeed built that way (4chan, or certain subreddits), the largest platforms, Facebook (3 billion users) and X (550 million), typically scroll a certain diversity of opinions before our eyes. That diversity is in any case frequently greater than that of our friendships: are you still in regular contact with middle-school friends who have "gone over to the Front National"? Probably not, but you are more likely to read their Facebook posts.

This exposure to ideological otherness is desirable in theory, since it should help us discover the blind spots in our political knowledge and convictions, as well as our common humanity, and thus make us both humbler and more respectful of one another. Unfortunately, the mode in which most people express their political convictions, on social networks as at the coffee machine, is rather devoid of nuance and pedagogy. It tends to reduce opposing positions to demonized caricatures, and seeks less to persuade the other side than to galvanize those who already agree, or to look good in the eyes of political allies.

Drawing on experimental studies deployed on Twitter and on interviews with Democratic and Republican activists conducted with his team, the sociologist Chris Bail issues a warning in his book Le prisme des réseaux sociaux (Breaking the Social Media Prism). According to him, repeated exposure to unconvincing, mocking content produced by our political enemies can paradoxically entrench partisans in their pre-existing positions and identities, rather than bringing them closer intellectually and emotionally.

However, this relationship between social media use and political polarization may depend heavily on exposure time, and it does not appear in every sample studied. For instance, studies exploring the effects of quitting Facebook and Instagram do not find that using these social media detectably polarizes users' political opinions.

Let us always bear in mind that discourses pointing to threats against society enjoy a considerable competitive advantage in the marketplace of ideas and conversations, because of their attractiveness to our minds. The question of the links between social networks, hostility and political polarization should therefore be approached with nuance, avoiding the symmetrical pitfalls of blissful optimism and collective panic.

Permalink
June 17, 2024 at 10:18:17 PM GMT+2

Pentagon ran secret anti-vax campaign to incite fear of China vaccines
https://www.reuters.com/investigates/special-report/usa-covid-propaganda/

  • Disinformation War
  • PsyOps
  • Social Network
  • Media Manipulation

Pentagon ran secret anti-vax campaign to undermine China during pandemic

The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk.

By CHRIS BING and JOEL SCHECTMAN Filed June 14, 2024, 9:45 a.m. GMT

At the height of the COVID-19 pandemic, the U.S. military launched a secret campaign to counter what it perceived as China’s growing influence in the Philippines, a nation hit especially hard by the deadly virus.

The clandestine operation has not been previously reported. It aimed to sow doubt about the safety and efficacy of vaccines and other life-saving aid that was being supplied by China, a Reuters investigation found. Through phony internet accounts meant to impersonate Filipinos, the military’s propaganda efforts morphed into an anti-vax campaign. Social media posts decried the quality of face masks, test kits and the first vaccine that would become available in the Philippines – China’s Sinovac inoculation.

Reuters identified at least 300 accounts on X, formerly Twitter, that matched descriptions shared by former U.S. military officials familiar with the Philippines operation. Almost all were created in the summer of 2020 and centered on the slogan #Chinaangvirus – Tagalog for China is the virus.

“COVID came from China and the VACCINE also came from China, don’t trust China!” one typical tweet from July 2020 read in Tagalog. The words were next to a photo of a syringe beside a Chinese flag and a soaring chart of infections. Another post read: “From China – PPE, Face Mask, Vaccine: FAKE. But the Coronavirus is real.”

After Reuters asked X about the accounts, the social media company removed the profiles, determining they were part of a coordinated bot campaign based on activity patterns and internal data.

The U.S. military’s anti-vax effort began in the spring of 2020 and expanded beyond Southeast Asia before it was terminated in mid-2021, Reuters determined. Tailoring the propaganda campaign to local audiences across Central Asia and the Middle East, the Pentagon used a combination of fake social media accounts on multiple platforms to spread fear of China’s vaccines among Muslims at a time when the virus was killing tens of thousands of people each day. A key part of the strategy: amplify the disputed contention that, because vaccines sometimes contain pork gelatin, China’s shots could be considered forbidden under Islamic law.

The military program started under former President Donald Trump and continued months into Joe Biden’s presidency, Reuters found – even after alarmed social media executives warned the new administration that the Pentagon had been trafficking in COVID misinformation. The Biden White House issued an edict in spring 2021 banning the anti-vax effort, which also disparaged vaccines produced by other rivals, and the Pentagon initiated an internal review, Reuters found.


The U.S. military is prohibited from targeting Americans with propaganda, and Reuters found no evidence the Pentagon’s influence operation did so.

Spokespeople for Trump and Biden did not respond to requests for comment about the clandestine program.

A senior Defense Department official acknowledged the U.S. military engaged in secret propaganda to disparage China’s vaccine in the developing world, but the official declined to provide details.

A Pentagon spokeswoman said the U.S. military “uses a variety of platforms, including social media, to counter those malign influence attacks aimed at the U.S., allies, and partners.” She also noted that China had started a “disinformation campaign to falsely blame the United States for the spread of COVID-19.”

In an email, the Chinese Ministry of Foreign Affairs said that it has long maintained the U.S. government manipulates social media and spreads misinformation.

Manila’s embassy in Washington did not respond to Reuters inquiries, including whether it had been aware of the Pentagon operation. A spokesperson for the Philippines Department of Health, however, said the “findings by Reuters deserve to be investigated and heard by the appropriate authorities of the involved countries.” Some aid workers in the Philippines, when told of the U.S. military propaganda effort by Reuters, expressed outrage.

Briefed on the Pentagon’s secret anti-vax campaign by Reuters, some American public health experts also condemned the program, saying it put civilians in jeopardy for potential geopolitical gain. An operation meant to win hearts and minds endangered lives, they said.

“I don’t think it’s defensible,” said Daniel Lucey, an infectious disease specialist at Dartmouth’s Geisel School of Medicine. “I’m extremely dismayed, disappointed and disillusioned to hear that the U.S. government would do that,” said Lucey, a former military physician who assisted in the response to the 2001 anthrax attacks.

The effort to stoke fear about Chinese inoculations risked undermining overall public trust in government health initiatives, including U.S.-made vaccines that became available later, Lucey and others said. Although the Chinese vaccines were found to be less effective than the American-led shots by Pfizer and Moderna, all were approved by the World Health Organization. Sinovac did not respond to a Reuters request for comment.

Academic research published recently has shown that, when individuals develop skepticism toward a single vaccine, those doubts often lead to uncertainty about other inoculations. Lucey and other health experts say they saw such a scenario play out in Pakistan, where the Central Intelligence Agency used a fake hepatitis vaccination program in Abbottabad as cover to hunt for Osama bin Laden, the terrorist mastermind behind the attacks of September 11, 2001. Discovery of the ruse led to a backlash against an unrelated polio vaccination campaign, including attacks on healthcare workers, contributing to the reemergence of the deadly disease in the country.

“It should have been in our interest to get as much vaccine in people’s arms as possible,” said Greg Treverton, former chairman of the U.S. National Intelligence Council, which coordinates the analysis and strategy of Washington’s many spy agencies. What the Pentagon did, Treverton said, “crosses a line.”

‘We were desperate’

Together, the phony accounts used by the military had tens of thousands of followers during the program. Reuters could not determine how widely the anti-vax material and other Pentagon-planted disinformation was viewed, or to what extent the posts may have caused COVID deaths by dissuading people from getting vaccinated.

In the wake of the U.S. propaganda efforts, however, then-Philippines President Rodrigo Duterte had grown so dismayed by how few Filipinos were willing to be inoculated that he threatened to arrest people who refused vaccinations.

“You choose, vaccine or I will have you jailed,” a masked Duterte said in a televised address in June 2021. “There is a crisis in this country … I’m just exasperated by Filipinos not heeding the government.”

When he addressed the vaccination issue, the Philippines had among the worst inoculation rates in Southeast Asia. Only 2.1 million of its 114 million citizens were fully vaccinated – far short of the government’s target of 70 million. By the time Duterte spoke, COVID cases exceeded 1.3 million, and almost 24,000 Filipinos had died from the virus. The difficulty in vaccinating the population contributed to the worst death rate in the region.

A spokesperson for Duterte did not make the former president available for an interview.

Some Filipino healthcare professionals and former officials contacted by Reuters were shocked by the U.S. anti-vax effort, which they say exploited an already vulnerable citizenry. Public concerns about a Dengue fever vaccine, rolled out in the Philippines in 2016, had led to broad skepticism toward inoculations overall, said Lulu Bravo, executive director of the Philippine Foundation for Vaccination. The Pentagon campaign preyed on those fears.

“Why did you do it when people were dying? We were desperate,” said Dr. Nina Castillo-Carandang, a former adviser to the World Health Organization and Philippines government during the pandemic. “We don’t have our own vaccine capacity,” she noted, and the U.S. propaganda effort “contributed even more salt into the wound.”

The campaign also reinforced what one former health secretary called a longstanding suspicion of China, most recently because of aggressive behavior by Beijing in disputed areas of the South China Sea. Filipinos were unwilling to trust China’s Sinovac, which first became available in the country in March 2021, said Esperanza Cabral, who served as health secretary under President Gloria Macapagal Arroyo. Cabral said she had been unaware of the U.S. military’s secret operation.

“I’m sure that there are lots of people who died from COVID who did not need to die from COVID,” she said.

To implement the anti-vax campaign, the Defense Department overrode strong objections from top U.S. diplomats in Southeast Asia at the time, Reuters found. Sources involved in its planning and execution say the Pentagon, which ran the program through the military’s psychological operations center in Tampa, Florida, disregarded the collateral impact that such propaganda may have on innocent Filipinos.

“We weren’t looking at this from a public health perspective,” said a senior military officer involved in the program. “We were looking at how we could drag China through the mud.”

A new disinformation war

In uncovering the secret U.S. military operation, Reuters interviewed more than two dozen current and former U.S officials, military contractors, social media analysts and academic researchers. Reporters also reviewed Facebook, X and Instagram posts, technical data and documents about a set of fake social media accounts used by the U.S. military. Some were active for more than five years.

Clandestine psychological operations are among the government’s most highly sensitive programs. Knowledge of their existence is limited to a small group of people within U.S. intelligence and military agencies. Such programs are treated with special caution because their exposure could damage foreign alliances or escalate conflict with rivals.

Over the last decade, some U.S. national security officials have pushed for a return to the kind of aggressive clandestine propaganda operations against rivals that the United States wielded during the Cold War. Following the 2016 U.S. presidential election, in which Russia used a combination of hacks and leaks to influence voters, the calls to fight back grew louder inside Washington.

In 2019, Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, Reuters reported in March. As part of that effort, a small group of operatives used bogus online identities to spread disparaging narratives about Xi Jinping’s government.

COVID-19 galvanized the drive to wage psychological operations against China. One former senior Pentagon leader described the pandemic as a “bolt of energy” that finally ignited the long delayed counteroffensive against China’s influence war.

The Pentagon’s anti-vax propaganda came in response to China’s own efforts to spread false information about the origins of COVID. The virus first emerged in China in late 2019. But in March 2020, Chinese government officials claimed without evidence that the virus may have been first brought to China by an American service member who participated in an international military sports competition in Wuhan the previous year. Chinese officials also suggested that the virus may have originated in a U.S. Army research facility at Fort Detrick, Maryland. There’s no evidence for that assertion.

Mirroring Beijing’s public statements, Chinese intelligence operatives set up networks of fake social media accounts to promote the Fort Detrick conspiracy, according to a U.S. Justice Department complaint.

China’s messaging got Washington’s attention. Trump subsequently coined the term “China virus” as a response to Beijing’s accusation that the U.S. military exported COVID to Wuhan.

“That was false. And rather than having an argument, I said, ‘I have to call it where it came from,’” Trump said in a March 2020 news conference. “It did come from China.”

China’s Foreign Ministry said in an email that it opposed “actions to politicize the origins question and stigmatize China.” The ministry had no comment about the Justice Department’s complaint.

Beijing didn’t limit its global influence efforts to propaganda. It announced an ambitious COVID assistance program, which included sending masks, ventilators and its own vaccines – still being tested at the time – to struggling countries. In May 2020, Xi announced that the vaccine China was developing would be made available as a “global public good,” and would ensure “vaccine accessibility and affordability in developing countries.” Sinovac was the primary vaccine available in the Philippines for about a year until U.S.-made vaccines became more widely available there in early 2022.

Washington’s plan, called Operation Warp Speed, was different. It favored inoculating Americans first, and it placed no restrictions on what pharmaceutical companies could charge developing countries for the remaining vaccines not used by the United States. The deal allowed the companies to “play hardball” with developing countries, forcing them to accept high prices, said Lawrence Gostin, a professor of medicine at Georgetown University who has worked with the World Health Organization.

The deal “sucked most of the supply out of the global market,” Gostin said. “The United States took a very determined America First approach.”

To Washington’s alarm, China’s offers of assistance were tilting the geopolitical playing field across the developing world, including in the Philippines, where the government faced upwards of 100,000 infections in the early months of the pandemic.

The U.S. relationship with Manila had grown tense after the 2016 election of the bombastic Duterte. A staunch critic of the United States, he had threatened to cancel a key pact that allows the U.S. military to maintain legal jurisdiction over American troops stationed in the country.

Duterte said in a July 2020 speech he had made “a plea” to Xi that the Philippines be at the front of the line as China rolled out vaccines. He vowed in the same speech that the Philippines would no longer challenge Beijing’s aggressive expansion in the South China Sea, upending a key security understanding Manila had long held with Washington.

“China is claiming it. We are claiming it. China has the arms, we do not have it.” Duterte said. “So, it is simple as that.”

Days later, China’s foreign minister announced Beijing would grant Duterte’s plea for priority access to the vaccine, as part of a “new highlight in bilateral relations.”

China’s growing influence fueled efforts by U.S. military leaders to launch the secret propaganda operation Reuters uncovered.

“We didn’t do a good job sharing vaccines with partners,” a senior U.S. military officer directly involved in the campaign in Southeast Asia told Reuters. “So what was left to us was to throw shade on China’s.”

Military trumped diplomats

U.S. military leaders feared that China’s COVID diplomacy and propaganda could draw other Southeast Asian countries, such as Cambodia and Malaysia, closer to Beijing, furthering its regional ambitions.

A senior U.S. military commander responsible for Southeast Asia, Special Operations Command Pacific General Jonathan Braga, pressed his bosses in Washington to fight back in the so-called information space, according to three former Pentagon officials.

The commander initially wanted to punch back at Beijing in Southeast Asia. The goal: to ensure the region understood the origin of COVID while promoting skepticism toward what were then still-untested vaccines offered by a country that they said had lied continually since the start of the pandemic.

A spokesperson for Special Operations Command declined to comment.

At least six senior State Department officials responsible for the region objected to this approach. A health crisis was the wrong time to instill fear or anger through a psychological operation, or psyop, they argued during Zoom calls with the Pentagon.

“We’re stooping lower than the Chinese and we should not be doing that,” said a former senior State Department official for the region who fought against the military operation.

While the Pentagon saw Washington’s rapidly diminishing influence in the Philippines as a call to action, the withering partnership led American diplomats to plead for caution.

“The relationship is hanging from a thread,” another former senior U.S. diplomat recounted. “Is this the moment you want to do a psyop in the Philippines? Is it worth the risk?”

In the past, such opposition from the State Department might have proved fatal to the program. Previously in peacetime, the Pentagon needed approval of embassy officials before conducting psychological operations in a country, often hamstringing commanders seeking to quickly respond to Beijing’s messaging, three former Pentagon officials told Reuters.

But in 2019, before COVID surfaced in full force, then-Secretary of Defense Mark Esper signed a secret order that later paved the way for the launch of the U.S. military propaganda campaign. The order elevated the Pentagon’s competition with China and Russia to the priority of active combat, enabling commanders to sidestep the State Department when conducting psyops against those adversaries. The Pentagon spending bill passed by Congress that year also explicitly authorized the military to conduct clandestine influence operations against other countries, even “outside of areas of active hostilities.”

Esper, through a spokesperson, declined to comment. A State Department spokesperson referred questions to the Pentagon.

U.S. propaganda machine

In spring 2020, special-ops commander Braga turned to a cadre of psychological-warfare soldiers and contractors in Tampa to counter Beijing’s COVID efforts. Colleagues say Braga was a longtime advocate of increasing the use of propaganda operations in global competition. In trailers and squat buildings at a facility on Tampa’s MacDill Air Force Base, U.S. military personnel and contractors would use anonymous accounts on X, Facebook and other social media to spread what became an anti-vax message. The facility remains the Pentagon’s clandestine propaganda factory.

Psychological warfare has played a role in U.S. military operations for more than a hundred years, although it has changed in style and substance over time. So-called psyopers were best known following World War II for their supporting role in combat missions across Vietnam, Korea and Kuwait, often dropping leaflets to confuse the enemy or encourage their surrender.

After the al Qaeda attacks of 2001, the United States was fighting a borderless, shadowy enemy, and the Pentagon began to wage a more ambitious kind of psychological combat previously associated only with the CIA. The Pentagon set up front news outlets, paid off prominent local figures, and sometimes funded television soap operas in order to turn local populations against militant groups or Iranian-backed militias, former national security officials told Reuters.

Unlike earlier psyop missions, which sought specific tactical advantage on the battlefield, the post-9/11 operations hoped to create broader change in public opinion across entire regions.

By 2010, the military began using social media tools, leveraging phony accounts to spread messages of sympathetic local voices – themselves often secretly paid by the United States government. As time passed, a growing web of military and intelligence contractors built online news websites to pump U.S.-approved narratives into foreign countries. Today, the military employs a sprawling ecosystem of social media influencers, front groups and covertly placed digital advertisements to influence overseas audiences, according to current and former military officials.

China’s efforts to gain geopolitical clout from the pandemic gave Braga justification to launch the propaganda campaign that Reuters uncovered, sources said.

Pork in the vaccine?

By summer 2020, the military’s propaganda campaign moved into new territory and darker messaging, ultimately drawing the attention of social media executives.

In regions beyond Southeast Asia, senior officers in the U.S. Central Command, which oversees military operations across the Middle East and Central Asia, launched their own version of the COVID psyop, three former military officials told Reuters.

Although the Chinese vaccines were still months from release, controversy roiled the Muslim world over whether the vaccines contained pork gelatin and could be considered “haram,” or forbidden under Islamic law. Sinovac has said that the vaccine was “manufactured free of porcine materials.” Many Islamic religious authorities maintained that even if the vaccines did contain pork gelatin, they were still permissible since the treatments were being used to save human life.

The Pentagon campaign sought to intensify fears about injecting a pig derivative. As part of an internal investigation at X, the social media company used IP addresses and browser data to identify more than 150 phony accounts that were operated from Tampa by U.S. Central Command and its contractors, according to an internal X document reviewed by Reuters.

“Can you trust China, which tries to hide that its vaccine contains pork gelatin and distributes it in Central Asia and other Muslim countries where many people consider such a drug haram?” read an April 2021 tweet sent from a military-controlled account identified by X.

The Pentagon also covertly spread its messages on Facebook and Instagram, alarming executives at parent company Meta who had long been tracking the military accounts, according to former military officials.

One military-created meme targeting Central Asia showed a pig made out of syringes, according to two people who viewed the image. Reuters found similar posts that traced back to U.S. Central Command. One shows a Chinese flag as a curtain separating Muslim women in hijabs and pigs stuck with vaccine syringes. In the center is a man with syringes; on his back is the word “China.” It targeted Central Asia, including Kazakhstan, Kyrgyzstan and Uzbekistan, a country that distributed tens of millions of doses of China’s vaccines and participated in human trials. Translated into English, the X post reads: “China distributes a vaccine made of pork gelatin.”

Facebook executives had first approached the Pentagon in the summer of 2020, warning the military that Facebook workers had easily identified the military’s phony accounts, according to three former U.S. officials and another person familiar with the matter. The government, Facebook argued, was violating Facebook’s policies by operating the bogus accounts and by spreading COVID misinformation.

The military argued that many of its fake accounts were being used for counterterrorism and asked Facebook not to take down the content, according to two people familiar with the exchange. The Pentagon pledged to stop spreading COVID-related propaganda, and some of the accounts continued to remain active on Facebook.

Nonetheless, the anti-vax campaign continued into 2021 as Biden took office.

Angered that military officials had ignored their warning, Facebook officials arranged a Zoom meeting with Biden’s new National Security Council shortly after the inauguration, Reuters learned. The discussion quickly became tense.

“It was terrible,” said a senior administration official describing the reaction after learning of the campaign’s pig-related posts. “I was shocked. The administration was pro-vaccine and our concern was this could affect vaccine hesitancy, especially in developing countries.”

By spring 2021, the National Security Council ordered the military to stop all anti-vaccine messaging. “We were told we needed to be pro-vaccine, pro all vaccines,” said a former senior military officer who helped oversee the program. Even so, Reuters found some anti-vax posts that continued through April and other deceptive COVID-related messaging that extended into that summer. Reuters could not determine why the campaign didn’t end immediately with the NSC’s order. In response to questions from Reuters, the NSC declined to comment.

The senior Defense Department official said that those complaints led to an internal review in late 2021, which uncovered the anti-vaccine operation. The probe also turned up other social and political messaging that was “many, many leagues away” from any acceptable military objective. The official would not elaborate.

The review intensified the following year, the official said, after a group of academic researchers at Stanford University flagged some of the same accounts as pro-Western bots in a public report. The high-level Pentagon review was first reported by the Washington Post, which also reported that the military used fake social media accounts to counter China’s message that COVID came from the United States. But the Post report did not reveal that the program evolved into the anti-vax propaganda campaign uncovered by Reuters.

The senior defense official said the Pentagon has rescinded parts of Esper’s 2019 order that allowed military commanders to bypass the approval of U.S. ambassadors when waging psychological operations. The rules now mandate that military commanders work closely with U.S. diplomats in the country where they seek to have an impact. The policy also restricts psychological operations aimed at “broad population messaging,” such as those used to promote vaccine hesitancy during COVID.

The Pentagon’s audit concluded that the military’s primary contractor handling the campaign, General Dynamics IT, had employed sloppy tradecraft, taking inadequate steps to hide the origin of the fake accounts, said a person with direct knowledge of the review. The review also found that military leaders didn’t maintain enough control over their psyop contractors, the person said.

A spokesperson for General Dynamics IT declined to comment.

Nevertheless, the Pentagon’s clandestine propaganda efforts are set to continue. In an unclassified strategy document last year, top Pentagon generals wrote that the U.S. military could undermine adversaries such as China and Russia using “disinformation spread across social media, false narratives disguised as news, and similar subversive activities [to] weaken societal trust by undermining the foundations of government.”

And in February, the contractor that worked on the anti-vax campaign – General Dynamics IT – won a $493 million contract. Its mission: to continue providing clandestine influence services for the military.

Permalink
June 15, 2024 at 1:13:18 PM GMT+2

How a Samsung Washing Machine Chime Triggered a YouTube Copyright Fiasco | WIREDhttps://www.wired.com/story/youtube-content-id-samsung-washing-machine-chime-demonetize/

  • Copyright
  • Big Corpo

How a Samsung Washing Machine Chime Triggered a YouTube Copyright Fiasco

When YouTube’s Content ID system goes wrong, it goes very, very wrong.

Ashley Belanger, Ars Technica Culture News Jun 1, 2024 11:30 AM

YouTube’s Content ID system—which automatically detects content registered by rights holders—is “completely fucking broken,” a YouTuber called “Albino” declared in a rant on the social media site X that has been viewed more than 950,000 times.

Albino, who is also a popular Twitch streamer, complained that his YouTube video playing through Fallout was demonetized because a Samsung washing machine randomly chimed to signal a laundry cycle had finished while he was streaming.

Apparently, YouTube had automatically scanned Albino's video and detected the washing machine chime as a song called “Done”—which Albino quickly saw was uploaded to YouTube by a musician known as Audego nine years ago.

But when Albino hit Play on Audego's song, the only thing that he heard was a 30-second clip of the washing machine chime. To Albino it was obvious that Audego didn't have any rights to the jingle, which Dexerto reported actually comes from the song "Die Forelle" (“The Trout”) from Austrian composer Franz Schubert.

The song was composed in 1817 and is in the public domain. Samsung has used it to signal the end of a wash cycle for years, sparking debate over whether it's the catchiest washing machine song and inspiring at least one violinist to perform a duet with her machine. It's been a source of delight for many Samsung customers, but for Albino, hearing the jingle appropriated on YouTube only inspired ire.

"A guy recorded his fucking washing machine and uploaded it to YouTube with Content ID," Albino said in a video on X. "And now I'm getting copyright claims" while "my money" is "going into the toilet and being given to this fucking slime."

Albino suggested that YouTube had potentially allowed Audego to make invalid copyright claims for years without detecting the seemingly obvious abuse.

"How is this still here?" Albino asked. "It took me one Google search to figure this out," and "now I'm sharing revenue with this? That's insane."

At first, Team YouTube gave Albino a boilerplate response on X, writing, "We understand how important it is for you. From your vid, it looks like you've recently submitted a dispute. When you dispute a Content ID claim, the person who claimed your video (the claimant) is notified and they have 30 days to respond."

Albino expressed deep frustration at YouTube's response, given how "egregious" he considered the copyright abuse to be.

"Just wait for the person blatantly stealing copyrighted material to respond," Albino responded to YouTube. "Ah, OK, yes, I'm sure they did this in good faith and will make the correct call, though it would be a shame if they simply clicked ‘reject dispute,’ took all the ad revenue money and forced me to risk having my channel terminated to appeal it!! XDxXDdxD!! Thanks Team YouTube!"

Soon after, YouTube confirmed on X that Audego's copyright claim was indeed invalid. The social platform ultimately released the claim and told Albino to expect the changes to be reflected on his channel within two business days.

Ars could not immediately reach YouTube or Albino for comment.

Widespread Abuse of Content ID Continues

YouTubers have complained about abuse of Content ID for years. Techdirt's Timothy Geigner agreed with Albino's assessment that the YouTube system is "hopelessly broken," noting that sometimes content is flagged by mistake. But just as easily, bad actors can abuse the system to claim "content that simply isn’t theirs" and sometimes seize millions in ad revenue.

In 2021, YouTube announced that it had invested "hundreds of millions of dollars" to create content management tools, of which Content ID quickly emerged as the platform's go-to solution to detect and remove copyrighted materials.

At that time, YouTube claimed that Content ID was created as a "solution for those with the most complex rights management needs," like movie studios and record labels whose movie clips and songs are most commonly uploaded by YouTube users. YouTube warned that without Content ID, "rights holders could have their rights impaired and lawful expression could be inappropriately impacted."

Since its rollout, more than 99 percent of copyright actions on YouTube have consistently been triggered automatically through Content ID.

And just as consistently, YouTube has seen widespread abuse of Content ID, terminating "tens of thousands of accounts each year that attempt to abuse our copyright tools," YouTube said. YouTube also acknowledged in 2021 that "just one invalid reference file in Content ID can impact thousands of videos and users, stripping them of monetization or blocking them altogether."

To help rights holders and creators track how much copyrighted content is removed from the platform, YouTube started releasing biannual transparency reports in 2021. The Electronic Frontier Foundation, a nonprofit digital rights group, applauded YouTube's "move towards transparency" while criticizing the company's claim that it adequately protects its creators.

"That rings hollow," the EFF reported in 2021, noting that "huge conglomerates have consistently pushed for more and more restrictions on the use of copyrighted material, at the expense of fair use and, as a result, free expression." As the EFF saw it then, YouTube's Content ID system mainly served to appease record labels and movie studios, while creators felt "pressured" not to dispute Content ID claims out of "fear" that their channel might be removed if YouTube consistently sided with rights holders.

According to YouTube, "it’s impossible for matching technology to take into account complex legal considerations like fair use or fair dealing," and that impossibility seemingly ensures that creators bear the brunt of automated actions even when it's fair to use copyrighted materials.

At that time, YouTube described Content ID as "an entirely new revenue stream from ad-supported, user generated content" for rights holders, who made more than $5.5 billion from Content ID matches by December 2020. More recently, YouTube reported that figure climbed above $9 billion, as of December 2022. With so much money at play, it's easy to see how the system could be seen as disproportionately favoring rights holders, while creators continue to suffer from income diverted by the automated system.

Despite YouTubers' ongoing frustrations, not much has changed with YouTube's Content ID system over the years. The language used in YouTube's most recent transparency report is largely a direct copy of the original report from 2021.

And while YouTube claims that the Content ID match technology should be "continually" adapted to sustain a "balanced ecosystem," the few most recent updates YouTube announced in 2022 didn't seem to do much to help creators dispute invalid claims.

"We’ve heard the Content ID Dispute process is top of mind for many of you," YouTube wrote in 2022. "You've shared that the process can take too long and can have long-term impact on your channel, specifically when claims result in viewing restrictions or monetization impact."

To address this, YouTube did not expedite the dispute process, which still allows up to 30 days for rights holders to respond. Instead, it expedited the appeals process, which happens after a rights holder rejects a disputed claim and arguably is the moment when the YouTuber's account is most in danger of being terminated.

"Now, the claimant will have 7 days instead of 30 to review the appeal before deciding whether to request a takedown of the video, release the claim, or let it expire," YouTube wrote in 2022. "We hope shortening the timespan of the appeals process helps you get claims resolved much faster!"
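The two windows described above (30 days for a rights holder to respond to a dispute, and, since the 2022 update, 7 days to review an appeal) can be sketched as a small deadline calculator. This is a hypothetical helper for illustration only, not anything in YouTube's actual tooling:

```python
from datetime import date, timedelta

# Response windows as described in YouTube's 2022 update:
# 30 days for a claimant to respond to a dispute,
# 7 days to review an appeal before requesting a takedown.
DISPUTE_WINDOW_DAYS = 30
APPEAL_WINDOW_DAYS = 7

def claimant_deadlines(dispute_filed, appeal_filed=None):
    """Return the latest dates by which the claimant must act."""
    deadlines = {
        "dispute_response": dispute_filed + timedelta(days=DISPUTE_WINDOW_DAYS)
    }
    if appeal_filed is not None:
        deadlines["appeal_review"] = appeal_filed + timedelta(days=APPEAL_WINDOW_DAYS)
    return deadlines

if __name__ == "__main__":
    # A dispute filed June 1 can sit unanswered until July 1; if the claim
    # is rejected and appealed July 1, review is due by July 8.
    print(claimant_deadlines(date(2024, 6, 1), appeal_filed=date(2024, 7, 1)))
```

The asymmetry is visible in the numbers: the creator-initiated dispute still carries the full 30-day wait, while only the later appeal stage was shortened.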

This update would only help YouTubers intent on disputing claims, like Albino was, but not the majority of YouTubers, whom the EFF reported were seemingly so intimidated by disputing Content ID claims that they more commonly just accepted "whatever punishment the system has levied against them." The EFF summarized the predicament that many YouTubers remain stuck in today:

There is a terrible, circular logic that traps creators on YouTube. They cannot afford to dispute Content ID matches because that could lead to DMCA notices. They cannot afford DMCA notices because those lead to copyright strikes. They cannot afford copyright strikes because that could lead to a loss of their account. They cannot afford to lose their account because they cannot afford to lose access to YouTube’s giant audience. And they cannot afford to lose access to that audience because they cannot count on making money from YouTube’s ads alone, partially because Content ID often diverts advertising money to rights holders when there is Content ID match. Which they cannot afford to dispute.

For Albino, who said he has fought back against many Content ID claims, the Samsung washing machine chime triggering demonetization seemed to be the final straw, breaking his patience with YouTube's dispute process.

"It's completely out of hand," Albino wrote on X.

Katharine Trendacosta, a YouTube researcher and the EFF's director of policy and advocacy, agreed with Albino, telling Ars that YouTube's Content ID system has not gotten any better over the years: “It's worse, and it's intentionally opaque and made to be incredibly difficult to navigate" for creators.

"I don't know any YouTube creator who's happy with the way Content ID works," Trendacosta told Ars.

But while many people think that YouTube's system isn't great, Trendacosta also said that she "can't think of a way to build the match technology" to improve it, because "machines cannot tell context." Perhaps if YouTube's matching technology triggered a human review each time, "that might be tenable," but "they would have to hire so many more people to do it."

What YouTube could be doing is updating its policies to make the dispute process less intimidating to content creators, though, Trendacosta told Ars. Right now, the bigger problem for creators, Trendacosta said her research has shown, is not how long it takes for YouTube to work out the dispute process but "the way YouTube phrases the dispute process to discourage you from disputing."

"The system is so discouraging," Trendacosta told Ars, with YouTube warning YouTubers that initiating a dispute could result in a copyright strike that terminates their accounts. "What it ends up doing is making them go, 'You know what, I'll eat it, whatever.'"

YouTube, which has previously dismissed complaints about the Content ID tool by saying "no system is perfect," did not respond to Ars' request for comment on whether any updates to the tool might be coming that might benefit creators. Instead, YouTube's plan seems to be to commiserate with users who likely can't afford to leave the platform over their concerns.

"Totally understand your frustration," Team YouTube told Albino on X.

This story originally appeared on Ars Technica.

Permalink
June 13, 2024 at 3:54:48 PM GMT+2

Amazon buys nuclear-powered data centre from Talenhttps://www.neimagazine.com/news/amazon-buys-nuclear-powered-data-centre-from-talen-11597723/

  • Big Corpo
  • Artificial intelligence
  • AWS
  • Nuclear

Amazon buys nuclear-powered data centre from Talen

US-based Talen Energy Corporation has sold its Cumulus data centre campus in Pennsylvania to Amazon subsidiary Amazon Web Services (AWS) for $650m. This includes a long-term agreement to provide power from Talen's Susquehanna NPP. The adjacent 2,500 MWe Susquehanna Steam Electric Station currently supplies power to the data centre.

13 March 2024

The $650m will be paid in stages – $350m on closing and $300m to be released on the attainment of development milestones later this year. Talen will also receive additional revenue from AWS related to sales of Susquehanna's energy to the grid.

“We believe this is a transformative transaction with long term benefits,” said Talen President & CEO Mark “Mac” McFarland, in a call with investors and media. As power demand continues to rise worldwide, “data centres are at the heart of that growth,” he noted.

Texas-based Talen is the majority owner and operator of the Susquehanna plant, with 90% owned and operated by Talen subsidiary Susquehanna Nuclear. Allegheny Electric owns the other 10%. The plant’s two General Electric boiling water reactors began operation in 1983 and are licensed to operate until 2042 and 2044. In 2022, Talen filed for Chapter 11 bankruptcy as part of a financial restructuring, exiting bankruptcy in 2023. The transaction with AWS is expected to boost its cash flow. After paying off debts, interest and other costs, Talen expects net proceeds of $361m from the deal.

The Cumulus campus is directly connected to the NPP. The data centre's four substations have a combined potential of 960 MW of redundant capacity. This includes 200 MW currently associated with the Nautilus cryptocurrency facility, in which Talen will retain its 75% interest. A further 240 MW of redundant capacity for data centres is expected to be ready this year. The campus has a "robust and redundant" fibre network.

According to Talen Energy’s investor presentation, it will supply fixed-price nuclear power to AWS’s new data centre as it is built. AWS has minimum contractual power commitments increasing in 120 MW increments over several years. AWS has a one-time option to cap commitments at 480 MW and two 10-year extension options tied to nuclear licence renewals.

"Power demand is growing for the first time in years, and AI and data centres are at the heart of that growth," McFarland said. "Data from the International Energy Agency suggests that energy demand from data centres, AI and cryptocurrencies could more than double over the next three years."

He added that the transaction will benefit the wider community by creating jobs and catalysing economic development as well as strengthening the Susquehanna plant itself as a major employer and significant taxpayer.

Permalink
June 12, 2024 at 9:31:32 PM GMT+2

In China, the Internet Is Disappearing | Le Grand Continenthttps://legrandcontinent.eu/fr/2024/06/02/en-chine-internet-est-en-train-de-disparaitre/

  • Datafication
  • Cultural Downfall
  • Data durability

In China, the Internet Is Disappearing

"If the Internet is the future of civilization, then our generation will have no history, because the Internet will have left no trace of us." We long believed the Internet would be the most powerful of libraries: humanity's exhaustive memory. The Chinese web, whose content is now mostly produced and consumed on phones, proves the opposite: the Internet archives nothing. He Jiayan dissects a revolution whose consequences are as violent as they are hidden.

June 2, 2024 • Digital • Author: Le Grand Continent

"We long thought of the Internet as a memory; we did not know it was a goldfish's." In a single decade, even as it became ever more universal in its number of users, the Internet lost more than a third of its content.

For China, this archival collapse is even greater. He Jiayan, a Chinese journalist specializing in the web and the new-technology sector, works from computer archives dating back to before the mobile Internet. He shows that China's political hardening is far from the leading cause of the disappearance of whole swaths of the web. Self-censorship and the race for profitability have quietly driven an enormous quantity of information off the servers. At a time when the mobile Internet seems to be replacing the first age of the Internet, this disappearance is barely visible; it is nonetheless entirely real.

Beyond the technical dimension, Jiayan warns of the disappearance of a shared memory that underpins a still-living popular culture, entrusted to servers we believed immortal and now threatening to collapse. For an entire generation that built its life immaterially on the Internet of the first twenty years of the 21st century, the very concrete risk is ending up with two decades without a memory.

Let's start with a little experiment.

He Jiayan

If you search for "Jack Ma" on Baidu with a reference period running from 1998 to 2005, how many results will appear? Closer to 100 million, 10 million, or 1 million?

I have put the question to several groups, and the majority opinion is always that the order of magnitude lies somewhere between a few million and a few tens of millions. After all, the Internet is so vast. Jack Ma, the colorful former entrepreneur, must have left plenty of traces on it.

In reality, for a Baidu search restricted to the date range "May 22, 1998 to May 22, 2005," the total number of results containing the phrase "Jack Ma" is 1.

And that lone result is itself wrong: clicking on it reveals that the article was actually published in 2021, outside the period set above. Why it shows up in our search at all is inexplicable.

In other words, if you want to know about Jack Ma's experiences, his relationships, the discussions about him, his speeches, or the history of his company Alibaba during that period, the amount of valid raw information the Internet yields is zero.

Perhaps it is a problem with Baidu? Perhaps a search on Bing or Google would be more conclusive?

I tried: both sites do return valid results, slightly more than Baidu, but only a dozen or so. There are also more invalid results falling outside the selected period, probably for purely technical reasons.

One might then wonder whether these results are unavailable simply because Jack Ma is controversial in China.

That would hold if it were only the case for Jack Ma. But I also ran searches over the same period for Ma Huateng, Lei Jun, Ren Zhengfei, and even Luo Yonghao and Sister Fu Rong, who were Internet celebrities at the time, as well as Jay Chou and Li Yuchun, then music stars. The results are of the same order.

He Jiayan cites famous Chinese businessmen belonging to the BATX, the Chinese equivalent of the GAFA: Jack Ma is the famous founder of Alibaba, who abruptly vanished from public view and is now exiled outside China; Ren Zhengfei is the founder and CEO of Huawei Technologies; Ma Huateng, that of Tencent Holdings; and Lei Jun, the founder of Xiaomi. Luo Yonghao and Sister Fu Rong were well-known Chinese bloggers in the 2000s.

In the case of the search on Lei Jun, after testing different websites, different names, and different periods, I discovered an astonishing phenomenon. Almost all the archives of the Chinese websites that were popular at the time, such as NetEase, Sohu, SMTH BBS, Xizi Hutong, Keyhole Forum, Tianya Club, RenRen Network, Sina Blogs, and Baidu Tieba, have disappeared. A huge amount of content on personal websites has likewise vanished for the earlier period, and most of those sites have disappeared for every period. The only exception is Sina.com, where information more than ten years old can still be found, though in fairly small quantities. More than 99.9999% of the content is gone.
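Readers can probe this kind of disappearance themselves: the Internet Archive exposes a public availability endpoint (`https://archive.org/wayback/available?url=...`) that reports the closest archived snapshot of a given page, if any exists. A minimal stdlib-only sketch (the helper names are mine):

```python
import json
import urllib.parse
import urllib.request

# Public Wayback Machine availability endpoint (Internet Archive).
WAYBACK_API = "https://archive.org/wayback/available"

def availability_url(page_url):
    """Build the availability query URL for a given page."""
    return WAYBACK_API + "?" + urllib.parse.urlencode({"url": page_url})

def latest_snapshot(page_url, timeout=10.0):
    """Return (snapshot_url, timestamp) of the closest archive, or None."""
    with urllib.request.urlopen(availability_url(page_url), timeout=timeout) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return (snap["url"], snap["timestamp"]) if snap else None

if __name__ == "__main__":
    # Network call; wrapped so a failed lookup doesn't crash the demo.
    try:
        print(latest_snapshot("http://www.tianya.cn/"))
    except OSError as exc:
        print("lookup failed:", exc)
```

For many once-popular pages of the early Chinese web, the endpoint returns no snapshot at all, which is the point of the article: when neither the origin server nor an archive holds a copy, the content is simply gone.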

We are overlooking a worrying problem: the Chinese Internet is collapsing inexorably, and the content that existed before the rise of the mobile Internet has now almost completely disappeared.

We long thought of the Internet as a memory; we did not know it was a goldfish's.

Over the past two years I have reached a very clear conviction: the amount of information that can be found online is shrinking every year at a dizzying pace. Not so long ago, I could still track down certain original sources. That gradually became impossible. I could still find the protagonists' speeches or the articles they had written; then I could no longer find them. I could still watch many interviews or videos online; they were progressively erased.

It is as if a monster of a new kind had appeared, one that devours web pages across history, swallowing them from the past toward the present, first in small bites, then in big ones, consuming the entire Chinese Internet, five years here, ten years there.

Looking back, we will find that everything that existed on the Chinese Internet before mobile, whether a portal, an organization's official site, a personal web page, public forums, Sina blogs, Baidu posts, documents, photos, music, videos and so on, will have disappeared.

I remember that more than ten years ago I would switch computers without a second thought, because my photos and my zipped articles were stored on a server. A few years later, I discovered that the entire server had disappeared. I used to use Hotmail, and then everything disappeared. I also wrote posts on RenRen and MySpace… all gone.

We thought the Internet could preserve everything. It turns out it can preserve nothing.

Why did this happen?

I think there are two main reasons.

The first is economic.

A website requires bandwidth, a server room, staff to run and maintain it, and a non-negligible amount of regulatory and miscellaneous upkeep costs. If there is strategic value (for a company, say, publicizing its activities) or short-term traffic value (people still drop by the site from time to time), and if at the same time the company's books are not in bad shape, then there is a reason to keep the site alive.

But if the company is on a downward slope and has run out of money, the entire site it operates will simply die. Renren is a typical example.

From an operational standpoint, if a web page draws only a handful of visitors all year, it becomes a burden for the company, and the economically rational move is to delete it, even if the company has no financial trouble. Sohu's early years; the NetEase site, whose content has essentially been lost; and the collective disappearance of the forums hosted on Tianya are good examples.

Second, regulatory reasons.

In general, Internet regulation tightens progressively and becomes ever stricter. Content that could once exist legally no longer meets regulatory requirements, and what could once live in the gray zone has since tipped into the red zone. All such content is simply deleted.

There are also cases where the polarization of public opinion has intensified over time and content that was once "perfectly normal" has become highly sensitive. Even if it is not illegal, it can inflame conflict and sow confusion, so the regulator may demand its removal.

Beyond the official authorities, angry netizens sometimes act as moderators or, conversely, as opinion-makers. They can drag out of obscurity something someone posted in passing more than ten years ago, latch onto it, and cyber-harass its author into "social death."

In China, the most significant effect of regulation is therefore not so much what the regulators do, or the attacks of angry netizens, as the "self-censorship" they induce within companies and among individuals.

No one knows whether a piece of content on a website, or a word someone once uttered, will spell that person's doom years later. The best way to survive is therefore to remove all of these potential "time bombs," that is, to shut the website down or delete all of its content.

Of course, many other causes can explain the disappearance of old web pages.

Shortly after the breakup of the former Yugoslavia, all web content hosted under the international domain ".yu" (short for Yugoslavia) disappeared. Another example is the disappearance of music and film sites that were once readily available for downloading, as copyright protection was strengthened. And some organizations and individuals, for purely personal reasons, sometimes simply no longer want their information shown to the public, so they close their official websites or personal home pages.

But these causes are secondary and localized.

The systematic, large-scale disappearance of Internet content as a whole is mainly due to economic laws and to self-censorship.

At bottom, it is as if the content of the Internet, like life itself, were governed by the theory of evolution. It has a single criterion of existence: to attract the most attention possible at the lowest possible cost.

When a piece of content can attract enough attention, and the cost of keeping it up (including the economic cost, the regulatory cost, and the cost of fighting regulation) is low, that content has a chance of surviving on the Internet. It will likely even change medium, moving from text to image, from still image to animation, from animation to video, and in the future perhaps from two-dimensional video to three-dimensional holographic video, and so on. The platform carrying it will change too: from portals to forums, to personal blogs, to microblogging, and in the future perhaps to some platform we know nothing about yet.

When a piece of content can no longer attract enough attention, or the cost of maintaining it is too high, it will disappear from the Internet. The collective disappearance of the traditional Internet, with computers as browsing terminals and web pages as the medium, is simply the inevitable result of this "evolutionary competition for information."

Darwin taught us that the key to biological evolution was "natural selection, survival of the fittest." The key to the evolution of Internet content is "competition for information, selection by attention." Because of network effects, this competition is ten thousand times fiercer than in nature, ten thousand times crueler. The traditional Internet will not bring about the extinction of a single species, but the extinction of nearly all content.

With each new generation of the Internet, the previous generation, bound to an obsolete structure, will collapse. That is the destiny of every website and of all their content.

If the Internet is the future of civilization, then our generation will have no history, because the Internet will have left no trace of us.

« Pas d’histoire ». Est-ce si important ?

Bien sûr que oui.

Pour écrire un article sur Shao Yibo, j’ai essayé par tous les moyens de mettre la main sur la vidéo originale de la participation de Shao Yibo à l’émission « Boshi Tang » en 2007 ainsi que les posts de sa femme, Bao Jiaxin, sur le site Baby Tree, postés depuis quelques années sous le pseudonyme de « Wen Ai Mummy ». Je ne suis pas parvenu à les retrouver — et je ne peux que le regretter.

Bien que l’article « Red Dust Has Forgotten Shao Yibo » soit toujours très populaire — avec plus de 700 000 lecteurs et 20 000 retweets en seulement une semaine — je suis presque sûr que j’ai dû passer à côté d’informations très importantes et que la qualité de l’article aurait été meilleure si j’avais eu accès à de telles informations.

Vous vous dites peut-être : « cela n’est utile qu’aux chercheurs et aux rédacteurs comme He Jiayan ; je n’écris pas d’articles de ce genre, et ce n’est pas comme si cela me concernait. »

Vraiment ?

Si nous ne pouvons plus avoir accès à tous les discours de Jack Ma, à tous les articles de Ren Zhengfei, aux films My Father and Mother et The Spring River Flows East, ni à tous les messages de Duan Yongping sur Snowball, ne seriez-vous pas un peu triste ?

He Jiayan fait ici référence à des lieux communs de la culture populaire chinoise. Ren Zhengfei est le directeur général de Huawei, My Father and Mother est un film romantique sorti en 2013 et The Spring River Flows East un film datant de 1947 considéré comme un classique du cinéma chinois.

Vous me direz que vous êtes insensibles.

Alors, si nous ne pouvons plus chercher le numéro de Huang Zheng, si nous ne pouvons plus voir les messages de Zhang Yiming ou de Wang Xing, est-ce que vous n’éprouverez pas un peu de regret ?

Vous m’assurerez que vous ne vous sentez pas désolé non plus.

Si un jour, Zhihu disparaît comme Tianya Forum, Douban s’efface comme RenRen, B-site se vide comme Sina Blog — ne ressentirez-vous pas un peu de chagrin ?

Si un jour, les pages Internet de votre blogueur préféré affichent que « l’auteur a choisi de n’afficher que la moitié des posts de l’année » ou que « ce blog n’est plus visible », si vous lisez souvent que « ce compte a été bloqué », que « le contenu ne peut être affiché », si vous recherchez certaines informations dans Shake Voice ou Xiaohongshu, et que les résultats affichent que « l’auteur a effacé tout le contenu »…

Cela ne vous attristera-t-il pas — ne serait-ce qu’un instant ?

Les générations de l’Internet traditionnel, nées dans les années 1970 et 1980, ne peuvent plus retrouver leur histoire. Leurs traces ont pratiquement disparu.


La nouvelle génération peut encore garder les messages de ses cercles d’amis privés, mais même ces cercles sont, de plus en plus, « visibles seulement trois jours ». Messages éphémères… jusqu’à ce que tout s’efface.

La seule chose qui produit encore du contenu de manière frénétique, c’est le marketing en cascade.

Mais à l’avenir, il y a fort à parier que même ces messages marketing finiront par disparaître.

Si quelque chose est important pour nous et qu’il est en train de disparaître, existe-t-il un moyen de le sauver ?

Certains ont essayé de le faire.

Il existe aux États-Unis un site web appelé Internet Archive, qui se traduit en chinois par « 互联网档案馆 » et qui préserve un grand nombre de pages web originales. Mais pour l’avoir essayé, je peux dire que très peu de pages web originales en chinois y sont sauvegardées. L’utilisation en est très difficile, les fonctionnalités de recherche assez primitives et inefficaces. En définitive, quantitativement, le site n’a pas permis de sauvegarder grand-chose.

D’un point de vue technique, il ne devrait pas être difficile de sauvegarder toutes les pages web de l’Internet chinois jusqu’à l’essor de l’Internet mobile au cours des dix dernières années. Et le coût n’en serait pas élevé. Après tout, si on la compare à l’ère actuelle de l’Internet, où la vidéo est hégémonique, cette ère faite de pages web au graphisme limité est négligeable en termes d’espace.


La question est de savoir qui saura le faire, et mû par quoi.

Les entreprises ne le feront pas. Elles n’y trouveraient aucun intérêt commercial.

Le gouvernement pourrait être en mesure de créer des archives qui conservent toutes les pages — tout comme il construit des bibliothèques et des musées. Mais dépenserait-il de l’argent pour cela ? Il semble qu’il n’y ait pas d’autre raison que de préserver l’histoire. Or même si le gouvernement le faisait, cela ne changerait rien pour les utilisateurs ordinaires d’Internet, car ces archives nécessiteraient un traitement immense concernant les données personnelles et ne seraient accessibles qu’à quelques-uns pour éviter précisément tout abus dans l’utilisation de ces données.

D’ailleurs, même si un organe quelconque était disposé à le faire, il serait désormais trop tard. Après l’essor de l’Internet mobile, selon une estimation approximative, plus de 99 % du contenu de l’Internet chinois traditionnel devrait avoir disparu.

D’une certaine manière, les articles que j’ai rédigés ont contribué à la préservation de l’histoire de leurs sujets. Si je n’avais pas écrit sur eux, une grande partie de cette histoire serait déjà introuvable en ligne. Pourtant il ne s’agit pas d’informations originales, mais seulement d’informations de seconde main que j’ai consolidées.


Aujourd’hui, pour tous les événements majeurs survenus au cours de la première décennie de ce siècle et pour toutes les célébrités qui ont laissé une empreinte profonde, les informations que l’on peut encore trouver sur l’Internet chinois sont presque toujours des informations de seconde main éditées par des médias pure player — voire des informations maintes fois reprises et republiées, depuis longtemps méconnaissables par rapport à leur forme originale.

Les rapports originaux, les vidéos originales, les discours originaux, les observations originales des internautes, les commentaires originaux — tout cela a disparu.

Dans quelques années, toutes ces informations de seconde main auront également disparu. Tout se passe comme si ces événements n’avaient jamais eu lieu. Comme si ces personnes n’avaient jamais existé.

Il n’y a rien d’autre à faire que d’accepter la réalité.

À l’ère d’Internet, les vingt premières années du XXIe siècle seront vingt années sans archives historiques.


Si vous pouvez encore voir des informations anciennes de l’Internet chinois aujourd’hui, ce n’est que la dernière lueur du crépuscule.

Si vous êtes saisis par leur nature éphémère, vous pourrez soupirer comme Faust sur son lit de mort implorant l’instant : « Arrête-toi, tu es si beau ! »

Mais cette lueur sera bientôt engloutie par le temps et tombera dans le vide — en même temps que votre exclamation.

Il n’y a pas d’échappatoire.

Presque tout ce que vous voyez et créez maintenant — cet article, cette plateforme — finira par se noyer dans le vide.

Permalink
June 3, 2024 at 8:12:51 PM GMT+2

The 'Dead-Internet Theory' Is Wrong but Feels True - The Atlantichttps://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/

  • Conspiracy
  • Artificial intelligence

Maybe You Missed It, but the Internet ‘Died’ Five Years Ago

A conspiracy theory spreading online says the whole internet is now fake. It’s ridiculous, but possibly not that ridiculous?

By Kaitlyn Tiffany

If you search the phrase i hate texting on Twitter and scroll down, you will start to notice a pattern. An account with the handle @pixyIuvr and a glowing heart as a profile picture tweets, “i hate texting i just want to hold ur hand,” receiving 16,000 likes. An account with the handle @f41rygf and a pink orb as a profile picture tweets, “i hate texting just come live with me,” receiving nearly 33,000 likes. An account with the handle @itspureluv and a pink orb as a profile picture tweets, “i hate texting i just wanna kiss u,” receiving more than 48,000 likes.

There are slight changes to the verb choice and girlish username and color scheme, but the idea is the same each time: I’m a person with a crush in the age of smartphones, and isn’t that relatable? Yes, it sure is! But some people on Twitter have wondered whether these are really, truly, just people with crushes in the age of smartphones saying something relatable. They’ve pointed at them as possible evidence validating a wild idea called “dead-internet theory.”

Let me explain. Dead-internet theory suggests that the internet has been almost entirely taken over by artificial intelligence. Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls, and idly curious lovers of chitchat. One might, for example, point to @_capr1corn, a Twitter account with what looks like a blue orb with a pink spot in the middle as a profile picture. In the spring, the account tweeted “i hate texting come over and cuddle me,” and then “i hate texting i just wanna hug you,” and then “i hate texting just come live with me,” and then “i hate texting i just wanna kiss u,” which got 1,300 likes but didn’t perform as well as it did for @itspureluv. But unlike lots of other online conspiracy theories, this one has a morsel of truth to it. Person or bot: Does it really matter?

Read: The internet is mostly bots

Dead-internet theory. It’s terrifying, but I love it. I read about it on Agora Road’s Macintosh Cafe, an online forum with a pixelated-Margaritaville vibe and the self-awarded honor “Best Kept Secret of the Internet!” Right now, the background is a repeated image of palm trees, a hot-pink sunset, and some kind of liquor pouring into a rocks glass. The site is largely for discussing lo-fi hip-hop, which I don’t listen to, but it is also for discussing conspiracy theories, which I do.

In January, I stumbled across a new thread there titled “Dead Internet Theory: Most of the Internet is Fake,” shared by a user named IlluminatiPirate. Over the next few months, this would become the ur-text for those interested in the theory. The post is very long, and some of it is too confusing to bother with; the author claims to have pieced together the theory from ideas shared by anonymous users of 4chan’s paranormal section and another forum called Wizardchan, an online community premised on earning wisdom and magic through celibacy. (In an email, IlluminatiPirate, who is an operations supervisor for a logistics company in California, told me that he “truly believes” in the theory. I agreed not to identify him by name because he said he fears harassment.)

Peppered with casually offensive language, the post suggests that the internet died in 2016 or early 2017, and that now it is “empty and devoid of people,” as well as “entirely sterile.” Much of the “supposedly human-produced content” you see online was actually created using AI, IlluminatiPirate claims, and was propagated by bots, possibly aided by a group of “influencers” on the payroll of various corporations that are in cahoots with the government. The conspiring group’s intention is, of course, to control our thoughts and get us to purchase stuff.

As evidence, IlluminatiPirate offers, “I’ve seen the same threads, the same pics, and the same replies reposted over and over across the years.” He argues that all modern entertainment is generated and recommended by an algorithm; gestures at the existence of deepfakes, which suggest that anything at all may be an illusion; and links to a New York story from 2018 titled “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.” “I think it’s entirely obvious what I’m subtly suggesting here given this setup,” the post continues. “The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population.” So far, the original post has been viewed more than 73,000 times.

Read: Artificial intelligence is misreading human emotion

Obviously, the internet is not a government psyop, even though the Department of Defense had a role in its invention. But if it were, the most compelling evidence to me is the dead-internet theory’s observation that the same news items about unusual moon-related events seem to repeat year after year. I swear I’ve been saying this for years. What is a super flower blood moon? What is a pink supermoon? A quick search of headlines from just this month brings up: “There’s Something Special About This Weekend’s Moon,” “Don’t Miss: Rare, Seasonal ‘Blue Moon’ Rises Tonight,” and “Why This Weekend’s Blue Moon Is Extra Rare.” I just don’t understand why everyone is so invested in making me look at the moon all the time? Leave me alone about the moon!

Dead-internet theory is a niche idea because it’s patently ridiculous, but it has been spreading. Caroline Busta, the Berlin-based founder of the media platform New Models, recently referenced it in her contribution to an online group show organized by the KW Institute for Contemporary Art. “Of course a lot of that post is paranoid fantasy,” she told me. But the “overarching idea” seems right to her. The theory has become fodder for dramatic YouTube explainers, including one that summarizes the original post in Spanish and has been viewed nearly 260,000 times. Speculation about the theory’s validity has started appearing in the widely read Hacker News forum and among fans of the massively popular YouTube channel Linus Tech Tips. In a Reddit forum about the paranormal, the theory is discussed as a possible explanation for why threads about UFOs seem to be “hijacked” by bots so often.

The theory’s spread hasn’t been entirely organic. IlluminatiPirate has posted a link to his manifesto in several Reddit forums that discuss conspiracy theories, including the Joe Rogan subreddit, which has 709,000 subscribers. In the r/JoeRogan comments, users argue sarcastically—or sincerely?—about who among them is a bot. “I’m absolutely the type of loser who would get swindled into living among bots and never realize it,” a member of the 4chan-adjacent Something Awful forum commented when the theory was shared there in February. “Seems like something a bot would post,” someone replied. Even the playful arguments about how everything is the same are the same.

Read: Why is Joe Rogan so popular?

That particular conversation continued down the bleakest path imaginable, to the point of this comment: “If I was real I’m pretty sure I’d be out there living each day to the fullest and experiencing everything I possibly could with every given moment of the relatively infinitesimal amount of time I’ll exist for instead of posting on the internet about nonsense.”

Anyway … dead-internet theory is pretty far out-there. But unlike the internet’s many other conspiracy theorists, who are boring or really gullible or motivated by odd politics, the dead-internet people kind of have a point. In the New York story that IlluminatiPirate invokes, the writer Max Read plays with paranoia. “Everything that once seemed definitively and unquestionably real now seems slightly fake,” he writes. But he makes a solid argument: He notes that a majority of web traffic probably comes from bots, and that YouTube, for a time, had such high bot traffic that some employees feared “the Inversion”—the point when its systems would start to see bots as authentic and humans as inauthentic. He also points out that even engagement metrics on sites as big and powerful as Facebook have been grossly inflated or easily gamed, and that human presence can be mimicked with click farms or cheap bots.

Some of this may be improving now, for better or for worse. Social-media companies have gotten a lot better at preventing the purchase of fake views and fake likes, while some bot farmers have, in response, become all the more sophisticated. Major platforms still play whack-a-mole with inauthentic activity, so the average internet user has no way of knowing how much of what they see is “real.”

But more than that, the theory feels true: Most weeks, Twitter is taken over by an argument about how best to practice personal hygiene, or which cities have the worst food and air quality, which somehow devolves into allegations of classism and accusations of murder, which for whatever reason is actually not as offensive as classism anymore. A celebrity is sorry. A music video has broken the internet. A meme has gotten popular and then boring. “Bennifer Might Be Back On, and No One’s More Excited Than Twitter.” At this point, you could even say that the point of the theory is so obvious, it’s cliché—people talk about longing for the days of weird web design and personal sites and listservs all the time. Even Facebook employees say they miss the “old” internet. The big platforms do encourage their users to make the same conversations and arcs of feeling and cycles of outrage happen over and over, so much so that people may find themselves acting like bots, responding on impulse in predictable ways to things that were created, in all likelihood, to elicit that very response.

Thankfully, if all of this starts to bother you, you don’t have to rely on a wacky conspiracy theory for mental comfort. You can just look for evidence of life: The best proof I have that the internet isn’t dead is that I wandered onto some weird website and found an absurd rant about how the internet is so, so dead.

Permalink
May 31, 2024 at 10:31:44 AM GMT+2

Disrupting deceptive uses of AI by covert influence operations | OpenAIhttps://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/

  • Artificial intelligence
  • Psychology
  • PsyOps
  • Politics
  • War

Disrupting deceptive uses of AI by covert influence operations

We’ve terminated accounts linked to covert influence operations; no significant audience increase due to our services.

OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content. That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.

In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services.

This blog describes the threat actors we disrupted, attacker trends we identified, and important defensive trends - including how designing AI models with safety in mind in many cases prevented the threat actors from generating the content they desired, and how AI tools have made our own investigations more efficient. Alongside this blog, we are publishing a trend analysis that describes the behavior of these malicious actors in detail.

Read the full report

Threat actors work across the internet. So do we. By collaborating with industry, civil society, and government we tackle the creation, distribution, and impact of IO content. Our investigations and disruptions were made possible in part because there’s been so much detailed threat reporting over the years by distribution platforms and the open-source community. OpenAI is publishing these findings, as other tech companies do, to promote information sharing and best practices amongst the broader community of stakeholders.

Disruption of covert influence operations

Over the last three months, our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts.

Specifically, we disrupted:

  • A previously unreported operation from Russia, which we dubbed Bad Grammar, operating mainly on Telegram and targeting Ukraine, Moldova, the Baltic States and the United States. The people behind Bad Grammar used our models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on Telegram.
  • An operation originating in Russia known as Doppelganger. People acting on behalf of Doppelganger used our models to generate comments in English, French, German, Italian and Polish that were posted on X and 9GAG; translate and edit articles in English and French that were posted on websites linked to this operation; generate headlines; and convert news articles into Facebook posts.
  • A Chinese network known as Spamouflage, which used our models to research public social media activity, generate texts in languages including Chinese, English, Japanese and Korean that were then posted across platforms including X, Medium and Blogspot, and debug code for managing databases and websites, including a previously unreported domain, revealscum[.]com.
  • An Iranian operation known as the International Union of Virtual Media (IUVM), which used our models to generate and translate long-form articles, headlines and website tags that were then published on a website linked to this Iranian threat actor, iuvmpress[.]co.
  • Activity by a commercial company in Israel called STOIC (technically, we disrupted the activity, not the company). We nicknamed this operation Zero Zeno, for the founder of the Stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation.

The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.

So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services. Using Brookings’ Breakout Scale, which assesses the impact of covert IO on a scale from 1 (lowest) to 6 (highest), none of the five operations included in our case studies scored higher than a 2 (activity on multiple platforms, but no breakout into authentic communities).

Attacker trends

Based on the investigations into influence operations detailed in our report, and the work of the open-source community, we have identified the following trends in how covert influence operations have recently used artificial intelligence models like ours.

  • Content generation: All these threat actors used our services to generate text (and occasionally images) in greater volumes, and with fewer language errors than would have been possible for the human operators alone.
  • Mixing old and new: All of these operations used AI to some degree, but none used it exclusively. Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts or memes copied from across the internet.
  • Faking engagement: Some of the networks we disrupted used our services to help create the appearance of engagement across social media - for example, by generating replies to their own posts. This is distinct from attracting authentic engagement, which none of the networks we describe here managed to do to a meaningful degree.
  • Productivity gains: Many of the threat actors that we identified and disrupted used our services in an attempt to enhance productivity, such as summarizing social media posts or debugging code.

Defensive trends

While much of the public debate so far has focused on the potential or actual use of AI by attackers, it is important to remember the advantages that AI offers to defenders. Our investigations also benefit from industry sharing and open-source research.

  • Defensive design: We impose friction on threat actors through our safety systems, which reflect our approach to responsibly deploying AI. For example, we repeatedly observed cases where our models refused to generate the text or images that the actors asked for.
  • AI-enhanced investigation: Similar to our approach to using GPT-4 for content moderation and cyber defense, we have built our own AI-powered tools to make our detection and analysis more effective. The investigations described in the accompanying report took days, rather than weeks or months, thanks to our tooling. As our models improve, we’ll continue leveraging their capabilities to improve our investigations too.
  • Distribution matters: Like traditional forms of content, AI-generated material must be distributed if it is to reach an audience. The IO posted across a wide range of different platforms, including X, Telegram, Facebook, Medium, Blogspot, and smaller forums, but none managed to engage a substantial audience.
  • Importance of industry sharing: To increase the impact of our disruptions on these actors, we have shared detailed threat indicators with industry peers. Our own investigations benefited from years of open-source analysis conducted by the wider research community.
  • The human element: AI can change the toolkit that human operators use, but it does not change the operators themselves. Our investigations showed that these actors were as prone to human error as previous generations have been - for example, publishing refusal messages from our models on social media and their websites. While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.

We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use. Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.

Permalink
May 31, 2024 at 10:28:35 AM GMT+2

Cyber Security: A Pre-War Reality Check - Bert Hubert's writingshttps://berthub.eu/articles/posts/cyber-security-pre-war-reality-check/

  • Cybersecurity
  • War
  • Spying

Cyber Security: A Pre-War Reality Check

Posted on May 14 2024

This is a lightly edited transcript of my presentation today at the ACCSS/NCSC/Surf seminar ‘Cyber Security and Society’. I want to thank the organizers for inviting me to their conference & giving me a great opportunity to talk about something I worry about a lot. Here are the original [slides with notes](https://berthub.eu/prewar/ncsc accss surf keynote bert hubert-notes.pdf), which may be useful to view together with the text below. In the notes there are also additional URLs that back up the claims I make in what follows.

So, well, thank you so much for showing up.

And I’m terribly sorry that it’s not going to be a happy presentation.

This is also sort of an academic environment, and this is not going to be an academic talk. This is not going to be subtle. But I’m trying to alter, to modulate your opinion on the world of cyber security a little bit.

Cyber security and society, a pre-war reality check

We’re already worried enough about cyber security. Is anyone here not worried about cyber security? And you could go home now, otherwise. Okay, that’s good. So you can all stay.

First, some important words from Donald T:

“I know it sounds devastating, but you have to get used to the fact that a new era has begun. The pre-war era.”

And this comes from Donald Tusk, the Polish Prime Minister from 2007 to 2014.

And at the time, he, and the Baltic states, said that Russia was a real threat. And everyone’s like, yeah, yeah, it’ll last. And we’ll just do so much business with them that we will not get bombed. And that did not work.

And now Donald Tusk is again the Prime Minister of Poland. And he’s again telling us that, look, we are in a bad era and we are underestimating this.

We are used to thinking about cyber security in terms of can we keep our secrets safe? Are we safe against hackers or ransomware or other stuff? But there is also a war dimension to this. And this is what I want to talk about here.

So briefly, Nicole already mentioned it, I’ve done a lot of different things, and this has given me varied insights into security. I’ve worked with Fox-IT for a long while. PowerDNS may not be a well-known company. But it delivered services to KPN, Ziggo, British Telecom, Deutsche Telekom. And they all run their internet through the PowerDNS software.

And through that, I got a lot of exposure to how do you keep a national telecommunications company secure.

And can the national telecommunications companies keep themselves secure?

And that was useful.

I spent time at intelligence agencies, I spent time regulating intelligence agencies. And that may be also useful to talk about a little bit. Through that regulatory body, for nearly two years, I got a very good insight into every cyber operation that the Dutch government did. Or every cyber operation that was done on the Dutch government.

I cannot tell you anything about that stuff. But it was really good calibration. You know what kind of stuff is going on. Uniquely to the Netherlands is that this board, which regulates the intelligence agencies, actually has two judges, the little guy on the left and on the right:

And in the middle, there was someone with different experience. That’s what the law says. They couldn’t bring themselves to say someone with technical experience. It was a bridge too far. But at least they said we can have someone with different experience.

And this is unique in Europe, that there is an intelligence agency that is being regulated with an actual technical person in there. And we’ll come to why that is important later.

So everyone is of course saying, look, the cyber security world is just terrible and doomed. And someone is going to shut off our electricity and kill our internet and whatever. Or disable a hospital. And so far, not a lot of this stuff has actually been happening.

In 2013, Brenno de Winter wrote a book called The Digital Storm Surge, in which he said, look, we haven’t had any real cyber incidents that really speak to the imagination. So we’ve had, of course, the printer is down. The printer is always down.

We don’t actually rely on computers that much, because they break all the time. So we do not have a lot of life and death situations involving computers.

Brenno, in 2013, predicted that we would only take cyber security seriously once we had the kind of incident where you flip one bit in lots of self-driving cars, the kind that can avoid pedestrians. And they all start aiming at pedestrians.

And you get like thousands of people dead because all kinds of cars decide to drive over people. And he mentioned there that before the sinking of the Titanic, there was no regulation for how to build ships.

So you could just build a ship and if it looked good, people assumed that it would also be good. And only after the Titanic, they started saying, oh, we need to have steel that’s this thick, and you can have the steam engine, not here, it must be there. So he said the Titanic was the regulatory event for ship building.

And in 2013, Brenno said we have not had anything serious yet, and we will only get serious legislation once the Titanic sinks. And it didn’t sink.

However, the EU got vision.

This is the most optimistic slide in the whole presentation.

For some reason, the EU decided that this couldn’t go on. And so they launched like six or seven new laws to improve the state of our cybersecurity.

And this is like the sinking of the Titanic. So we’re not properly realizing how much work this is going to be. Because the thing is, they’ve written all these laws already, and only one of them is sort of semi-active right now, and the rest is still coming.

So this is our post-Titanic environment, and this might improve the situation of cybersecurity somewhat. Because it’s like terrible.

So, some real cyber incidents, real stuff that actually broke.

This is the former president of Iran, Mahmoud Ahmadinejad. And here he is admiring his uranium ultracentrifuge facilities.

And this was the famous Stuxnet operation, where apparently the West was able to disable the ultracentrifuges used to create highly enriched uranium.

And not only did they disable it, like the factory is down now, it tore itself to shreds physically.

So this is one of the few sort of military cyber attacks that we know about.

This is like one third of them. The other one that happened was just before Russia invaded Ukraine, they managed to disable the Viasat modems. And this is an interesting case. These modems are used for satellite communications. And they were able to attack these modems so that they physically disabled themselves.

It was not like the denial of service attack on the network. No, they managed to wipe the firmware of all these modems in such a way that it could not be replaced.

The reason we know about this stuff so well is it turns out there were lots of windmills that also had these modems.

In Germany, apparently 4,000 of these modems stopped working. And there were 4,000 wind turbines that could no longer be operated. So this was a military cyber attack that happened as Russia was invading Ukraine. And it was of great benefit to them because it disabled a lot of military communications in Ukraine.

But this is the kind of thing that can happen, only that it’s quite rare.

Earlier, Russia disabled a lot of the electricity networks in Ukraine using a similar kind of attack. And it turned out that the Ukrainians were so good (and their systems so simple and robust) that they had a disruption of like only six hours, which is really impressive.

And I want you to imagine already what would happen if we had such an attack on a Dutch power company. They’re very nimble [irony, they are not]. I mean, try asking a question about your invoice.

So I’m going to talk about rough times. And I started my presentation with Donald Tusk telling us we are in a pre-war era, and I truly believe that. But it’s a difficult thing to believe. I also do not want to believe it. I also want to be like, no, this stuff is over there in Ukraine. It’s not here. But even if you think there’s only a 10% chance, then it’s quite good to already think about this kind of stuff.

Even if you are such a diehard pacifist that you are convinced that it’s never going to happen, you can just imagine that I’m talking about robustness in the face of climate change.

Because also then you want to have your stuff that works.

So there are three things I identified, that you really care about in a war, in a chaotic situation where there’s no power.

You want infrastructure that is robust, that does not by itself fall over.

If we look at modern communications tools, like, for example, Microsoft 365, that falls over like one or two days a year without being attacked. It just by itself already falls over. That’s not a robust infrastructure in that sense.

Limited and known dependencies.

Does your stuff need computers working 5,000 kilometers away? Does your stuff need people working on your product 5,000 kilometers away that you might no longer be able to reach?

So, for example, if you have a telecommunications company and it’s full of telecommunications equipment and it’s being maintained from 5,000 kilometers away, if something goes wrong, you better hope that the connection to the people 5,000 kilometers away is still working, because otherwise they cannot help you.

The third one, when things go wrong, you must be able to improvise and fix things. Truly own and understand technology.

For example, you might not have the exact right cable for stuff, and have to put in an unofficial one.

You might have to fix the firmware yourself. You must really know what your infrastructure looks like.

Let’s take a look at these three aspects of modern communications methods. And we’re going to start with one of my very favorite machines, and I hope you will love this machine as much as I do.

This is the sound-powered phone. So a sound-powered phone is literally what it is. It’s a piece of metal. It probably has, like, five components in there. And out comes a wire. Even the wire is actually in some kind of steel tube. And this thing allows you to make phone calls without electricity.

So if your ship is on fire, and you need to call the deck and say, “Hey, the ship is on fire,” this thing will actually work, unlike your voice-over-IP setup after the first strike on your ship, when there’s been a power dip and all the servers are rebooting. This thing will always work.

If you try to break it, you could probably strike it with a hammer and it will still work. It’s very difficult to disable this machine. Attempts have been made to replace it, because it’s so ridiculously simple that people think we must make progress, and we must have digital phones. And, well, this machine is still going strong, because in war-fighting conditions, this is the kind of machine that you need. This one can make calls to ten different stations, by the way. It’s even quite advanced. And it can make phone calls over cables that are 50 kilometers long. So it’s a very impressive machine.

And now we’re going to head to some less impressive things.

This was the Dutch Emergency Communication Network (Mini-noodnet). There is not much known about this Emergency Communication Network, although Paul might know a few things. [Paul confirms that he does] Because a lot of this stuff is sort of semi-classified, and they’re not really telling anyone about it.

But this was a copper wire network through 20 bunkers in the Netherlands, completely independent of the regular telephone network. It was a very simple telephone network, but it was supposed to survive war and disasters. And it had these 20 bunkers, it had guys like this guy running it, and it was fully redundant. You can see that because the top rack has B on it, and the other one has A on it. It was really nice stuff.

And of course, we shut it down.

Because it’s old stuff, and we need to have modern stuff. And it’s very sad. Because it has now been replaced by this:

They tried to sort of renew this emergency telephone network, but no one could do it anymore. And then they said, “Look, we’re just going to ask KPN.” And we have DSL modems, and we use the KPN VPN service. And this (the Noodcommunicatievoorziening) is now supposed to survive major incidents.

And of course, it will not.

Because every call that you make through this emergency network passes through all of KPN, like 20 different routers. And if something breaks, then this is likely the first thing that will break.

During a power outage a few years ago, there was an attempt to use the system, and it turned out that didn’t work. Because the power was out. Yeah, it’s embarrassing, but that’s what happened.

So we’ve made the trip from this wonderful thing to this pretty impressive thing to this thing. And then we have Microsoft Teams. Which is a very…

I know there are Microsoft people in the room, and I love them. When it works, it’s great. I mean, it exhausts the battery of my laptop in 20 minutes, but it’s very impressive.

And you have to realize that it works like almost always. Maybe not always audio and stuff, but quite often it will work.

So we’ve made this trip from here (sound powered phone) to here (Teams). And that’s not good. And I want to show you, (big WhatsApp logo). This is the actual Dutch government emergency network.

Which is interesting in itself, because it’s actually sort of really good at these short text-based messages. So if you want to have a modern emergency network, it could look a lot like WhatsApp. In terms of concept. Except that we should not have chosen the actual WhatsApp to do this stuff.

Because if the cable to the US is down, I can guarantee you WhatsApp is also down. So this is an emergency network that is itself not super redundant. But it’s very popular in times of disaster.

We know this because after a disaster, people do an investigation to figure out how did the communications go. And you have all these screenshots of these WhatsApp groups. So I’m not knocking it because it actually works.

Unlike this thing (the modern VoIP NCV). It’s not that expensive though. They just renewed it. It’s like six million euros a year. It’s not bad.

So how bad is losing communications? The Dutch road management people (Rijkswaterstaat) have a very good Mastodon account and also a Twitter account, I assume.

Where they will almost every day tell you, look, there’s a bridge, and it won’t close. And then they say, and I find this fascinating, they say, yeah, we called the engineer. So it says here, de monteur. We called de monteur.

It is like they have one of these guys who sits there with a van, waiting for a call.

I assume they have multiple ones.

But still, you could disrupt all of the Netherlands if you just put the bridges open. So if you have any kind of war kind of situation, you’re trying to mobilize, you’re trying to get the tanks from A to B, apparently you can just shut down the bridge.

And it happens a lot. And then you need to reach the engineer. But you have to use a phone to do that. Because I assume that this engineer sits there waiting until the phone rings. And let’s say the phone does not ring, because the phone network is down, then your bridge stays open.

But also you have to find the phone number of the engineer, of course, and that might well be hiding out in an Excel sheet in your cloud environment. So that means that the effective chain to get this bridge fixed, the bridge fixed in 2024, likely includes a completely working cloud environment and a phone environment, and then hoping that the guy with the van manages to get there, and that he does not have an electric van, which also needs a cloud to drive.

The picture on the left is, of course, well known. It’s used to illustrate that all the world of digital infrastructure often depends on just one person, which is bad enough.

But actually my thesis is this entire stack is way too high.

So if you want to run a modern society, we need all the power to be on everywhere. We need the cables to the US to be working. We need the cloud to be working. We need the phone to be working.

That’s a far cry from this lovely machine (the sound powered phone), which always works.

So I’m a bit worried that if we have panic, if we have flooding or an invasion or an attack or whatever.

I think that our infrastructure will not hold up.

I also want to mention this one. This is the Botlek Bridge. This is a modern bridge. And this bridge has failed 250 times. And in its initial years, it would fail like 75 times a year.

And when this fails, then the consequences are huge because it’s the one way that truck traffic can get from A to B. And it has failed in total hundreds of times. And for years, no one could figure out why.

So it would just block. It would no longer go up and down. And a whole task force, they took one of the engineers and they put them in a van over there. And they made them live there. They had live-in engineers here to just work on this thing if it broke. And through that work, they managed to sort of halve the downtime of this bridge.

It has its own website, this bridge, to keep track of the outages. And it has its own SMS service where it will send you text messages if it is broken (“Sms ‘BBRUG AAN’ naar 3669”, not kidding).

Because it was broken that much. Then after many years, they found out how that happened. And the story was, there is a system in there that manages the state, the sensors, and that server had a rotten ethernet cable or port.

And during that two-year period, everyone thought, it cannot be the computer. No one came and said, shall we just replace all the cables and ethernet ports for once and see what happens? We lacked the expertise.

And this is the third component I mentioned in the things that you really care about. Do you have sufficient ownership and knowledge of your own infrastructure that you can repair it?

And here, that apparently took more than three years. Maybe they just solved it by accident because someone needed that cable for their other computer.

I don’t know. But it’s super embarrassing. This is a sign that you do not have control over your own infrastructure.

That you have a major bridge and for three years long, you do not manage to find out what is wrong with it. And I worry about that.

Now it’s time for a little bit of good news. This is another big infrastructure project in the Netherlands. It’s the Maeslantkering.

And it protects us against high water. It’s a marvelous thing. It’s very near my house. Sometimes I just go there to look at it because I appreciate it so much. This machine is, again, this is the sound-powered phone infrastructure.

So you see here these two red engines that are used to push the thing closed. That’s literally all they do. They only push it closed. And when I visited, they said that actually, even if these engines didn’t work, they had another way of pushing it closed. Because you actually need to close it when the water is really high.

And it doesn’t even need to close completely. It’s a completely passive thing. It has no sensors. So this shows that it could also be done: you can make simple infrastructure, and this is actually one of the pieces that works. They tried to mess it up by putting in some kind of weird, newly-Dutch-invented computer, which turned out to be bullshit. But that only decides whether it should close or not.

It’s a very lovely machine. So I would love to see more of this. I’d love to see more of this and less of this (Botlek bridge). Even though the pictures are marvelous.

So where are we actually with the cybersecurity? How are things going? Could we stand up to the Russian hackers? Not really.

Four years ago, we had this big discussion about 5G and if we should use Chinese infrastructure for our 5G telephony.

And everyone talking about that, politicians, thought that was a big choice that had to be made then.

And the reality was, when this decision was being taken, the Chinese were literally running all our telecommunications equipment already. But that is such an unhappy situation that people were like, “La, la, la, la, la.”

They were pretending that up to then, we were in control of our telecommunications infrastructure and we were now deciding to maybe use Chinese equipment. And that maybe that Chinese equipment could backdoor us.

But the reality was (and still partially is), they were actually running our infrastructure. If they wanted to harm us, the only thing they had to do was to stop showing up for work.

And this is still a very inconvenient truth. So I wrote this like four years ago, and it got read at the European Commission. Everyone read it. And people asked me, even very senior telco people, they said, “No, it’s not true.” And so I asked them, “So where are your maintenance people then?” So you can go to, for example, kpn.com and their job vacancies. And you will see that they never list a job vacancy that has anything to do with 5G. Because they are not running it.

And remember that in a previous century, we had 20 bunkers with our own independent telecommunications infrastructure, because we realized that telecommunications was really important. And now we have said, “No, it’s actually fine, it’s being run straight from Beijing.” That’s a bit of a change.

So things are not good. People want to fix this, and they are making moves to fix the situation, but we aren’t there yet.

Google, Microsoft, AWS

So these are our new overlords. This is the cloud. This is the big cloud. According to Dutch government and semi-government agencies, these are apparently the only people still able to do IT.

We had a recent situation in the Netherlands where the maintainers of .nl, and I know you’re here, decided that no one in Europe could run the IT infrastructure the way they wanted it anymore, and that they had to move it very far away.

At this point, I want to clarify: some very fine people are working here (in the cloud). I’m not saying that these are all terrible people. I AM saying they are many thousands of kilometers away, and may not be there for us in a very bad situation.

But apparently this is the future of all our IT. And I’ve had many talks in the past few weeks on this subject, and everyone in industry is convinced that you can no longer do anything without these three companies.

And that leads to this depressing world map, where we are in the middle, and we sort of get our clouds from the left, and the people maintaining that come from the right.

And we make cheese, I think. Really good cheese. And art. And handbags. Actually, one of the biggest Dutch companies, or European companies, is a handbag company. Very excellent. Louis Vuitton. It’s apparently a Dutch company. I didn’t know that either, but it’s for tax reasons. We’re very good at tax evasion here, by the way.

And interestingly, it’s good to look at this exciting arrow here, because we see a lot of telecommunications companies are now moving to Ericsson and Nokia equipment, which is great.

But the maintenance on your Ericsson equipment is not done by a guy called Sven.

The maintenance is actually coming from the fine people from far away. These are actually maintaining our infrastructure.

The problem is they’re very far away. The other problem is that both China, where a lot of the infrastructure actually still comes from, and India, are very closely aligned to Russia.

So we have effectively said, we’ve outsourced all our telecommunications stuff, so this is where the servers are being operated from, and these are the people that are actually maintaining the servers. And all of these places are geopolitically worrying right now, because we don’t know who wins the elections. It could be a weird guy.

And both India and China are like, “Oh, we love Russia.” How much fun would it be if our telcos were being attacked by Russian hackers, and we hope that Infosys is going to come to our rescue?

They might be busy. They could well have other important things to do.

In any case, we are not going to save our own telecommunications companies, because we are not running them ourselves.

Oh, again, to cheer you up a little bit. This is a map of Europe, and this is within this small area. This is where all the high-tech chip-making equipment in the whole world gets made. It is not that we are completely helpless. I just said we were very good with cheese. Actually, we’re also very good with high-end optics and making chip making equipment and stuff. So it’s not that we’re completely helpless. It’s just that we’ve chosen to focus on handbags and extreme UV optics, and not running our own vital infrastructure.

So what’s the situation? Joost Schellevis, a Dutch journalist, recently decided on a weekend to just scan the Dutch Internet to see if he could find anything broken with it. And within a weekend of work, he found 10,000 places that were just open for hackers. This turned into a news item on the Dutch national news, and people said, “Yeah, yeah, yeah, that’s how it is.” But if a random journalist – and Joost is very good – can just sit there in a weekend and find 10,000 places he can hack, things are not good.

I know the NCSC and other places are working on it and improving it, and they can now scan for such weaknesses. But until quite recently, journalists could scan for these things, and the Dutch government could not, because of legal reasons.

So it’s not good. The other thing I want to focus on, and that’s really worrying: if we want to improve our security, it would be nice if we could tell companies, “You just need to install the right equipment. Just get good equipment, and you will be secure.” And that’s not the world we’re living in right now.

And all these places are not secure right now. So if you tell people, “Get a good firewall,” I currently have no advice for you, because all the “good ones” are actually not good. Most big security vendors right now are delivering terribly insecure products, with hundreds of issues per year.

You could not really recommend this based on just the statistics. Yet we are still doing it, because that’s the stuff that we used to buy. Again, this is a peacetime choice. In peacetime, you say, “Hey, I buy this stuff because it’s certified, because we bought it last year, and it was fine then, too.” Well, actually, it was not fine then either, but we just keep on buying shitty stuff.

And we get away with this for now. But Ukraine does not get away with this.

And just for your calibration: we are no longer really impressed by it, but if you look at the weekly or monthly security updates that come to us from the big security vendors, they just go out: “Yeah, we have 441 new security problems for you this month.” And there’s Oracle, and then there’s Microsoft: “Yeah, we have 150.” And this repeats sort of every month. And I’m not going to pick on Microsoft or Oracle specifically, but we’ve sort of assumed that it’s okay if you just say, “Yeah, we have 1,000 new security vulnerabilities to deal with every month from our different vendors.” We cannot have this and assume that things will be good. Yet that is what we do.

And I love this one. So you might think that, look, the hackers have become really good, really advanced. That’s why we keep finding all these security issues. And it turns out that’s not the case.

The security issues that are being found are still extremely basic. So this is, for example, help desk software that people use so that the help desk can take over your computer and stuff. And it turns out that if you connected to this appliance and you added one additional slash at the end of the URL, it would welcome you as a new administrator, allowing you to reset the password.
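The pattern behind that bug, where the access check matches the literal request path while the handler lookup quietly normalizes trailing slashes, can be sketched in a few lines. This is a hypothetical illustration of the bug class, not the vendor’s actual code:

```python
# Hypothetical sketch of a trailing-slash authentication bypass:
# the access check compares the raw path, but the handler lookup
# strips trailing slashes, so "/setup/" reaches the setup wizard
# without ever matching the check for "/setup".
REQUIRES_LOGIN = {"/setup"}

def handle(path: str, logged_in: bool = False) -> str:
    if path in REQUIRES_LOGIN and not logged_in:
        return "401 Please log in"
    normalized = path.rstrip("/") or "/"  # normalization happens too late
    if normalized == "/setup":
        return "200 Welcome, new administrator"
    return "404 Not Found"

# handle("/setup")  -> "401 Please log in"
# handle("/setup/") -> "200 Welcome, new administrator"
```

The fix is as unglamorous as the bug: normalize the path once, before any security decision is made.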

And this is not even an outlier; this is par for the course. For example, here we have GitLab, which people use to securely store their source code because they don’t want to put it on the public Internet, so they put it on their own Internet. And it has a “forgot your password” link. And it turns out that if you provide it with two email addresses and you click on “forgot your password,” it will send a reset link to the second email address.

But it checked only the first email address to see if you were really the administrator. And this was in GitLab for like six months.
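The shape of that flaw, validating only the first address in a submitted list but mailing the reset link to all of them, can be sketched like this. Again, a hypothetical illustration of the bug class, not GitLab’s actual code:

```python
# Hypothetical sketch: the reset form accepts a list of addresses,
# validation looks only at the first one, but the reset token is
# then mailed to every address in the list.
KNOWN_USERS = {"admin@example.org"}
outbox = []  # (recipient, message) pairs "sent" by this sketch

def request_password_reset(emails: list) -> bool:
    if emails[0] not in KNOWN_USERS:  # only emails[0] is validated
        return False
    for address in emails:            # ...but every address gets the link
        outbox.append((address, "reset-token-for-admin"))
    return True
```

The fix is equally simple: validate every address you are about to mail, or better, accept only one.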

Many of the recent security incidents are of this level. There are, of course, very advanced attacks as well, but quite a lot of this stuff is childishly simple things.

Ivanti, if you work for the Dutch government, you will very frequently see this screen when you log in. The U.S. government has disallowed the use of this software. They have said, “You can no longer use this software.” And the Dutch government says, “Well, we put another firewall in front of it, and it’s good now.”

You can see that above in the circle. This is the elite hacking technique: dot, dot, slash. And it still works, in 2024.
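For completeness, this is all dot-dot-slash is: a server joins user input onto a base directory without checking that the result stays inside it. A minimal sketch of the flaw and its fix, with hypothetical paths, not Ivanti’s code:

```python
# Hypothetical sketch of "../" path traversal and its repair.
import os.path

BASE = "/var/www/files"

def resolve_naive(user_path: str) -> str:
    # Vulnerable: "../" components can climb out of BASE.
    return os.path.normpath(os.path.join(BASE, user_path))

def resolve_safe(user_path: str) -> str:
    # Normalize first, then verify the result is still under BASE.
    full = os.path.normpath(os.path.join(BASE, user_path))
    if full != BASE and not full.startswith(BASE + os.sep):
        raise ValueError("path escapes base directory")
    return full

# resolve_naive("../../../etc/passwd") -> "/etc/passwd"
```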

So the situation is not good.

So let’s move to the cloud and fix all these things.

Again, I want to apologize to the Microsoft people because I should have diversified my hate a little bit.

Microsoft said, “Yeah, it seems that we’ve been sort of compromised, but we’re on top of it.”

And then after a while, they said, “Well, yeah, actually…”

The one fun thing: if you really want to know how it is with the security of a company, you should go to their stock exchange information, because there they have to admit all their problems. And if they do not admit their problems there, the board of directors goes to jail, which makes them remarkably honest. It’s very good. If you read this from most vendors, you just cry, because it’s like, “Yeah, we know. Basically everything we do is broken,” it says there. Here, Microsoft says, “Yeah, turns out when we looked again, we were sort of still hacked.”

Oh, okay.

And then came the Cyber Safety Review Board in the US, which has awesome powers to investigate cyber incidents, and you really must read this report.

Microsoft is actually a member of this board, which is what makes it interesting that they were still doing a very good investigation. And they said, “Yeah, it’s actually sort of… We’re full of Chinese hackers, and we’re working on it. Work in progress.”

So if you just say, “Let’s just move to the cloud,” your life is also not suddenly secure.

That’s what I’m saying.

And meanwhile, we have decided in Europe to move everything to these clouds. The Dutch government has just managed to come up with a statement that they said that there are a few things that they will not move to the cloud. And these are the classified things and the basic government registrations.

So that’s the kind of thing that if you add something to the basic registration, you can create people.

And they said, “That’s not going to the cloud.” But basically, everything else is on the table. And we really have no choice there anymore, because what happens if you used to run your own applications, your own IT infrastructure, and then you say, “We’re going to move everything to the cloud”? What happens to the people that were running your IT infrastructure? They leave. You often don’t even have to fire them, because their work gets so boring that they leave by themselves.

And that means that you end up with organizations that have started moving all the things to the cloud.

And now, if you don’t pay very close attention, you will end up with no one left that really knows what’s going on. And that means that you have to actively say:

“Okay, we know that we’re going to outsource almost everything, but we’re going to retain this limited number of staff, and we’re going to treat them really well, so that we at least, in theory, still know what is going on.”

This is not happening. So the good technical people are leaving everywhere. They actually often start working for one of these clouds, at which points they’re out of reach, because you never hear from Amazon how they do things.

This is something we are messing up, and it is making us incredibly vulnerable, because we now have these important places with no one left who really knows what the computer is doing.

Belle, in her opening, mentioned, “How could you be a manager of a subject that you don’t know anything about?” And I think it’s very good that you mentioned that, because in many other places, this is apparently not a problem.

So you could be the director of whatever cloud strategy, and you’re like, “Hey, I studied law.” And of course, it’s good that you study law, but it’s good also to realize it might be nice if you have a few people on the board that actually know what a computer does.

And this is one of the main reasons why this is happening. Our decision-making in Europe, but especially in The Netherlands, is incredibly non-technical.

So you can have a whole board full of people that studied history and art and French, and they sit there making our cloud decisions. And they simply don’t know.

And if there had been more nerds in that room, some of these things would not have happened. And that is also a call to maybe us nerds, although you don’t really look that nerdy, but do join those meetings.

Because quite often, we as technical people, we’re like, “Ah, these meetings are an interruption of my work, and I’m not joining that meeting.” And while you were not there, the company decided to outsource everything to India.

And again, there’s nothing against India, but it’s very far away.

This stuff cannot go on like this. This is a trend, a trend where we know ever less about what we are doing, where we are ever more reliant on people very far away.

The trend has already gone too far, but it’s showing no sign of stopping. It is only getting worse.

And this is my worst nightmare.

Ukraine was already at war for two years and battle-hardened. So anything that was simple to break had already been broken by the Russians. Then after two years, the Russians managed to break Kyivstar, one of the biggest telecommunications companies of Ukraine. This was a very destructive attack. But the Ukrainians (in and outside Kyivstar) are good enough that in two days they were back up and running, because these people were prepared for chaos.

They knew how to restore their systems from scratch. If we get an attack like this on VodafoneZiggo or on Odido, and they don’t get external help, they will be down for half a year, because they don’t know anything about their own systems.

And I’m super worried about that, because we are sitting ducks. And we’re fine with that.

So just a reminder, when times are bad, you are much more on your own, and no one has time for you.

If something goes wrong, remember the corona crisis, when we couldn’t make this personal protective equipment, these face masks.

We couldn’t make them. And we had to beg people in China if they please had time to make a few for us. Can you imagine in a war situation that we have to beg India to please, or in a different situation where we have to beg the Donald Trump administration, if they would please, please fix our cloud issues here?

It’s a worrying thought, being that dependent. And we’re not good on any of these fronts right now.

So we’re rounding off. Is there a way back? Can we fix it?

And I made a little attempt myself.

I needed to share images with people, and I did not want to use the cloud, so I wanted to have an image sharing site. And I found out that a modern image sharing site, like Imgur, is five million lines of code and complexity.

That means it’s exceptionally vulnerable, because those five million lines will have a lot of vulnerabilities.

But then I wrote my own solution, a thing of 1,600 lines of code, which is like thousands of times less than the competition.

And it works. It’s very popular. The IEEE picked it up. They even printed it in their paper magazine. I got 100 emails from people saying that it’s so nice that someone wrote a small piece of software that is robust, does not have dependencies, you know how it works.

But the depressing thing is, some of the security people in the field, they thought it was a lovely challenge to audit my 1,600 lines of code. And they were very welcome to do that, of course. And they found three major vulnerabilities in there.

Even though I know what I’m doing. I’m sort of supposed to be good at this stuff. And apparently, I was good at this stuff because I invited them to check it. And they found three major issues. And it makes me happy that you can still make this small, robust code. But it was depressing for me to see that even in 1,600 lines, you can hide three serious security vulnerabilities.

What do you think about 5 million lines? That’s basically insecure forever. So this was a little attempt to fight my way back. And at least many people agreed with me. That’s the most positive thing I can say about that.

But in summary, the systems that support our daily lives are way too complex and fragile. They fail by themselves.

So when a big telco has an outage, it is now always a question: is this a cyber thing, or is it just an incompetence thing? Both could be true.

Maintenance of our technology is moving further and further away from us.

So if you look at the job vacancies at telecommunications companies, they’re not hiring anyone who does anything with radio networks.

Our own skills are wilting. We are no longer able to control our own infrastructure. We need help from around the world to just keep the communications working.

And that is the current situation. But now imagine this in wartime, it’s all terrible.

Why did it happen? Non-technical people have made choices and have optimized for stuff being cheap, or at least hassle-free. And that’s only going to be fixed if we have more technical thinking going on.

But I have no solutions for making that happen.

And with that, I’m afraid I have no more slides to cheer you up, and I want to thank you very much for your attention.

Permalink
May 19, 2024 at 12:24:12 AM GMT+2

Pivot to AI: Pay no attention to the man behind the curtain – Amy Castorhttps://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

  • Artificial intelligence
  • Crypto Currency
  • Enshitification

Pivot to AI: Pay no attention to the man behind the curtain

[Image: HAL 9000]

By Amy Castor and David Gerard, September 12, 2023, for AmyCastor

“all this talk of AI xrisk has the stink of marketing too. Ronald McDonald telling people that he has a bunker in New Zealand because the new burger they’re developing in R&D might be so delicious society will crumble.”

— Chris Martin

Crypto’s being dull again — but thankfully, AI has been dull too. The shine is coming off. So we’re back on the AI beat.

The AI winter will be privatized

Since the buzzword “artificial intelligence” was coined in the 1950s, AI has gone through several boom and bust cycles.

A new technological approach looks interesting and gets a few results. It gets ridiculously hyped up and lands funding. The tech turns out to be not so great, so the funding gets cut. The down cycles are called AI winters.

Past AI booms were funded mainly by the US Department of Defense. But the current AI boom has been almost completely funded by venture capital.

The VCs who spent 2021 and 2022 pouring money into crypto startups are pivoting to AI startups, because people buy the idea that AI will change the world. In the first half of 2023, VCs invested more than $40 billion into AI startups, and $11 billion just in May 2023. This is even as overall VC funding for startups dropped by half in the same period from the year before. [Reuters; Washington Post]

The entire NASDAQ is being propped up by AI. It’s one of the only fields that is still hiring.

In contrast, the DOD only requested $1.8 billion for AI funding in its 2024 budget. [DefenseScoop]

So why are VCs pouring money into AI?

Venture capital is professional gambling. VCs are looking for a liquidity event. One big winner can pay for a lot of failures.

Finding someone to buy a startup you’ve funded takes marketing and hype. The company doing anything useful, or anything that even works, is optional.

What’s the exit plan for AI VCs? Where’s the liquidity event? Do they just hope the startups they fund will do an initial public offering or just get acquired by a tech giant before the market realizes AI is running out of steam?

We’re largely talking about startups whose business model is sending queries to OpenAI.

At least with “Web3,” the VCs would just dump altcoins on retail investors via their very good friends at Coinbase. But with AI, we can’t see an obvious exit strategy beyond finding a greater fool.

Pay no attention to the man behind the curtain

The magical claim of machine learning is that if you give the computer data, the computer will work out the relations in the data all by itself. Amazing!

In practice, everything in machine learning is incredibly hand-tweaked. Before AI can find patterns in data, all that data has to be tagged, and output that might embarrass the company needs to be filtered.

Commercial AI runs on underpaid workers in English-speaking countries in Africa creating new training data and better responses to queries. It’s a painstaking and laborious process that doesn’t get talked about nearly enough.

The workers do individual disconnected actions all day, every day — so-called “tasks” — working for companies like Remotasks, a subsidiary of Scale AI, and doing a huge amount of the work behind OpenAI.

AI doesn’t remove human effort. It just makes it much more alienated.

There’s an obvious hack here. If you are an AI task worker, your goal is to get paid as much as possible without too much effort. So why not use some of the well-known tools for this sort of job? [New York]

Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.

Remember, the important AI use case is getting venture capital funding. Why buy or rent expensive computing when you can just pay people in poor countries to fake it? Many “AI” systems are just a fancier version of the original Mechanical Turk.

Facebook’s M from 2017 was an imitation of Apple’s Siri virtual assistant. The trick was that hard queries would be punted to a human. Over 70% of queries ended up being answered by a human pretending to be the bot. M was shut down a year after launch.

Kaedim is a startup that claims to turn two-dimensional sketches into 3-D models using “machine learning.” The work is actually done entirely by human modelers getting paid $1-$4 per 15-minute job. But then, the founder, Konstantina Psoma, was a Forbes 30 Under 30. [404 Media; Forbes]

The LLM is for spam

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

Supertoys last all summer long

End users don’t actually want AI-based products. Machine learning systems can generate funny text and pictures to show your friends on social media. But even that’s wearing thin — users mostly see LLM output in the form of spam.

LLM writing style and image generator drawing style are now seen as signs of low quality work. You can certainly achieve artistic quality with AI manipulation, as in this music video — but even this just works on its novelty value. [YouTube]

For commercial purposes, the only use case for AI is still to replace quality work with cheap ersatz bot output — in the hope of beating down labor costs.

Even then, the AI just isn’t up to the task.

Microsoft put $10 billion into OpenAI. The Bing search engine added AI chat — and it had almost no effect on user numbers. It turns out that search engine users don’t want weird bot responses full of errors. [ZDNet]

The ChatGPT website’s visitor numbers went down 10% in June 2023. LLM text generators don’t deliver commercial results, and novelty only goes so far. [Washington Post]

After GPT-3 came out, OpenAI took three years to make an updated version. GPT-3.5 was released as a stop-gap in October 2022. Then GPT-4 finally came out in March 2023! But GPT-4 turns out to be eight instances of GPT-3 in a trenchcoat. The technology is running out of steam. [blog post; Twitter, archive]

Working at all will be in the next version

The deeper problem is that many AI systems simply don’t work. The 2022 paper “The fallacy of AI functionality” notes that AI systems are often “constructed haphazardly, deployed indiscriminately, and promoted deceptively.”

Still, machine learning systems do some interesting things, a few of which are even genuinely useful. We asked GitHub and they told us that they encourage their own employees to use the GitHub Copilot AI-based autocomplete system for their own internal coding — with due care and attention. We know of other coders who find Copilot to be far less work than doing the boilerplate by hand.

(Though Google has forbidden its coders from using its AI chatbot, Bard, to generate internal code.) [The Register]

Policy-makers and scholars — not just the media — tend to propagate AI hype. Even if they try to be cautious, they may work in terms of ethics of deployment, and presume that the systems do what they’re claimed to do — when they often just don’t.

Ethical considerations come after you’ve checked basic functionality. Always put functionality first. Does the system work? Way too often, it just doesn’t. Test and measure. [arXiv, PDF, 2022]

AI is the new crypto mining

In 2017, the hot buzzword was “blockchain” — because the price of bitcoin was going up. Struggling businesses would add the word “blockchain” to their name or their mission statement, in the hope their stock price would go up. Long Island Iced Tea became Long Blockchain and saw its shares surge 394%. Shares in biotech company Bioptix doubled in price when it changed its name to Riot Blockchain and pivoted to bitcoin mining. [Bloomberg, 2017, archive; Bloomberg, 2017, archive]

The same is now happening with AI. Only it’s not just the venture capitalists — even the crypto miners are pivoting to AI.

Bitcoin crashed last year and crypto mining is screwed. As far as we can work out, the only business plan was to get foolish investors’ money during the bubble, then go bankrupt.

In mid-2024, the bitcoin mining reward will halve again. So the mining companies are desperate to find other sources of income.

Ethereum moved to proof of stake in September 2022 and told its miners to just bugger off. Ethereum was mined on general-purpose video cards — so miners have a glut of slightly-charred number crunching machinery.

Hive Blockchain in Vancouver is pivoting to AI to repurpose its pile of video cards. It’s also changed its name to Hive Digital Technologies. [Bloomberg, archive; press release]

Marathon Digital claims that “over time you’re going to see that blockchain technologies and AI have a very tight coupling.” No, us neither. Marathon is doubling and tripling down on bitcoin mining — but, buzzwords! [Decrypt]

Nvidia makes the highest-performance video cards. The GPU processors on these cards turn out to be useful for massively parallel computations in general — such as running the calculations needed to train machine learning models. Nvidia is having an excellent year and its market cap is over $1 trillion.

So AI can take over from crypto in yet another way — carbon emissions from running all those video cards.

AI’s massive compute load doesn’t just generate carbon — it uses huge amounts of fresh water for cooling. Microsoft’s water usage went up 34% between 2021 and 2022, and they blame AI computation. ChatGPT uses about 500 mL of water every time you have a conversation with it. [AP]
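Taking the article's ~500 mL figure at face value, the aggregate scale is easy to estimate. The daily conversation count below is an illustrative assumption, not a reported number:

```python
# Back-of-envelope: aggregate cooling water implied by ~500 mL per conversation.
ML_PER_CONVERSATION = 500            # figure cited in the article
CONVERSATIONS_PER_DAY = 10_000_000   # illustrative assumption

liters_per_day = ML_PER_CONVERSATION * CONVERSATIONS_PER_DAY / 1000
olympic_pools = liters_per_day / 2_500_000  # an Olympic pool holds ~2.5 million liters

print(f"{liters_per_day:,.0f} L/day, about {olympic_pools:.0f} Olympic pools")
# → 5,000,000 L/day, about 2 Olympic pools
```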

We don’t yet have a Digiconomist of AI carbon emissions. Go start one.

Permalink
March 28, 2024 at 10:35:34 PM GMT+1

Cybersecurity is brokenhttps://crankysec.com/blog/broken/

  • Cybersecurity
  • Enshitification

Cybersecurity is broken

27 March 2024

It is a well-known fact that we dish out a whole lot of shit talk around these parts. And by "we" I mean me, but that's beside the point. Talking smack about 10-ply LinkedIn vCISOs is, quite honestly, pretty easy and kind of satisfying because some 8 out of 10 times they are stupid as fuck and deserve the heckling. The remaining 2 out of 10 are maybe trying to fight the good fight, and do right by their teams. Maybe. Don't you quote me on that figure. Actually, best you don't quote me at all because there are peeps out there saying things that are much more clever. Take this quote(?) from one Bob Metcalfe (tks, snowcrasher!):

"The Stockings Were Hung by the Chimney with Care"

The ARPA Computer Network is susceptible to security violations for at least
the three following reasons:

(1) Individual sites, used to physical limitations on machine access, have
not yet taken sufficient precautions toward securing their systems
against unauthorized remote use. For example, many people still use
passwords which are easy to guess: their first names, their initials,
their host name spelled backwards, a string of characters which are
easy to type in sequence (e.g. ZXCVBNM).

(2) The TIP allows access to the ARPANET to a much wider audience than
is thought or intended. TIP phone numbers are posted, like those
scribbled hastily on the walls of phone booths and men's rooms. The
TIP requires no user identification before giving service. Thus,
many people, including those who used to spend their time ripping off
Ma Bell, get access to our stockings in a most anonymous way.

(3) There is lingering affection for the challenge of breaking
someone's system. This affection lingers despite the fact that
everyone knows that it's easy to break systems, even easier to
crash them.

All of this would be quite humorous and cause for raucous eye
winking and elbow nudging, if it weren't for the fact that in
recent weeks at least two major serving hosts were crashed
under suspicious circumstances by people who knew what they
were risking; on yet a third system, the system wheel password
was compromised -- by two high school students in Los Angeles
no less.

We suspect that the number of dangerous security violations is
larger than any of us know and is growing. You are advised
not to sit "in hope that Saint Nicholas would soon be there".
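The weak-password patterns Metcalfe lists are mechanical enough that a few lines of code can flag them. A minimal sketch, with the keyboard-run check simplified to the one example the memo gives:

```python
def is_guessable(password: str, first_name: str, initials: str, host: str) -> bool:
    """Flag passwords matching the weak patterns from the 1973 memo."""
    p = password.lower()
    guesses = {
        first_name.lower(),   # their first names
        initials.lower(),     # their initials
        host.lower()[::-1],   # their host name spelled backwards
        "zxcvbnm",            # easy-to-type key run (the memo's example)
    }
    return p in guesses

print(is_guessable("ZXCVBNM", "Ann", "AB", "parc"))  # True
print(is_guessable("crap", "Ann", "AB", "parc"))     # True: "parc" backwards
print(is_guessable("hunter2", "Ann", "AB", "parc"))  # False
```

Fifty years later, real password checkers do the same thing with bigger dictionaries; the heuristics haven't changed much.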

That's from 1973. The dude who invented Ethernet was worried about what we now call cybersecurity fifty fucking years ago. Several wake-up calls happened since then: phreaking peeps exploding the phones, hacker supergroups testifying in front of the US Senate on the topic of cybersecurity, hacker supergroups releasing super easy to use RATs, a cornucopia of malware, including shit made by nation-states, and ransomware attacks that are only profitable because some people just decided that an inefficient distributed database was worth some money. A lot of those issues were only made possible by people's insistence on using a programming language from half a century ago when better options are available. And that's just the technical side of things.

Take, for example, the Pen Test Partners' research on Peloton's API security. The researchers were able to grab a whole bunch of data that was supposed to be private, disclosed the issue to Peloton who, in turn, basically ghosted the researchers until a TechCrunch reporter got involved. Classic case of "we're not sorry we suck at protecting our customers' data, we're just sorry we got caught." I mean, if you need to get fucking TechCrunch involved to be taken seriously, the situation is hopeless.

Absolutely no amount of gentle pleas disguised as executive orders from the White House urging people to use memory-safe languages will solve the problem. CISA, despite all the phenomenal work they do, can't charge people who mishandle data with negligence; critical infrastructure involved or not. And maybe they should.

You see, cybersecurity is broken because of the lack of consequences. It's really that simple. When literally nothing happens when some stupid service gets popped and loses your data they had no business collecting in the first place, this kind of thing will happen over and over and over again. Why the fuck do you need my home address just so I can copy and paste some GIFs? Because you want to sell this data to data brokers, and you know there will be absolutely no negative consequences if you mishandle this data, fucking over the people who keep your business afloat. So, companies big and small fuck things up and we need to clean up the mess and face the consequences. Sounds about right.

Cybersecurity is even more broken when these companies that face zero consequences look at their payroll and think "Wait a fucking minute! Why the hell are we spending six full dollars a year on cybersecurity when we can, I dunno, do nothing at all for free because cybersecurity incidents will not negatively impact our bottomline whatsoever?" That's why you, my cybersecurity and infosec brethren, are getting laid off. That's why you don't have the tools you need. That's why you don't get the training you should. That's why you're overworked. That's why you're stuck as an underpaid individual contributor doing the work of 5 people for $75k/year while your CISO who makes $500k is on LinkedIn all day writing stupid shit about AI.

Cybersecurity is broken because fixing it benefits no one but the regular, unremarkable, salt of the earth folks. And, according to the powers that be, fuck them folks. Fixing it requires strong data protection laws, but passing laws is just not something the overwhelming majority of legislative bodies in the world do. Passing laws that slightly inconvenience multi-billion dollar corporations while improving the lives of the plebes is even more of a tall order. And that's true for a whole lot of things that have nothing to do with cybersecurity, but this is a blog about cybersecurity, so please bear with me.

That's the answer: data protection laws. You get my data for rEaSoNs, and you fuck it up? You should pay a hefty price that cannot be written off as the cost of doing business. You make data brokers illegal, or, at the very least, way less profitable. You do what the payment card industry has been doing for decades: you tell everyone handling your data that they have to follow a very comprehensive set of data protection rules, lest they get fined or cut off entirely. A group of four credit card companies can do that, so I'm sure mighty governments can, too.

But how do we push things in the right direction? Well, that's one of the many topics we discuss in our Discord server (Hey you guys!). Not only are my fellow Crankies inspiring the shit out of me every day, we have bigger plans than just shitposting and commiserating. Turns out that buying a congressperson (sorry: lobbying) is not that expensive, really. We are working on something that we hope will help lift everyone in this industry up. As I once wrote on that very Discord: "When we abstain from using our collective power of influence, we lose by default." Or "you miss 100% of the shots you don't take" or whatever the fuck Gretzky said. We're about 700-strong and planning on doing great things. Come join us because the movement cannot be stopped.

Previous: Pigeons As Far As The Eye Can See

Permalink
March 28, 2024 at 10:32:11 PM GMT+1

Losing the imitation gamehttps://jenniferplusplus.com/losing-the-imitation-game/

  • Artificial intelligence

Losing the imitation game

AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.

If you've been anywhere near major news or social media in the last few months, you've probably heard repeatedly about so-called AI, ChatGPT, and large language models (LLMs). The hype surrounding these topics has been intense. And the rhetoric has been manipulative, to say the least. Proponents have claimed that their models are or soon will be generally intelligent, in the way we mean humans are intelligent. They're not. They've claimed that their AI will eliminate whole categories of jobs. And they've claimed that developing these systems further and faster is both necessary and urgent, justified by science fiction dressed up as arguments for some sort of "safety" that I find to be incoherent.

The outer layer of hype surrounding AI—and LLM chatbots in particular—is that they will become indispensable tools of daily work, and entirely replace people in numerous categories of jobs. These claims have included the fields of medicine, law, and education, among others. I think it's nonsense. They imagine self-teaching classrooms and self-diagnosing fitness gadgets. These things will probably not even work as well as self-driving cars, which is to say: only well enough to be dangerous. Of course, that's not stopping people from pushing these fantasies, anyway. But these fields are not my area of expertise. My expertise is in software engineering. We should know better, but software developers are falling victim to the same kind of AI fantasies.

A computer can never be held accountable. Therefore, a computer must never make a management decision.

While the capabilities are fantasy, the dangers are real. These tools have denied people jobs, housing, and welfare. All erroneously. They have denied people bail and parole, in such a racist way it would be comical if it wasn't real. And the actual function of AI in all of these situations is to obscure liability for the harm these decisions cause.

So-Called AI

Artificial Intelligence is an unhelpful term. It serves as a vehicle for people's invalid assumptions. It hand-waves away an enormous amount of complexity regarding what "intelligence" even is or means. And it encourages people to confuse concepts like cognition, agency, autonomy, sentience, consciousness, and a host of related ideas. However, AI is the vernacular term for this whole concept, so it's the one I'll use. I'll let other people push that boulder; I'm here to push a different one.

Those concepts are not simple ideas, either. Describing them gets into hard questions of psychology, neurology, anthropology, and philosophy. At least. Given that these are domains that the tech field has routinely dismissed as unimportant for decades, maybe it shouldn't be surprising that techies as a group are now completely unprepared to take a critical view of claims about AI.

The Turing Test

Certainly part of how we got here is the Turing test. That is, the pop science reduction of Alan Turing's imitation game. The actual proposal is more substantial. And taking it seriously produces some interesting reading. But the common notion is something like a computer is intelligent if it can reliably pass as human in conversation. I hope seeing it spelled out like that makes it clear how dramatically that overreaches. Still, it's the framework that people have, and it informs our situation. I think the bit that is particularly informative is the focus on natural, conversational language. And also, the deception inherent in the imitation game scenario, but I'll come back to that.

Our understanding of intelligence is a moving target. We only have one meaningful fixed point to work from. We assert that humans are intelligent. Whether anything else is, is not certain. What intelligence itself is, is not certain. Not too long ago, a lot of theory rested on our ability to create and use tools. But then that ability turned out to be not as rare as we thought, and the consensus about the boundaries of intelligence shifted. Lately, it has fallen to our use of abstract language. That brings us back to AI chatbots. We suddenly find ourselves confronted with machines that seem to have a command of the English language that rivals our own. This is unfamiliar territory, and at some level it's reasonable that people will reach for explanations and come up with pop science notions like the Turing test.

Language: any system of formalized symbols, signs, sounds, gestures, or the like used or conceived as a means of communicating thought, emotion, etc.

Language Models

ChatGPT and the like are powered by large language models. Linguistics is certainly an interesting field, and we can learn a lot about ourselves and each other by studying it. But language itself is probably less than you think it is. Language is not comprehension, for example. It's not feeling, or intent, or awareness. It's just a system for communication. Our common lived experience gives us lots of reason to believe that anything which can respond to and produce common language in a sensible-enough way must be intelligent. But that's because only other people have ever been able to do that before. It's actually an incredible leap to assume, based on nothing else, that a machine which does the same thing is also intelligent. It's much more reasonable to question whether the link we assume exists between language and intelligence actually exists. Certainly, we should wonder if the two are as tightly coupled as we thought.

That coupling seems even more improbable when you consider what a language model does, and—more importantly—doesn't consist of. A language model is a statistical model of probability relationships between linguistic tokens. It's not quite this simple, but those tokens can be thought of as words. They might also be multi-word constructs, like names or idioms. You might find "raining cats and dogs" in a large language model, for instance. But you also might not. The model might reproduce that idiom based on probability factors instead. The relationships between these tokens span a large number of parameters. In fact, that's much of what's being referenced when we call a model large. Those parameters represent grammar rules, stylistic patterns, and literally millions of other things.

What those parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is overstating the case, because it doesn't know things. It models how those tokens are used. When the model works on a token like "Jennifer", there are parameters and classifications that capture what we would recognize as things like the fact that it's a name, it has a degree of formality, it's feminine coded, it's common, and so on. But the model doesn't know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.
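The "statistical model of probability relationships between tokens" can be made concrete with a toy bigram model: the same idea as an LLM, shrunk to a two-token context and a handful of counts. This is a deliberately tiny illustration, not how production models are built:

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each token, which token follows it and how often."""
    tokens = corpus.split()
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def most_likely_next(model: dict, token: str) -> str:
    """Pick the single most probable continuation. No knowledge, just counts."""
    return model[token].most_common(1)[0][0]

model = train("it is raining cats and dogs but mostly it is raining cats")
print(most_likely_next(model, "raining"))  # → cats
```

The model reproduces "raining cats" purely because the counts say so; nothing in it represents weather, cats, or meaning, which is exactly the point being made here.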

Mental Models

So, a language model can reproduce patterns of language. And there's no particular reason it would need to be constrained to natural, conversational language, either. Anything that's included in the set of training data is fair game. And it turns out that there's been a lot of digital ink spent on writing software and talking about writing software. Which means those linguistic patterns and relationships can be captured and modeled just like any other. And sure, there are some programming tasks where just a probabilistic assembly of linguistic tokens will produce a result you want. If you prompt ChatGPT to write a python function that fetches a file from S3 and records something about it in DynamoDB, I would bet that it just does, and that the result basically works. But then, if you prompt ChatGPT to write an authorization rule for a new role in your application's proprietary RBAC system, I bet that it again just does, and that the result is useless, or worse.

Programming as Theory Building

Non-trivial software changes over time. The requirements evolve, flaws need to be corrected, the world itself changes and violates assumptions we made in the past, or it just takes longer than one working session to finish. And all the while, that software is running in the real world. All of the design choices taken and not taken throughout development; all of the tradeoffs; all of the assumptions; all of the expected and unexpected situations the software encounters form a hugely complex system that includes both the software itself and the people building it. And that system is continuously changing.

The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.

To circle back to AI like ChatGPT, recall what it actually does and doesn't do. It doesn't know things. It doesn't learn, or understand, or reason about things. What it does is probabilistically generate text in response to a prompt. That can work well enough if the context you need to describe the goal is so simple that you can write it down and include it with the prompt. But that's a very small class of essentially trivial problems. What's worse is there's no clear boundary between software development problems that are trivial enough for an LLM to be helpful vs being unhelpful. The LLM doesn't know the difference, either. In fact, the LLM doesn't know the difference between being tasked to write javascript or a haiku, beyond the different parameters each prompt would activate. And it will readily do a bad job of responding to either prompt, with no notion that there even is such a thing as a good or bad response.

Software development is complex, for any non-trivial project. But complexity is hard. Overwhelmingly, when we in the software field talk about developing software, we've dealt with that complexity by ignoring it. We write code samples that fit in a tweet. We reduce interviews to trivia challenges about algorithmic minutia. When we're feeling really ambitious, we break out the todo app. These are contrivances that we make to collapse technical discussions into an amount of context that we can share in the few minutes we have available. But there seem to be a lot of people who either don't understand that or choose to ignore it. They frame the entire process of software development as being equivalent to writing the toy problems and code samples we use among general audiences.

Automating the Easy Part

The intersection of AI hype with that elision of complexity seems to have produced a kind of AI booster fanboy, and they're making personal brands out of convincing people to use AI to automate programming. This is an incredibly bad idea. The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do. Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.

[A programmer's] education has to emphasize the exercise of theory building, side by side with the acquisition of knowledge of data processing and notations.

But it's worse than AI being merely inadequate for software development. Developing that mental model requires learning about the system. We do that by exploring it. We have to interact with it. We manipulate and change the system, then observe how it responds. We do that by performing the easy, simple programming tasks. Delegating that learning work to machines is the tech equivalent of eating our seed corn. That holds true beyond the scope of any team, or project, or even company. Building those mental models is itself a skill that has to be learned. We do that by doing it; there's no other way. As people, and as a profession, we need the early career jobs so that we can learn how to do the later career ones. Giving those learning opportunities to computers instead of people is profoundly myopic.

Imitation Game

If this is the first time you're hearing or reading these sentiments, that's not too surprising. The marketing hype surrounding AI in recent months has been intense, pervasive, and deceptive. AI is usually cast as being hyper competent, and superhuman. To hear the capitalists who are developing it, AI is powerful, mysterious, dangerous, and inevitable. In reality, it's almost none of those things. I'll grant that AI can be dangerous, but not for the reasons they claim. AI is complicated and misunderstood, and this is by design. They cloak it in rhetoric that's reminiscent of the development of atomic weapons, and they literally treat the research like an arms race.

I'm sure there are many reasons they do this. But one of the effects it has is to obscure the very mundane, serious, and real harms that their AI models are currently perpetuating. Moderating the output of these models depends on armies of low paid and precariously employed human reviewers, mostly in Kenya. They're subjected to the raw, unfiltered linguistic sewage that is the result of training a language model on uncurated text found on the public internet. If ChatGPT doesn't wantonly repeat the very worst of the things you can find on reddit, 4chan, or kiwi farms, that is because it's being dumped on Kenyan gig workers instead.

That's all to say nothing of the violations of intellectual property and basic consent that were required to train the models in the first place. The scale of the theft and exploitation required to build the data sets these models train with is almost inconceivable. And the energy consumption and e-waste produced by these systems is staggering.

All of this is done to automate the creation of writing or media that is designed to deceive people. It's intended to seem like people, or like work done by people. The deception, from both the creators and the AI models themselves, is pervasive. There may be real, productive uses for these kinds of tools. There may be ways to build and deploy them ethically and sustainably. But that's not the situation with the instances we have. AI, as it's been built today, is a tool to sell out our collective futures in order to enrich already wealthy people. They like to frame it as being akin to nuclear science. But we should really see it as being more like fossil fuels.

Permalink
March 5, 2024 at 11:13:32 PM GMT+1

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet - ABC Newshttps://www.abc.net.au/news/science/2024-02-28/twitter-x-fighting-bot-problem-as-ai-spam-floods-the-internet/103498070

  • Social Network
  • Artificial intelligence
  • Societal Collapse

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet

ABC Science / By technology reporter James Purtill

Parts of the web are now dominated by bots and junk websites designed to go unread by humans.

One morning in January this year, marine scientist Terry Hughes opened X (formerly Twitter) and searched for tweets about the Great Barrier Reef.

"I keep an eye on what's being tweeted about the reef every day," Professor Hughes, a leading coral researcher at James Cook University, said.

What he found that day surprised and confused him; hundreds of bot accounts tweeting the same strange message with slightly different wording.

"Wow, I had no idea that agricultural runoff could have such a devastating impact on the Great Barrier Reef," one account, which otherwise spruiked cryptocurrencies, tweeted.

Another crypto bot wrote: "Wow, it's disheartening to hear about the water pollution challenges Australia faces."

And so on. Hundreds of crypto accounts tweeting about agricultural runoff.

A month later, it happened again. This time, bots were tweeting about "marine debris" threatening the Great Barrier Reef.

What was going on?

When Professor Hughes tweeted what he'd found, some saw a disinformation conspiracy, an attempt to deflect attention from climate change.

The likely answer, however, is more mundane, but also more far-reaching in its implications.

More than a year since Elon Musk bought X with promises to get rid of the bots, the problem is worse than ever, experts say.

And this is one example of a broader problem affecting online spaces.

The internet is filling up with "zombie content" designed to game algorithms and scam humans.

It's becoming a place where bots talk to bots, and search engines crawl a lonely expanse of pages written by artificial intelligence (AI).

Junk websites clog up Google search results. Amazon is awash with nonsense e-books. YouTube has a spam problem.

And this is just a trickle in advance of what's been called the "great AI flood".

Bots liking bots, talking to other bots

But first, let's get back to those reef-tweetin' bots.

Timothy Graham, an expert on X bot networks at the Queensland University of Technology, ran the tweets through a series of bot and AI detectors.

Dr Graham found 100 per cent of the text was AI-generated.

"Overall, it appears to be a crypto bot network using AI to generate its content," he said.

"I suspect that at this stage it's just trying to recruit followers and write content that will age the fake accounts long enough to sell them or use them for another purpose."

That is, the bots probably weren't being directed to tweet about the reef in order to sway public opinion.

Dr Graham suspects these particular bots probably have no human oversight, but are carrying out automated routines intended to out-fox the bot-detection algorithms.

Searching for meaning in their babble was often pointless, he said.

"[Professor Hughes] is trying to interpret it and is quite right to try and make sense of it, but it just chews up attention, and the more engagement they get, the more they are rewarded."

The cacophony of bot-talk degrades the quality of online conversations. They interrupt the humans and waste their time.

"Here's someone who is the foremost research scientist in this space, spending their time trying to work out the modus operandi of these accounts."

In this case, the bots were replying to the tweet of another bot, which, in turn, replied to the tweets of other bots, and so on.

One fake bot account was stacked on top of the other, Dr Graham said.

"It's AI bots all the way down."

How bad is X's bot problem?

In January, a ChatGPT glitch appeared to shine a light on X's bot problem.

For a brief time, some X accounts posted ChatGPT's generic response to requests that it deems outside of its content policy, exposing them as bots that use ChatGPT to generate content.

Users posted videos showing scrolling feeds with numerous accounts stating "I'm sorry, but I cannot provide a response to your request as it goes against OpenAI's content policy."
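A leak like this makes bot-spotting almost trivial: any account that posts the canned refusal verbatim has exposed its pipeline. A minimal sketch of that idea, with invented account handles and tweets (real detection at X's scale would of course require platform data access):

```python
# Substring that appeared in the leaked canned replies (lowercased for matching).
REFUSAL = "i cannot provide a response to your request as it goes against"

# Hypothetical example data: (handle, post text) pairs, not real X accounts.
tweets = [
    ("@cryptoFan42", "Wow, agricultural runoff is devastating the reef!"),
    ("@dealbot_99", "I'm sorry, but I cannot provide a response to your request "
                    "as it goes against OpenAI's content policy."),
    ("@terry_h", "New coral bleaching survey out today."),
]

def flag_bots(posts):
    """Return handles whose posts contain the leaked refusal boilerplate."""
    return [handle for handle, text in posts if REFUSAL in text.lower()]

print(flag_bots(tweets))  # prints ['@dealbot_99']
```

The fragility of the method is also the article's point: once the glitch was patched, the canned replies disappeared and the bots became invisible again.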

"Twitter is a ghost town," one user wrote.

But the true scale of X's bot problem is difficult for outsiders to estimate.

Shortly after Mr Musk gained control of X while complaining about bots, X shut down free access to the programming interface that allowed researchers to study this problem.

That left researchers with two options: pay X for access to its data or find another way to peek inside.

Towards the end of last year, Dr Graham and his colleagues at QUT paid X $7,800 from a grant fund to analyse 1 million tweets surrounding the first Republican primary debate.

They found the bot problem was worse than ever, Dr Graham said at the time.

Later studies support this conclusion. Over three days in February, cybersecurity firm CHEQ tracked the proportion of bot traffic from X to its clients' websites.

It found three-quarters of traffic from X was fake, compared to less than 3 per cent of traffic from each of TikTok, Facebook and Instagram.

"Terry Hughes' experience is an example of what's going on on the platform," Dr Graham said.

"One in 10 likes are from a porn bot, anecdotally."

The rise of a bot-making industry

So what's the point of all these bots? What are they doing?

Crypto bots drive up demand for certain coins, porn bots get users to pay for porn websites, disinformation bots peddle fake news, astroturfing bots give the impression of public support, and so on.

Some bots exist purely to increase the follower counts and engagement statistics of paying customers.

A sign of the scale of X's bot problem is the thriving industry in bot-making.

Bot makers from around the world advertise their services on freelancer websites.

Awais Yousaf, a computer scientist in Pakistan, sells "ChatGPT Twitter bots" for $30 to $500, depending on their complexity.

In an interview with the ABC, the 27-year-old from Gujranwala said he could make a "fully fledged" bot that could "like comments on your behalf, make comments, reply to DMs, or even make engaging content according to your specification".

Mr Yousaf's career tracks the rise of the bot-making economy and successive cycles of internet hype.

Having graduated from university five years ago, he joined Pakistan's growing community of IT freelancers from "very poor backgrounds".

Many of the first customers wanted bots to promote cryptocurrencies, which were booming in popularity at the time.

"Then came the NFT thing," he said.

A few years ago he heard about OpenAI's GPT3 language model and took a three-month break to learn about AI.

"Now, almost 90 per cent of the bots I do currently are related to AI in one way or another.

"It can be as simple as people posting AI posts regarding fitness, regarding motivational ideas, or even cryptocurrency predictions."

In five years he's made 120 Twitter bots.

Asked about Mr Musk's promise to "defeat the spam bots," Mr Yousaf smiled.

"It's hard to remove Twitter bots from Twitter because Twitter is mostly bot."

AI-generated spam sites may overwhelm search engines

X's bot problem may be worse than other major platforms, but it's not alone.

A growing "deluge" of AI content is flooding platforms that were "never designed for a world where machines can talk with people convincingly", Dr Graham said.

"It's like you're running a farm and had never heard of a wolf before and then suddenly you have new predators on the scene.

"The platforms have no infrastructure in place. The gates are open."

The past few months have seen several examples of this.

Companies are using AI to rewrite other media outlets' stories, including the ABC's, to then publish them on the company's competing news websites.

A company called Byword claims it stole 3.6 million in "total traffic" from a competitor by copying their site and rewriting 1,800 articles using AI.

"Obituary pirates" are using AI to create YouTube videos of people summarising the obituaries of strangers, sometimes fabricating details about their deaths, in order to capture search traffic.

Authors are reporting what appear to be AI-generated imitations and summaries of their books on Amazon.

Google's search results are getting worse due to spam sites, according to a recent pre-print study by German researchers.

The researchers studied search results for thousands of product-review terms across Google, Bing and DuckDuckGo over the course of a year.

They found that higher-ranked pages tended to have lower text quality but were better designed to game the search ranking algorithm.

"Search engines seem to lose the cat-and-mouse game that is SEO spam," they wrote in the study.

Co-author Matti Wiegman from Bauhaus University, Weimar, said this rankings war was likely to get much worse with the advent of AI-generated spam.

"What was previously low-quality content is now very difficult to distinguish from high-quality content," he said.

"As a result, it might become difficult to distinguish between authentic and trustworthy content that is useful and content that is not."

He added that the long-term effects of AI-generated content on search engines were difficult to judge.

AI-generated content could make search more useful, he said.

"One possible direction is that generated content will become better than the low-quality human-made content that dominates some genres in web search, in which case the search utility will increase."

Or the opposite will happen. AI-generated content will overwhelm "vulnerable spaces" such as search engines and "broadcasting-style" social media platforms like X.

In their place, people may turn to "walled gardens" and specialised forums with smaller numbers of human-only members.

Platforms prepare for coming flood

In response to this emerging problem, platforms are trialling different strategies.

Meta recently announced it was building tools to detect and label AI-generated images posted on its Facebook, Instagram and Threads services.

Amazon has limited authors to uploading a maximum of three books to its store each day, although authors say that hasn't solved the problem.

X is trialling a "Not a Bot" program in some countries where it charges new users $1 per year for basic features.

This program operates alongside X's verification system, where users pay $8 per month to have their identity checked and receive a blue tick.

But it appears the bot-makers have found a way around this.

All the reef-tweeting crypto bots Professor Hughes found were verified accounts.

"It's clutter on the platform that's not necessary. You'd wish they'd clean it up," the coral scientist said.

"It wastes everyone's time."

Permalink
March 5, 2024 at 11:06:38 PM GMT+1

Un appel à démanteler l’intelligence artificiellehttps://polaris.imag.fr/romain.couillet/docs/articles/IA_dellusion.pdf

  • Artificial intelligence
  • Societal Collapse
  • Technopaganism
  • Technosolutionism

A call to dismantle artificial intelligence

Romain Couillet, 22 July 2022

As a professor of computer science and, until recently, a researcher in applied mathematics for artificial intelligence, I was recently asked to sit on the defence jury for the final-year project of a master's student in computer science at Université Grenoble-Alpes.

The student motivated his project by the need to address the following twofold problem: on the one hand, the company hosting his internship could no longer recruit experts in electronic circuit design; on the other, the scarcity of metals imposes growing constraints on the size (and therefore the quantity of material required) of those same circuits.

Faced with these challenges, the company proposed developing an artificial intelligence algorithm capable of filling in for the lost expertise (and potentially doing better).
Without going into the conceptual details of the student's study, it became clear fairly quickly during the presentation that the proposed approach could not work, and that it was in fact quite presumptuous to imagine that an algorithm could perform the desired task.

The outcome of the first four months of the internship was therefore unsurprising: as it stood, from the student's point of view, the method did not yet work, but the work was promising. An altogether classic situation that I have been encountering more and more often in recent years. But the salient point of this anecdote lies elsewhere: during the jury's question session, I asked the student whether, after these first four months, clearly fruitless whatever he might say, he considered the hypothesis that artificial intelligence might simply be unable to solve his problem.

Despite the eminently rhetorical nature of the question, his answer was extremely revealing: "no, I'm sure it will work." That says it all. In my view, this statement alone highlights the most harmful danger of artificial intelligence: the religious, blinding character it has acquired in barely ten years.

Foreword: a palaeosophical detour

In her work on Palaeosophy, Catherine Reine Thomas invites us to rethink Western cosmology (the one that gave rise to the first agricultural civilisations ten thousand years ago and became the society in which we Europeans now live) as an imbalance in the relationship between "Life" and "Technique".

One of the singularities of the genus Homo, and of Homo sapiens in particular, compared with other animals is its capacity and desire to develop tools and know-how. Together these tools generate a spectrum of technologies that feeds a non-living but very real and dynamic entity that Thomas calls Technique.

The animality of Homo sapiens would nonetheless require, just as for other animals, retaining control over its environment, a power to act (to borrow Spinoza's terms) that Thomas calls Life.

The entities Technique and Life thus enter into an inner competition whose equilibrium is maintained by artistic activity: Homo sapiens is not the slave of a technology as long as the tool or technique remains anchored in the field of Life. Drawings and sculptures (from the earliest cave paintings and decorative objects onward) are so many living expressions of techniques and know-how used elsewhere, though not exclusively, for the needs of tools.

This stable duality between Life and Technique would ensure a healthy, serene link between Homo sapiens and its environment. In his ethnography of the Achuar people of Amazonia, who maintain an animist relationship with their milieu [^1], Descola reports that the Achuar work only four to five hours a day (in the fields, hunting, in crafts), devoting most of their time to songs, the making of ornaments, spiritual practices and other artistic activities.

According to Thomas, Technique, seen as a non-living entity in symbiosis with the living Homo sapiens, can become parasitic when it overtakes Life: that is, when artistic control by Homo sapiens disappears in favour of an uncontrolled development of technologies. Technique then feeds on the metabolic energy of Homo sapiens and becomes autonomous, like the robots of Isaac Asimov's dystopian universe, barely held in check by the three laws of robotics.

This palaeosophical angle through the Life-Technique duality is interesting in several respects. First, it rejects the fatalism, widely conveyed by the best-sellers Sapiens by Harari and Collapse by Diamond, of a predatory human species doomed to self-destruction. The human species would instead be under the yoke of Technique, which cultivates Homo sapiens the way termites cultivate the fungi that digest wood cellulose for them.

It also makes it possible to explain the inexplicable: why does Homo sapiens, despite the scientific evidence, the existing technological solutions (permaculture, low-tech tools, commons) and the available neuropsychological and social knowledge (a broken link with the living world, the benefits of ecopsychological practices, evidence of a mutual aid lost but engraved in our DNA), fail to rid itself of its deadly technologies and of its aggressive, colonial and extractivist behaviour towards the living world? The palaeosophical analysis also resolves the locked triangle of responsibilities between citizen, government and business, in which citizens blame government inaction and corporate venality, companies claim they have no choice but to comply with government constraints and bear no responsibility for citizens' compulsive needs, and governments defend their actions as consequences of the citizens' vote and of economic pressure from companies.

In short, everyone is responsible and no one is. Here Catherine Reine Thomas would surely argue that Technique, ignored in this triptych, in fact bears the weight of the chain of responsibilities: it subjugates the citizen in the fulfilment of his technical needs, sustains the company, whose only reason for being is to keep feeding the technological machine, and neutralises the government in its need for economic growth through technology.

It is hard not to see here a terrible analogy between Technique become the parasite of a humanity on the edge of chaos and Ophiocordyceps unilateralis, the fungus that penetrates the bodies of ants, drives them to climb as high as possible along the stem of a fleshy plant, then makes them clamp on powerfully with their mandibles and be devoured from within by the fungus (which can then develop in optimal conditions and spread its spores efficiently).

For the accelerated development of tools and technologies has broken the Technique-Life equilibrium, with destructive consequences on exponential trajectories: intensive agriculture whose tools and resources now lie beyond farmers' control (unrepairable machines, levels of debt that impose a productivity sustained by synthetic fertilisers and pesticides); demands on and conflicts over resources (untenable population growth, wars, colonisations, genocides and ecocides); a broken link to the living world (urbanisation and soil artificialisation, mineral extractivism, cybernetisation, naturalist ontology [^2]); physical and psychological dehumanisation (diseases of civilisation, epidemics, loss of meaning, neuropsychological disorders, depression, identity and social fractures).

The Technique fungus devours our humanity from within, through the inexorable extinction of the whole living world. Here we find the many breaches of the conviviality of tools, in Ivan Illich's terminology [^3]: beyond a certain threshold of complexity, the tool escapes human control and turns from a technical means into an end in itself. In the worst case this end in itself becomes what Illich calls a radical monopoly, which reshapes society around the tool: it is, for example, no longer possible to live without trucks to transport food, without computing to manage logistics chains or financial transfers, without vaccines to cushion the consequences of overpopulation, and so on.

Technique has thus become a religion, lubricated by the fetish of money, imposing a techno-solutionist creed on some eight billion Homo sapiens deprived of any capacity to control technologies (absolutely no one can build or repair any modern digital technology alone), many of whom have lost the balancing engine of Life (loss of meaning for Westerners, colonial subjection for the inhabitants of the countries of the South [^4]).

Failing to master the tool, Homo sapiens, now dependent on technologies and on the fossil energy that feeds them (as Jean-Baptiste Fressoz explains in L'événement Anthropocène, we can no longer live without oil), religiously devotes itself to the maintenance, upkeep and development of a technical system that is paradoxically absent from political debate (the usefulness of technologies is rarely questioned) even though it now weighs five times as much as the living world [^5].

The details of this short palaeosophical analysis are certainly somewhat more complex and deserve a moment's pause. Modern technical production is carried out by engineers and researchers who are, for the most part, perfectly anchored in the Technique-Life equilibrium of the passionate tinkerer of equations or the insatiable handler of test tubes.

But both live, just like the marketing expert or the company director, in so many ivory towers impervious to complex systemic considerations (will the tool whose development I play a small part in end up in a product potentially harmful to society?), considerations it is supposedly wise to ignore, since the factual scientific work of the technical expert confers neither the legitimacy nor even the access needed for such reflexive prerogatives. [^6]

This is how the researchers in my own laboratory, people whose extreme mathematical intelligence and entire careers I admire, find themselves unable to break with a field whose harmfulness they know and acknowledge. To recover a semblance of meaning, we paradoxically invoke our illegitimacy or inability to address environmental questions ("I'd rather leave that to the experts", as if such experts really existed) or the fatalism of a locked-in system ("our team works in this field; we can't just do whatever we like").

This feeling of imprisonment leads to genuinely delusional paradoxes, as with researchers who take refuge in research deliberately pushed to a theoretical extreme that guarantees it will never feed the deadly industry. Ultimately, modern society secures the domination of Technique over Life paradoxically through the work of a minority elite who do manage, sometimes despite strong cognitive dissonance, to maintain their own Technique-Life balance, sustained by the flame of the Spinozist power to act [^7] (and/or by a fetishistic attraction to selfish domination and money [^8]).

The religion of artificial intelligence

The particular case of artificial intelligence illustrates my point perfectly. Following a few magical feats in computer vision and natural language processing, together with the victory, until then considered impossible, of the AlphaGo algorithm over the world Go champion, AI, and more precisely deep neural networks, are now seen as an Eldorado, a Swiss army knife able to solve every problem on which humans break their teeth or that they lack the computational or cognitive capacity to address.

But since no mathematical theory can crack the mysteries of deep neural networks [^9], produced by hours of purely computational and highly random processing (two runs of the same algorithm will yield two utterly distinct neural networks), the technical tool "AI" escapes human control, including expert control. This is how situations as absurd as the one described at the start of this article arise, and they are becoming widespread: students trained in artificial intelligence have no control over their own tool.

The teaching of AI, moreover, tends to reinforce the illusion, the magic of an omnipotent machine that requires little effort (no deep mathematical theory is needed, and flexible software interfaces such as TensorFlow allow one to become autonomous within a few hours).

The citizen-government-industry triptych aggravates the problem at its own expense: to keep up with the GAFAM, the French government recently launched a "Compétences et Métiers d'Avenir" call for a massive expansion of AI teaching, with the goal of doubling the training on offer to post-secondary students. While it is not incoherent to want to adapt university curricula to newly acquired knowledge, it is important to recall that doubling the AI offering arithmetically implies the disappearance of other programmes, those henceforth deemed obsolete or less useful.

It is in this context that electronic circuit designers have disappeared and that the "Swiss-army-knife AI" crutch mentioned at the start of this article tries, very clumsily, to answer an ill-posed problem [^10]. AI thus indirectly drains elementary knowledge and know-how, imposing in effect a radical monopoly in which the digital tool carrying AI algorithms takes over decisions previously made by humans.

And this without any possible human control, whether that human is ignorant, a novice or even a computing expert. The choice made by populations, governments and companies to invest massively in AI is the choice of the blue pill offered to Neo in the film The Matrix: a point of no return into the illusion of an idealised world controlled by machines able to solve everything, including the so-called climate crisis (which, to really grasp the stakes, would more wisely be called mass extinction or collapse of the living world).

AI also adds a new stone to the edifice of the stupefaction of the population, denounced by Stiegler in La télécratie contre la Démocratie and by Brighelli in La fabrique du crétin, allowing everyone (in Western society at least) to live in a world apparently free of constraints, of any need to know or even understand, and free of conflicts, a world whose resources are always assumed infinite by the neoclassical economics [^11] on which all Western governments rely. The greatest danger of AI then becomes obvious: by reinforcing the techno-solutionist promise to a population stripped of its knowledge and know-how, a deprivation compounded by total dependence on machines and on uncontrollable automatic decisions, AI further masks the reality of a system of technologies that is unsustainable and on the brink of collapse.

Ces technologies, que Monnin dans Héritage et fermeture qualifie de zombies, car elles sont en apparence vivantes (elles fonctionnent aujourd’hui et donnent l’impression de toujours pouvoir se développer demain) mais effectivement mortes (du fait de la déplétion matérielle, des pics de ressources énergétiques prochains, mais aussi de leur impact environnemental intenable et qui imposera des régulations fortes sur le moyen terme), sont amenées à disparaitre.

Dans le cas de l’IA, c’est par l’intermédiaire de l’impossible renouvellement de l’infrastructure numérique et de l’insoutenabilité de son coût énergétique que sa disparition s’opèrera assurément. En l’absence d’anticipation, l’ensemble des machines automatisées qui se substituent aujourd’hui à un savoir-faire initialement humain ne pourront plus être alimentées ou réparées, mettant à mal certaines activités essentielles. C’est le cas notamment des machines agricoles que l’industrie 4.0 promet de rendre plus “intelligentes”.

Atelier Paysan, dans son manifeste Reprendre la terre aux machines, alerte précisément sur ce point : le nombre de paysan·nes qui nourrissent une population grandissante n’a de cesse de diminuer 12 au profit de machines automatisées (tracteurs sans conducteur, drones, machines de manipulation des plants), détruisant les savoir-faire humains (les paysan·nes ne sont plus en contact avec la terre) et aggravant la capacité de résilience consécutive aux chocs pétroliers et énergétiques à venir. L’exemple de Cuba, documenté dans la vidéo Comment Cuba survécut en 1990 sans pétrole, et dont la population a dû transiter essentiellement du jour au lendemain d’une vie “à l’américaine” à un retour à la terre non préparé parce que dépourvu des communs élémentaires, permet d’anticiper, dans une ampleur qui sera vraisemblablement incomparable, les conséquences des pénuries énergétiques et matérielles mondiales à venir, dans un écosystème qui plus est en extinction accélérée.

Dismantling AI

In Héritage et fermeture, Monnin proposes a theory of the dismantling of zombie technologies as a necessary articulation of resilience for a society constrained in energy and resources, and forced to adapt to the many systemic consequences of the ongoing ecocide (starting with adaptation to global warming). Besides being a distinctly anti-resilience technology with particularly harmful societal consequences, artificial intelligence is a perfect candidate for technological dismantling and for rebalancing the Life-Technics scale.

Indeed, as the upper and still recent layer of the digital fortress, the prospect of abandoning artificial intelligence as a tool is both a credible imaginary (until recently we lived without it) and an achievable objective (it is "only" a matter of wiping the slate clean of ten years of research and industry in the field). In the analysis carried out by the lownum group [^13] on the so-called low-technicisation of digital technology, a first step towards dismantling consists in identifying the necessary (or not) character of a digital tool and weighing it against the necessary (or not) character of the service it fulfils. In particular, because of its criticality and the resilience danger it induces, any digital tool necessary to a necessary (or even vital) need must be dismantled as quickly as possible.

This is the case, for example, of Industry 4.0 in agriculture: bound to dispossess farmers, turned digital technicians, of their very mastery of farming, it must be rapidly decommissioned. The Cuban example mentioned above likewise shows that intensifying the production of artificial inputs (which requires intensive use of gas) to replace natural nitrates, phosphates and potassium (human and other animal excreta, biomass composts) will cause mass famines in the event of any interruption of the energy supply. These artificial inputs, whose ecological consequences are in any case disastrous, create, just like agriculture 4.0, a high resilience risk.

The zombie technologies of artificial intelligence tend to seize industrial sectors linked to vital needs, or at least to those ensuring societal balance. This is notably the case of goods transport by autonomous vehicles or drones. For obvious economic reasons, a massive roll-out of such a logistics transition would impose a radical monopoly over supply chains, food chains in particular. Any sudden break in those chains (a shortage of rare metals, a cyber-attack) could produce famines if the fleet of non-autonomous vehicles and the number of drivers still available (the expertise being quickly lost) no longer cover distribution needs. Fortunately, as a still immature technology, autonomous driving remains an easily dismantled tool.

The core of artificial-intelligence tooling has, however, so far not been dedicated to indispensable needs. The vast majority of AI applications (by volume of use) concern automatic email sorting, search engines, machine translation, face and object recognition, and so on. For these non-essential uses, the societal relevance of the need should be weighed against the energy intensity and environmental footprint induced by the tool's entire life cycle (research, development, marketing, use, end of life).

As it happens, the research, development and use phases are far from neutral. Energy intensity, the easiest factor to evaluate and one that is then easy to convert into a carbon-equivalent balance, has recently been documented [^14]: the research and development of deep neural networks as a whole comes with very high electricity consumption, exemplified by the tens of thousands of GPU cores needed to train the most advanced deep networks for a month, at an estimated cost of several hundred tonnes of CO2 equivalent (recall that the total annual footprint of a French person is 10 tonnes of CO2 equivalent).
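
The order of magnitude cited here can be checked with a back-of-envelope calculation. All numbers below (GPU count, per-GPU power draw, training duration, grid carbon intensity) are illustrative assumptions for the sketch, not figures from the article:

```python
def training_emissions_tco2(n_gpus, watts_per_gpu, days, kg_co2_per_kwh):
    """Rough estimate of training emissions, in tonnes of CO2-equivalent."""
    hours = days * 24
    kwh = n_gpus * watts_per_gpu / 1000 * hours  # total electricity drawn
    return kwh * kg_co2_per_kwh / 1000           # kg -> tonnes

# e.g. 10,000 GPUs at 300 W each, for one month, on a 0.4 kgCO2/kWh grid:
print(training_emissions_tco2(10_000, 300, 30, 0.4))  # 864.0 tonnes
```

With these assumed inputs the estimate lands in the hundreds of tonnes, consistent with the order of magnitude quoted in the text, i.e. dozens of times the annual footprint of one French person.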

Assessing AI's impact through the sole prism of its carbon balance is nevertheless as reductive as it is dangerous (it encourages a headlong rush into new, more "frugal" technologies, prototypical of the oxymoronic "green growth"): the explosive development of AI comes at the price of the frantic production of powerful servers that devour energy and rare metals, sometimes accompanied by the on-site construction of dedicated power plants, and above all it fuels the material production of numerous new products (notably the billions of Internet of Things devices) that permeate essentially every economic sector and manufacture new needs which quickly become necessities. All told, this amounts to annual growth of around 9% for the digital sector as a whole, along with a 100% increase every 3.4 months in the computational intensity required to train deep neural networks [^15]. Faced with the environmental emergency, and with the concurrent need to cut fossil energy consumption (stuck at 85% of the global energy mix for 30 years despite the massive development of so-called renewables) by 7% per year, these explosions in digital, and especially AI, consumption are untenable. In this context, the societally meagre uses of AI are particularly indefensible.
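
The doubling figures above can be annualised to see how steep the trajectory is; this is plain arithmetic on the two rates quoted in the text (doubling every 3.4 months today, versus every two years a decade ago):

```python
def annual_factor(doubling_period_months):
    """Growth multiplier over 12 months for a given doubling period."""
    return 2 ** (12 / doubling_period_months)

print(round(annual_factor(3.4), 1))  # 11.5 -> ~11.5x per year today
print(round(annual_factor(24), 2))   # 1.41 -> ~1.41x per year a decade ago
```

A roughly 11.5-fold yearly increase in compute demand, against a required 7% yearly decline in fossil energy use, is the mismatch the argument turns on.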

The resilience project of human society, which will have to recover a more animist [^16], decolonial and solidarity-based cosmology, must proceed through the gradual dismantling of zombie technologies (unsustainable, colonial, inducing their own needs) and the establishment, or more often the re-establishment, of living technologies (low-tech, meeting real needs and resource constraints, accessible to all).

This article is therefore a call to researchers and engineers in computer science and mathematics to bring down the first sections of this vast and indispensable project by decommissioning investments in artificial intelligence and by recreating a sound base of commons, that is, of shared and resilient knowledge and know-how.

It means bifurcating, refusing, sometimes consciously disobeying, but also justifying, explaining and raising awareness: skills that lie precisely at the heart of scientific work and that make us the actors best equipped to engage in an enthusiastic and constructive transition. Under pressure from our own students, research laboratories and universities are now receptive to this, and some are already ready to welcome, if not encourage, a mindset of "derailment", to borrow the title of Aurélien Barrau's broad appeal.

The first piece of the puzzle of a resilient society can, very concretely, begin with this organised dismantling of the deadly illusions of artificial intelligence, restoring its full place to our animal, social and sensitive intelligence.

[^1]: See for example Par delà nature et culture.
[^2]: The naturalist ontology creates an entity called nature, that undifferentiated set of non-human living beings, pushed to the margins because supposedly devoid of consciousness. From then on, this insensitive nature-object can be tapped, extracted, transformed, destroyed. According to Haudricourt in his article Domestication des animaux, culture des plantes et traitement d'autrui, this worldview opens the way to forms of violence whose consequence is every form of subjugation and destruction of the living.
[^3]: See Illich's La convivialité.
[^4]: On this point, see Malcolm Ferdinand's interesting perspective in his book Une écologie décoloniale.
[^5]: The mineral mass of road infrastructure, buildings, machines, vehicles, etc., together with the waste from these products, all produced within less than a century, is in fact five times the mass of the living world's carbon.
[^6]: These considerations recently opened a heated debate on the distinction between scientist and activist scientist, which it would be more reasonable to rethink as a matter of rehabilitating scientists, seen today as products of technical over-specialisation and polarisation, back to their historical status of systemic philosopher-thinkers (such as Pythagoras, Aristotle, Descartes, or Einstein).
[^7]: It is striking, incidentally, that the developers of high-tech technologies are precisely those who use those same technologies the least (this is notably the case of the many telecommunications researchers I have worked with, myself included, who long lived without a smartphone).
[^8]: This last point is defended by Dany-Robert Dufour in his book Baise ton prochain.
[^9]: It is quite clear to mathematicians that any such hope of understanding the complexity of these networks, based on highly non-linear optimisations with billions of local minima, is utterly out of reach.
[^10]: In this case it is a matter of neither vision nor language and, what is more, a problem that would require far too many training examples for human-validated architectures. We too often forget, in passing, that an artificial-intelligence algorithm works only through millions of examples labelled and fed by humans: the loss of technical expertise, if only to identify the relevant parameters and label them correctly, therefore mechanically makes setting up an AI algorithm impossible.
[^11]: Regarded as a non-science by mathematicians and physicists.
[^12]: From six million in 1950 to 400,000 in 2021, with an implicit government target of bringing that figure down to 200,000, despite the Shift Project's assessment in its Plan de Transformation de l'Économie Française that 500,000 new farmers will be needed by 2030 (Atelier Paysan puts the figure at one million additional farmers).
[^14]: See for example the article by Patterson et al., Carbon emissions and large neural network training, or, at the scale of digital technology as a whole, the work of Freitag et al., The real climate and transformative impact of ICT: A critique of estimates, trends, and regulations. The Shift Project's general-audience report Lean ICT is also a very thorough entry point.
[^15]: Ten years ago, this growth was 100% every two years.
[^16]: That is, one that grants, as science establishes today, a form of subjective interiority (of consciousness) to other living beings, and that places the strong interdependence between species (human and other-than-human) at the centre of geopolitical debate.

Permalink
February 5, 2024 at 8:42:46 PM GMT+1

The end of the social network (https://www.economist.com/leaders/2024/02/01/the-end-of-the-social-network)

  • Social Network
  • Societal Collapse
  • Big Corpo

The end of the social network

The Economist

Editor’s note, February 2nd 2024: This article was updated after Meta announced earnings results for 2023.

Facebook may be turning 20 on February 4th, but it is just as much of a magnet for controversy and cash today as when it was a brash, break-everything teenager. On January 31st Mark Zuckerberg, the social network’s founder, was harangued by American senators over the spread of harmful material. The next day he announced another set of glittering results for Meta, Facebook’s parent company, which is now valued at $1.2trn. Yet even as social media reliably draw vast amounts of attention from addicts and critics alike, they are undergoing a profound but little-noticed transformation.

The weird magic of online social networks was to combine personal interactions with mass communication. Now this amalgam is splitting in two again. Status updates from friends have given way to videos from strangers that resemble a hyperactive TV. Public posting is increasingly migrating to closed groups, rather like email. What Mr Zuckerberg calls the digital “town square” is being rebuilt—and posing problems.

This matters, because social media are how people experience the internet. Facebook itself counts more than 3bn users. Social apps take up nearly half of mobile screen time, which in turn consumes more than a quarter of waking hours. They gobble up 40% more time than they did in 2020, as the world has gone online. As well as being fun, social media are the crucible of online debate and a catapult for political campaigns. In a year when half the world heads to the polls, politicians from Donald Trump to Narendra Modi will be busy online.

The striking feature of the new social media is that they are no longer very social. Inspired by TikTok, apps like Facebook increasingly serve a diet of clips selected by artificial intelligence according to a user’s viewing behaviour, not their social connections. Meanwhile, people are posting less. The share of Americans who say they enjoy documenting their life online has fallen from 40% to 28% since 2020. Debate is moving to closed platforms, such as WhatsApp and Telegram.

The lights have gone out in the town square. Social media have always been opaque, since every feed is different. But TikTok, a Chinese-owned video phenomenon, is a black box to researchers. Twitter, rebranded as X, has published some of its code but tightened access to data about which tweets are seen. Private messaging groups are often fully encrypted.

Some of the consequences of this are welcome. Political campaigners say they have to tone down their messages to win over private groups. A provocative post that attracts “likes” in the X bear pit may alienate the school parents’ WhatsApp group. Posts on messaging apps are ordered chronologically, not by an engagement-maximising algorithm, reducing the incentive to sensationalise. In particular, closed groups may be better for the mental health of teenagers, who struggled when their private lives were dissected in public.

In the hyperactive half of social media, behaviour-based algorithms will bring you posts from beyond your community. Social networks can still act as “echo chambers” of self-reinforcing material. But a feed that takes content from anywhere at least has the potential to spread the best ideas farthest.

Yet this new world of social media brings its own problems. Messaging apps are largely unmoderated. For small groups, that is good: platforms should no more police direct messages than phone companies should monitor calls. In dictatorships encrypted chats save lives. But Telegram’s groups of 200,000 are more like unregulated broadcasts than conversations. Politicians in India have used WhatsApp to spread lies that would surely have been removed from an open network like Facebook.

As people move to closed groups, the open networks left behind are less useful because of the decline in public posting. During the covid-19 pandemic, scientists and doctors contributed to an online debate which contained real insight as well as misinformation. Open-source intelligence flowed when Russia invaded Ukraine. Today those conversations are disappearing or moving to closed channels, slowing the spread of ideas. The people still weighing in on the public networks, meanwhile, are disproportionately male and likely to describe themselves as very left- or right-wing: bores, in plain English.

What’s more, the open-network algorithms driven by users’ behaviour seem primed to spread the spiciest videos. For something to go viral on a social network, people had to choose to share it. Now they endorse it simply by watching, as the algorithm rewards content that attracts the most engagement. Deliberate curation has been replaced by a system that taps straight into the id. Provocateurs like Mr Trump or Nayib Bukele, the favourite in this week’s election in El Salvador, stand to benefit, as do misinformation merchants. Platforms say they are better at weeding out fakes. Taylor Swift, the latest high-profile victim of a deepfake, might disagree.

More urgent even than the rise of fake news is a lack of the real sort. Mr Zuckerberg once said he wanted Facebook to be like a personalised newspaper. But since the network’s pivot to entertainment, news makes up only 3% of what people see on it. Across social media only 19% of adults share news stories weekly, down from 26% in 2018. Publications like BuzzFeed News, which relied on social distribution, have perished. That is their lookout (and ours). But it is everyone’s problem when nearly half of young people say that, just as the platforms decide news is no longer interesting, social media are their main source of news.

WhatsApp and WhatsDown

Some people argue that social networks’ defects can be fixed by better governance, clever coding or a different business model. Such things can help. But the problems raised by the new generation of apps suggest that social media’s flaws are also the result of the trade-offs built into human communication. When platforms swing back towards private groups, they inevitably have less oversight. When people escape their echo chambers, they may well face more extreme content. When users embrace harmless entertainment, they see less news. As social networks wither, platform operators and users should devote less time to the old battles and more to grappling with the new.

Permalink
February 5, 2024 at 8:27:15 PM GMT+1

Bunkers and organic farms: how the ultra-rich are preparing for the future (https://reporterre.net/Bunkers-et-fermes-bio-comment-les-riches-preparent-l-avenir)

  • Big Corpo
  • Tech Guru
  • Ecology
  • Societal Collapse

Bunkers and organic farms: how the ultra-rich are preparing for the future

Faced with the climate crisis, survivalism has become the latest obsession of billionaires, who are planning their exodus and trying to escape the common world.

You are reading the first part of the investigation Le séparatisme climatique des ultrariches (The climate separatism of the ultra-rich).


The ultra-rich are preparing for the catastrophe they themselves helped bring about. As the world sinks little by little into climate chaos, they build bunkers, buy private islands and arrange lives they dream will be sheltered from violence, so as to keep living in luxury, calm and pleasure.

The media revealed the phenomenon a few years ago already. The Covid-19 pandemic and the war in Ukraine accelerated everything. With their private jets, their yachts and their gated communities, the billionaires have decided to secede.

New Zealand and Patagonia have become their hideouts. They are also investing in remote regions such as Alaska, Scandinavia or Kamchatka in Russia. Steve Huffman, the co-founder of the social network Reddit, says it himself: a good half of his fellow Silicon Valley billionaires have bought "apocalypse real estate" around the world. The Anglo-Saxon media even speak of "a rush".

Private jets ready to take off for New Zealand

The latest to make headlines is Facebook founder Mark Zuckerberg. His luxury bunker in Hawaii, named "Koolau Ranch", is expected to cost more than 250 million euros. The 464 m² shelter, protected by an armoured door and an extraordinary video-surveillance system, can supply its own electricity and water thanks to a gigantic cistern. A 27-hectare organic farm is attached to it so that he can live in autarky. On the shore of the Pacific, the billionaire awaits the end of the world listening to the sound of the waves.

Some ten thousand kilometres away, in New Zealand, the enthusiasm of the ultra-rich is such that the authorities are overwhelmed. They have created a special visa for large fortunes wishing to settle there. The Investor Plus visa requires a minimum investment of 6.7 million dollars (6.2 million euros) over three years. Peter Thiel, the billionaire co-founder of PayPal, obtained New Zealand citizenship after spending only twelve days in the country. Numerous villas have been built on land in the middle of nature, with landing strips and farms.

Billionaire Julian Robertson owns a manor overlooking Lake Wakatipu. Fidelity National Financial Inc. chairman Bill Foley owns a property in the Wairarapa region. James Cameron, the director of Titanic, bought a huge manor overlooking Lake Pounui.

The island's appeal lies in its political stability and its geographic remoteness. To reach it in the event of a crisis, five Silicon Valley executives own an enormous private jet hidden in a Nevada hangar. The plane is ready to take off at any moment with their families (and the pilot's!). Its price is estimated at 61 million dollars (56.2 million euros) and its maintenance at one million dollars a year, according to Bloomberg (https://www.bloomberg.com/features/2018-rich-new-zealand-doomsday-preppers/).

"Land grabbing by the ultra-rich"

Journalists also recount how 150-tonne survival bunkers are built in secret in Texas before being shipped to New Zealand. There they are buried three metres deep, away from prying eyes and with the complicity of customs.

For the political scientist Édouard Morena, we are witnessing a new form of "land grabbing by the ultra-rich". In Scotland, 67% of rural land belongs to 0.0025% of the population. "The ultra-rich invest in large estates to cultivate life among their own kind and shield themselves from the rest of the planet, but also because they believe that in the near future this land will constitute formidable wealth," he tells Reporterre.

Argentina's biggest landowners

In Patagonia the situation is striking. Ted Turner, the boss of the television channel CNN, bought himself 5,000 hectares; Amalita Lacroze de Fortabat, "the cement queen", built a villa facing Lake Nahuel Huapi and hired henchmen to block access for local residents. The pampa has bristled with barbed wire.

Billionaire Joe Lewis, who made his fortune with the Hard Rock Cafes, bought an area the size of Paris and even diverted the course of a river. A few kilometres away, Sylvester Stallone bought himself a lake and 14,000 hectares near El Bolsón. George Soros, for his part, bought a 350,000-hectare estate and was even Argentina's largest landowner before being dethroned by the Benetton family, who today own 900,000 hectares near Esquel.

"The ultra-rich dream of being alone"

"The ultra-rich dream of being alone and taking refuge in their castle." In an untranslated best-seller, Survival of the Richest: Escape Fantasies of the Tech Billionaires, the intellectual Douglas Rushkoff analyses the billionaires' attraction to collapse. In the Guardian he recounts his surreal meeting with five technophile billionaires who explained to him their plan for coping with "the event", the euphemism they use for the climate crisis.

Invited to give a talk on new technologies, he found himself answering a myriad of questions from the billionaires: how long should one plan to survive without outside help? Should a shelter have its own air supply? What is the probability of groundwater contamination? The CEO of a brokerage firm who had just finished building an underground bunker also asked him how to maintain authority over his security force.

The intellectual, bewildered, tried to reason with them and to tell them not to invest solely in "ammunition" or "electric fences". But the billionaires persisted. They are considering special combination locks on the food supply that only they would know. Or making their guards wear a kind of disciplinary collar in exchange for their survival. Or building robots to serve as guards and workers, if that technology could be developed "in time".

A libertarian turn

For Douglas Rushkoff, "their extreme wealth and privilege have made them obsessed with insulating themselves from real danger," he writes. "They want to escape from a world they have made unliveable."

This dream of escape is fairly unprecedented in history, even if, during the Cold War, the rich already built themselves bunkers to survive a nuclear winter. In an interview with L'Obs, the philosopher Bruno Latour held that the tipping point came in the 1980s and 1990s. It was from then on that the rich supposedly followed "a process of solitary radicalisation".

In 1997, within Silicon Valley, one techno-libertarian manifesto was on everyone's lips: The Sovereign Individual: How to Survive and Thrive During the Collapse of the Welfare State (Simon & Schuster), by James Dale Davidson and William Rees-Mogg. It predicts that the bankruptcy of the nation-state is inevitable and that only an "aristocracy" and a "cognitive elite" will survive. For the authors, "technological development has become less a story of collective flourishing than of personal survival". Progress for all is no longer the aim. The project is to build a fortress for a minority of the rich.

"The shrewdest members of the ruling classes then understood that globalisation was not ecologically sustainable," says Bruno Latour. "Instead of changing the economic model, they decided to renounce the idea of a common world." In other words, "the ruling class immunised itself against the ecological question by cutting itself off from the world".

If that class so readily embraced collapse theories, it is also because the idea did not threaten it directly. "Collapse is a way of sidestepping a frontal critique of capitalism and class relations; it does not primarily target the rich but invites us all to change our attitudes and practices," says the researcher Édouard Morena. Hence its popularity in high places.

The age of the "misanthropocene"

The separatist ideology has kept growing ever since. The sociologist Nikolaj Schultz even speaks of a "misanthropocene" to describe our era. "The masters of the world are preparing to abandon us like crumbs on a plate," he writes.

This idea of exodus is pushed to its climax in the extraplanetary fantasies of an Elon Musk or a Jeff Bezos, who dream of terraforming Mars or of building giant colonies floating in the solar system.

Since living in space is, for now, not very appealing, the ultra-rich are moving from "planet B" to "plan B". Failing a colonisation of Mars, Mercury or the Moon, they make sure to own, on this planet, a luxurious refuge-bunker able to withstand climate hazards. Après moi, le déluge.

Permalink
January 19, 2024 at 9:04:59 PM GMT+1

The Teen Mental Illness Epidemic is International: The Anglosphere (https://www.afterbabel.com/p/international-mental-illness-part-one)

  • Social Network
  • Societal Collapse

It is now widely accepted that an epidemic of mental illness began among American teens in the early 2010s. What caused it? Many commentators point to events in the USA around that time, such as a particularly horrific school shooting in 2012. But if the epidemic started in many nations at the same time, then such country-specific theories would not work. We’d need to find a global event or trend, and the 2008 Global Financial Crisis doesn’t match the timing at all, as Jean Twenge and I have shown.

In our 2018 book The Coddling of the American Mind, Greg Lukianoff and I presented evidence that the same trends were happening in Canada and the United Kingdom—not just the rise in depression and anxiety, but also the overprotection of children, the rise of “safetyism,” and the shouting down of speakers on university campuses when students deemed the speaker to be “harmful.” It seemed that all the Anglo nations were setting up their children for failure in the same ways at the same time.

In 2019 I went on a speaking tour of Australia and New Zealand (thanks to Think Inc) and discovered that the same trends were coming, more slowly, to the Southern hemisphere Anglo nations too. I then created two new collaborative review documents, one for Australia, and one for New Zealand, to gather all the empirical studies and journalistic accounts I could find.

In early 2020, just as COVID swept in, I hired Zach Rausch as a research assistant to help me handle these collaborative review docs. I connected with Zach, who was finishing his master’s degree in Psychological Science at SUNY New Paltz, after reading his blog post about his research related to The Coddling. Zach and I created many more collaborative review docs for many more countries and topics, which you can find (and add to) on this page.

In 2021, as I was beginning to work on the Babel project, I told Zach that it was urgent that we figure out just how international the mental illness epidemic was. Is it just happening in the five countries of what is sometimes called The Anglosphere? Is it all Western countries? Is it happening everywhere? Go figure it out and report back to me.

The rest of this post is Part 1 of Zach’s report, in his voice. I think his findings are momentous and should cause an immediate global rethinking of what children need to have a healthy childhood, and what obstacles to development arrived around the globe around 2012.

Permalink
January 14, 2024 at 4:30:18 PM GMT+1

The Teen Mental Illness Epidemic is International, Part 2: The Nordic Nations (https://www.afterbabel.com/p/international-mental-illness-part-two)

  • Social Network
  • Societal Collapse
  • Big Corpo

Today we have Zach’s second post on how teen mental health is changing around the world. In his last post, Zach showed that there is a four-part pattern in the data from all five of the main “Anglosphere” countries. Today he presents the studies he has collected on teen mental health in the five Nordic nations (the three Scandinavian countries of Sweden, Denmark, and Norway, plus Finland and Iceland).

The Nordic nations differ in many ways from the Anglosphere countries. In particular, they have lower levels of some of the social pathologies that have been said to explain the rising levels of mental illness in the USA, such as income inequality, a hyper-competitive neoliberal economy, school shootings, and other gun violence. The Nordic countries also seem to do a lot less of the “coddling” and paranoid overprotection that is rampant in the USA, Canada, and the UK (though not so common down under).

And yet, as you’ll see, the basic pattern largely holds. I think these two posts from Zach are tremendously important for they show us that the teen mental illness epidemic is international. We need to be looking for causes that can explain the international pattern, even as we know there are also additional causes and cultural interactions specific to each country.

Permalink
January 14, 2024 at 4:00:26 PM GMT+1

SOLSTICE - 5 - YouTubehttps://www.youtube.com/watch?v=cntb3wcZdTw

  • Videos

In a race for domination, the Continental Alliance's ambition leads to the uninhibited exploitation of planet Solstice-5. As they construct unstoppable autonomous factories, the Alliance's insatiable quest for power leaves behind a forsaken workforce in a world spiraling out of control.

Permalink
January 9, 2024 at 9:49:23 PM GMT+1

Pluralistic: “If buying isn’t owning, piracy isn’t stealing” (08 Dec 2023) – Pluralistic: Daily links from Cory Doctorowhttps://pluralistic.net/2023/12/08/playstationed/#tyler-james-hill

  • Enshitification
  • Big Corpo
  • Planned Obsolescence

Pluralistic: "If buying isn't owning, piracy isn't stealing" (08 Dec 2023)

20 years ago, I got in a (friendly) public spat with Chris Anderson, who was then the editor in chief of Wired. I'd publicly noted my disappointment with glowing Wired reviews of DRM-encumbered digital devices, prompting Anderson to call me unrealistic for expecting the magazine to condemn gadgets for their DRM:

https://longtail.typepad.com/the_long_tail/2004/12/is_drm_evil.html

I replied in public, telling him that he'd misunderstood. This wasn't an issue of ideological purity – it was about good reviewing practice. Wired was telling readers to buy a product because it had features x, y and z, but at any time in the future, without warning, without recourse, the vendor could switch off any of those features:

https://memex.craphound.com/2004/12/29/cory-responds-to-wired-editor-on-drm/

I proposed that all Wired endorsements for DRM-encumbered products should come with this disclaimer:

WARNING: THIS DEVICE’S FEATURES ARE SUBJECT TO REVOCATION WITHOUT NOTICE, ACCORDING TO TERMS SET OUT IN SECRET NEGOTIATIONS. YOUR INVESTMENT IS CONTINGENT ON THE GOODWILL OF THE WORLD’S MOST PARANOID, TECHNOPHOBIC ENTERTAINMENT EXECS. THIS DEVICE AND DEVICES LIKE IT ARE TYPICALLY USED TO CHARGE YOU FOR THINGS YOU USED TO GET FOR FREE — BE SURE TO FACTOR IN THE PRICE OF BUYING ALL YOUR MEDIA OVER AND OVER AGAIN. AT NO TIME IN HISTORY HAS ANY ENTERTAINMENT COMPANY GOTTEN A SWEET DEAL LIKE THIS FROM THE ELECTRONICS PEOPLE, BUT THIS TIME THEY’RE GETTING A TOTAL WALK. HERE, PUT THIS IN YOUR MOUTH, IT’LL MUFFLE YOUR WHIMPERS.

Wired didn't take me up on this suggestion.

But I was right. The ability to change features, prices, and availability of things you've already paid for is a powerful temptation to corporations. Inkjet printers were always a sleazy business, but once these printers got directly connected to the internet, companies like HP started pushing out "security updates" that modified your printer to make it reject the third-party ink you'd paid for:

See also : Ink-Stained Wretches: The Battle for the Soul of Digital Freedom Taking Place Inside Your Printer

Now, this scam wouldn't work if you could just put things back the way they were before the "update," which is where the DRM comes in. A thicket of IP laws makes reverse-engineering DRM-encumbered products into a felony. Combine always-on network access with indiscriminate criminalization of user modification, and the enshittification will follow, as surely as night follows day.

This is the root of all the right to repair shenanigans. Sure, companies withhold access to diagnostic codes and parts, but codes can be extracted and parts can be cloned. The real teeth in blocking repair comes from the law, not the tech. The company that makes McDonald's wildly unreliable McFlurry machines makes a fortune charging franchisees to fix these eternally broken appliances. When a third party threatened this racket by reverse-engineering the DRM that blocked independent repair, they got buried in legal threats:

See also : https://pluralistic.net/2021/04/20/euthanize-rentier-enablers/#cold-war

Everybody loves this racket. In Poland, a team of security researchers at the OhMyHack conference just presented their teardown of the anti-repair features in NEWAG Impuls locomotives. NEWAG boobytrapped their trains to try and detect if they've been independently serviced, and to respond to any unauthorized repairs by bricking themselves:

See also : https://mamot.fr/@q3k@hackerspace.pl/111528162905209453

Poland is part of the EU, meaning that they are required to uphold the provisions of the 2001 EU Copyright Directive, including Article 6, which bans this kind of reverse-engineering. The researchers are planning to present their work again at the Chaos Communications Congress in Hamburg this month – Germany is also a party to the EUCD. The threat to researchers from presenting this work is real – but so is the threat to conferences that host them:

See also : https://www.cnet.com/tech/services-and-software/researchers-face-legal-threats-over-sdmi-hack/

20 years ago, Chris Anderson told me that it was unrealistic to expect tech companies to refuse demands for DRM from the entertainment companies whose media they hoped to play. My argument – then and now – was that any tech company that sells you a gadget that can have its features revoked is defrauding you. You're paying for x, y and z – and if they are contractually required to remove x and y on demand, they are selling you something that you can't rely on, without making that clear to you.

But it's worse than that. When a tech company designs a device for remote, irreversible, nonconsensual downgrades, they invite both external and internal parties to demand those downgrades. Like Pavel Chekov says, a phaser on the bridge in Act I is going to go off by Act III. Selling a product that can be remotely, irreversibly, nonconsensually downgraded inevitably results in the worst person at the product-planning meeting proposing to do so. The fact that there are no penalties for doing so makes it impossible for the better people in that meeting to win the ensuing argument, leading to the moral injury of seeing a product you care about reduced to a pile of shit:

See also : https://pluralistic.net/2023/11/25/moral-injury/#enshittification

But even if everyone at that table is a swell egg who wouldn't dream of enshittifying the product, the existence of a remote, irreversible, nonconsensual downgrade feature makes the product vulnerable to external actors who will demand that it be used. Back in 2022, Adobe informed its customers that it had lost its deal to include Pantone colors in Photoshop, Illustrator and other "software as a service" packages. As a result, users would now have to start paying a monthly fee to see their own, completed images. Fail to pay the fee and all the Pantone-coded pixels in your artwork would just show up as black:

See also : https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process

Adobe blamed this on Pantone, and there was lots of speculation about what had happened. Had Pantone jacked up its price to Adobe, so Adobe passed the price on to its users in the hopes of embarrassing Pantone? Who knows? Who can know? That's the point: you invested in Photoshop, you spent money and time creating images with it, but you have no way to know whether or how you'll be able to access those images in the future. Those terms can change at any time, and if you don't like it, you can go fuck yourself.

These companies are all run by CEOs who got their MBAs at Darth Vader University, where the first lesson is "I have altered the deal, pray I don't alter it further." Adobe chose to design its software so it would be vulnerable to this kind of demand, and then its customers paid for that choice. Sure, Pantone are dicks, but this is Adobe's fault. They stuck a KICK ME sign to your back, and Pantone obliged.

This keeps happening and it's gonna keep happening. Last week, Playstation owners who'd bought (or "bought") Warner TV shows got messages telling them that Warner had walked away from its deal to sell videos through the Playstation store, and so all the videos they'd paid for were going to be deleted forever. They wouldn't even get refunds (to be clear, refunds would also be bullshit – when I was a bookseller, I didn't get to break into your house and steal the books I'd sold you, not even if I left some cash on your kitchen table).

Sure, Warner is an unbelievably shitty company run by the single most guillotineable executive in all of Southern California, the loathsome David Zaslav, who oversaw the merger of Warner with Discovery. Zaslav is the creep who figured out that he could make more money cancelling completed movies and TV shows and taking a tax writeoff than he stood to make by releasing them:

See also : https://aftermath.site/there-is-no-piracy-without-ownership

Imagine putting years of your life into making a program – showing up on set at 5AM and leaving your kids to get their own breakfast, performing stunts that could maim or kill you, working 16-hour days during the acute phase of the covid pandemic and driving home in the night, only to have this absolute turd of a man delete the program before anyone could see it, forever, to get a minor tax advantage. Talk about moral injury!

But without Sony's complicity in designing a remote, irreversible, nonconsensual downgrade feature into the Playstation, Zaslav's war on art and creative workers would be limited to material that hadn't been released yet. Thanks to Sony's awful choices, David Zaslav can break into your house, steal your movies – and he doesn't even have to leave a twenty on your kitchen table.

The point here – the point I made 20 years ago to Chris Anderson – is that this is the foreseeable, inevitable result of designing devices for remote, irreversible, nonconsensual downgrades. Anyone who was paying attention should have figured that out in the GW Bush administration. Anyone who does this today? Absolute flaming garbage.

Sure, Zaslav deserves to be staked out over an anthill and slathered in high-fructose corn syrup. But save the next anthill for the Sony exec who shipped a product that would let Zaslav come into your home and rob you. That piece of shit knew what they were doing and they did it anyway. Fuck them. Sideways. With a brick.

Meanwhile, the studios keep making the case for stealing movies rather than paying for them. As Tyler James Hill wrote: "If buying isn't owning, piracy isn't stealing":

See also : https://bsky.app/profile/tylerjameshill.bsky.social/post/3kflw2lvam42n

Permalink
December 10, 2023 at 4:06:07 PM GMT+1

Médias : les premières expériences 100 % IA | Les Echoshttps://www.lesechos.fr/tech-medias/medias/ces-sites-dinformations-deja-ecrits-par-des-ia-2038642

  • Artificial intelligence
  • Enshitification
  • Press

Media: the first 100% AI experiments

Some sites are experimenting with conversational bots writing all of their content, and say so openly. Others hide it, causing concern.

By Marina Alcaraz

Published Dec. 1, 2023 at 1:43 PM, updated Dec. 6, 2023 at 5:59 PM

In the site's "about" pages you'll find their backgrounds, their interests, their photos... They each have a style of their own and cover specific beats. Carine Gravée, Vianney Garet and Nina Gavetière have names, but they are not real: they are robot journalists invented from scratch.

Sites produced entirely by artificial intelligence (AI) are starting to appear. For now, admittedly, these initiatives remain limited. But some are trying to carve out a niche and experiment with a new way of creating content, claiming it openly... or not.

A magazine on newsstands

One example is "LHC - Les Heures Claires", a print-and-web magazine launched a few weeks ago that presents itself as the first French magazine 99% generated by AI. Like a conventional publication, it features news, interviews (with questions posed by a robot), fashion pages and a horoscope.

At the helm is Rémy Rostan, a former photographer. "I'm still quite surprised by what ChatGPT comes up with," he admits. The magazine, sponsored by Easy Partner, a recruitment firm specializing in tech jobs, is due to launch on newsstands in the spring as a monthly. "I'm targeting the community of technophiles and the curious," explains Rémy Rostan, who hopes to reach 20,000 copies sold.

Other examples: Tech Generation and Cuisine Generation, launched in the spring by Ari Kouts, a consultant specializing in innovation at Viseo. He connected the tech site to various publications specializing in that sector, such as TechCrunch. Each fictional "journalist" (each with a distinctive style) picks up press articles on current topics (the crisis at OpenAI, statements by Musk...) and rewrites them, crediting the source, without any human intervention. The result: a few inconsistencies and clumsy passages, but articles that resemble some blog posts.

On the cooking side, the "chefs" dream up recipes, "and many are plausible and even good, though the cooking times are approximate," says Ari Kouts. The consultant launched these "media" more as an experiment, with no intention of monetizing them. "It also serves as a reminder of the value of analysis and investigative journalism, which robots cannot do," he says.

Both sites draw a small audience (around 3,000 visits per month) and occasionally even surface in Google News! Even if the likelihood is low in this case, since the project is an experiment run almost as a game, "the primary sources could block this kind of practice by invoking parasitism, that is, appropriating the value of an article," notes Julien Guinot-Deléry, a lawyer at Gide.

Professionals' fears

But there are also sites whose production methods are kept quiet. "In a working group of the Commission paritaire des publications et agences de presse, the fear that there are sites with a strong AI component has been raised," says one industry professional. "We all have that risk in mind," adds Pierre Pétillault, director of the Alliance de la presse d'information générale.

In a recent study, NewsGuard identified a host of sites carrying articles rewritten with AI (nearly 600 as of late November!) without human supervision. In many cases, they run programmatic advertising. In the United States, "Sports Illustrated" and "TheStreet" (Arena Group) were singled out by an investigation from the outlet Futurism: articles were allegedly written by AI, and fake journalist profiles were created (using images bought from a site selling AI-generated photos), which angered journalists. The media group defended itself, saying it had bought some of the articles from an agency.

Permalink
December 10, 2023 at 3:51:38 PM GMT+1

Arthur Keller : « Notre civilisation convertit la nature en déchets »https://lareleveetlapeste.fr/arthur-keller-notre-civilisation-est-une-machine-qui-convertit-la-nature-en-dechets/

  • Mass Consumption
  • Economy and markets
  • Societal Collapse
  • Ecology

Arthur Keller: "Our civilization is a machine that converts nature into waste"

"We are persuaded that happiness would flow from a sum of small pleasures, and that pleasure comes from consumption and accumulation, whereas there is no true happiness without the pride of fighting for the values one knows to be good and without the wisdom of self-limitation."

Text: Laurie Debove

Arthur Keller is a specialist in the systemic risks weighing on human societies, in those societies' vulnerabilities to such risks, and in strategies for collective transformation toward resilience. As humanity overshoots more and more planetary boundaries, we interviewed him to ask how to transform our societies so that they become fairer and more sustainable.

For Arthur Keller, our planet is suffering from a generalized cancer

Arthur Keller: I often use an analogy I developed to illustrate the fundamental methodological error we make in our general approach to the issues of planetary ecological overshoot: the metaphor of generalized cancer. It is a systemic disorder of the body that shows up in multiple symptoms: headaches, skin and digestive problems, for example. For each of these symptoms there are remedies, be it ibuprofen, an ointment, an antispasmodic. Yet the sum of these "solutions" does not cure the disease.

For each crisis, experts recommend solutions, and we imagine that the sum of those solutions could save us. Alas, compartmentalization into fields of expertise masks the reality of the disease: our civilization is a machine that converts nature into waste.

That waste is solid, liquid or gaseous; among the gases, some disrupt the climate. Climate change, like the other crises, is only a symptom of the disease. And our stubborn insistence on tackling the consequences without questioning the causes explains, in my view, the failure we have seen so far.

LR&LP: According to a study published May 31 in the journal Nature, seven of the eight boundaries ensuring the stability and health of the planetary system have already been crossed. What are they?

Arthur Keller: This study is interesting because it deals with the complex dynamic system made up of the Earth system (lithosphere, cryosphere, atmosphere, biosphere and pedosphere) and the anthroposphere (the sphere encompassing humanity, its societies and its activities). The planetary-boundaries framework covered only the Earth system; here the human and social sciences are brought in, as in Kate Raworth's doughnut-economics concept.

In 2009, an international team of scientists identified 9 thresholds not to be crossed in order to keep planet Earth habitable for humanity. The stability threshold of the climate machine has been crossed, so we have entered a transitional phase between one stable climate state and another, which will probably not arrive for several thousand or tens of thousands of years, notably because of the cryosphere's own kinetics. Until further notice, we will therefore have to live with a highly unstable climate.

Moreover, a new mass extinction is under way, driven by human activities, with agriculture and forestry in the lead. Pollution is being released into natural environments so fast that it accumulates and makes natural habitats toxic. The biogeochemical cycles, the great cycles of the living world, are disrupted, notably the phosphorus and nitrogen cycles, as well as the water cycle.

Another boundary is close: ocean acidification. As for the boundary on atmospheric aerosols, we are still below the fateful threshold. The last boundary currently identified concerns the hole in the ozone layer: it is the only area in which the situation is moving in the right direction. All told, humanity has already crossed 6 of the 9 boundaries that scientists say must not be crossed in order to guarantee the Earth's habitability.

The study published in late May does not use quite the same 9 boundaries but a subset. Those 8 boundaries, defined as "safe and just", integrate both scientific data and social metrics. And what the publication says is that 7 of those 8 boundaries have already been crossed.

Even without alarmism, the situation is alarming.

LR&LP: Some call for "changing civilization". Is that possible, and is it the solution?

Arthur Keller: It is indispensable! Alas, that does not make it possible. Unfortunately, I do not believe for a second in our collective ability to organize the necessary metamorphosis. The system sustains itself, and I am convinced it will do so until it destroys itself, dragging down a large part of the biosphere with it.

We cannot durably extract more resources than the Earth system can produce, just as we cannot keep destroying faster than its capacity for self-repair. That is why, in the coming years and decades, we are going to live through a great energy and material descent, complete with scarcities and conflict-breeding shortages.

That descent will necessarily induce an economic contraction, because economic growth is strongly correlated with flows of energy and raw materials. It is inescapable. And failing to plan and steer that contraction through degrowth policies will condemn us to endure it in chaos, as a disastrous and potentially permanent socio-economic unraveling. Well before 2050.

There is no way to achieve absolute decoupling between Gross Domestic Product and overall anthropogenic pressure; those who claim otherwise have not grasped the systemic nature of the issues, or are lobbyists serving an ideology or financial interests. In either case, their claims are out of step with the data and with scientific knowledge.

We would therefore indeed need to profoundly change the rules of the global economy and our models of society, but the system rests on established orders so solidly anchored that it is not possible, I fear, to truly change it.

We can limit its harmfulness through political and citizen mobilization, through revolt and resistance (under the radar or head-on): ideally, the various modes of struggle and the different radicalisms would come to see one another as mutual allies and finally work in synergy.

In parallel, we must lay the foundations of an alternative system, even if we lack all the means we would need: through experiments and territorial demonstrators, it is time to prove and to make known, via inspiring narratives, that there are other ways to organize local economies, to make decisions collectively, to guarantee people a basic livelihood, to develop regenerative activities, to work on decoupling quality of life from energy and material quantity, and to redesign spaces for healthy coexistence.

It is possible to restore meaning, to nurture a culture of risk, responsibility and solidarity, to create cohesion, to instill the conviction of belonging to a shared community of destiny.

The extremely powerful big system is going to fall flat on its face; techno-solutionism is reaching its limits, and the "solutions" on which we are betting the future, along with the grand projects the privileged persist in nurturing (space conquest, AI, imperialism, etc.), will also run up against the energy and material descent. We must anticipate societal tipping points and prepare for them collectively, in intra- and inter-territorial, intra- and inter-community cooperation, without falling into the trap of insularity.

A change of civilization through the territories

LR&LP: How do you prepare a territory?

Arthur Keller: It draws on principles, levers and tools that I teach to elected officials and local-government staff, or to citizens, in training sessions and workshops. Even if, to my knowledge, no fully resilient community exists today, there are nonetheless genuinely interesting initiatives, proactive efforts moving in the right direction.

Whether it is collectives taking part in the Transition Towns movement, some of the Colibris "oasis" communities, territorial dynamics like the TERA project, local currencies and local exchange systems, or communal civil-security reserves, interesting and inspiring things are happening; we are not starting from scratch, and that is reassuring!

Some of these territorial projects tackle a key point: how to produce the essentials sustainably, taking into account ecological changes (climate, water cycle, biodiversity, etc.) and possible critical supply disruptions, relying only on what is available within a manageable geographic radius.

In food production, we are incredibly lucky to have approaches that tick every box: agroecology, agroforestry and permaculture combined, with living-soil practices, have the potential to feed the entire world in a healthy, varied, nutritious and tasty way while regenerating ecosystems.

Local currencies and short local supply chains are also being created, connecting the actors and inhabitants of the territories. Social experiments are under way too, for instance on a minimum subsistence income or a universal minimum income, on an unconditional autonomy allowance, or on food social security, as in Montpellier and Strasbourg. Along with many initiatives in deliberative and participatory democracy...

People can and must reclaim the management of the Commons. Local authorities have the ability to guarantee everyone access to vital necessities: in Montpellier, the first cubic meters of water are now free. It would be worth extending this principle to everything needed to live with dignity.

The question of narratives is in vogue, and for good reason: there will be no mass change in behavior without the emergence of new imaginaries. Even more than works of fiction, it is crucial to turn successful concrete transformations into vivid, inspiring stories in order to feed a cultural dynamic.

LR&LP: Are mentalities ready to experiment with and implement all this?

Arthur Keller: The promise of more comfort and less work, long made by the promoters of technical innovation, has not been kept. Today, that same system tells us we must work ever harder just to stay in place. "Progress" has turned into a marathon that conscripts people by force, assails them with cognitive dissonance, and generally ends in a bitter feeling of unfulfillment.

Those who propose the 4-day week are called mad, as were, before them, the defenders of the 12-hour day, then the 10-hour day, then the 8-hour day, then the advocates of Sunday rest, then of paid leave (two, three, four, then five weeks!), then of the 35-hour week.

Every social advance runs up against the champions of frenzied productivism.

Robots, connected objects and AI are invading our lives but bring no liberation, since nothing is being done so that people can live well without a job; on the contrary, idiotic postures proliferate that equate non-employment with laziness, and the "lazy" with second-class citizens undeserving of the welfare state's solidarity.

The inhabitants of rich countries are saturated with consumer choices presented to them as the height of freedom, when in fact this escalation alienates by creating addiction (that is, dependence, the exact opposite of freedom), growing insatiability and frustration: the more you have, the more you want... all the way to a total loss of meaning.

This mad race does not just drive people mad; it also makes them unhappy. Beyond a certain standard of living, there is a turning point in felt happiness.

We are persuaded that happiness would flow from a sum of small pleasures and that pleasure comes from consumption and accumulation, whereas there is no true happiness without the pride of fighting for the values one knows to be good and without the wisdom of self-limitation.

It also seems essential to me to relearn the notion of freedom: a sum of limitless individualisms never leads to a free society but to a dystopia in which a few rich people can do anything and the majority almost nothing.

Each person's freedom flows from a democratically negotiated geometry of rights and duties: when energy, material, biophysical and ecological limits impose themselves on human societies, failing to translate them rationally into freedoms and prohibitions, and failing to embody them in social and cultural codes, can only lead to the disintegration of societies.

Those who refuse new limitations on principle, without looking into the upheavals under way from which the imperative of those limitations flows, are not acting in the name of freedom, as they claim, but in the selfish hope of preserving their privileges. "The world is changing fast; if we want to preserve what matters, we too must change fast," Arthur Keller concludes.

Permalink
December 3, 2023 at 5:41:07 PM GMT+1

What is Corporate Memphis and Why is it Everywhere? – t-art magazinehttps://t-artmagazine.com/what-is-corporate-memphis-and-why-is-it-everywhere/

  • Advertising
  • Ads Madness
  • Attention Economy
  • Big Corpo

What is Corporate Memphis and Why is it Everywhere?

Sriya Choppara·April 5, 2021

If you’ve ever stepped foot in an Apple store, you’ve probably noticed the large, softly glowing graphic panels spanning every wall. You may recall Yukai Du’s vibrant and flat illustration from your last visit. Its candy-like color palette and the sharp silhouettes drawn with unusually rosy skin might have caught your eye. The art style used by Du, commonly seen while scrolling through Facebook or viewing the Google Doodle of the day, is not unfamiliar to you. This illustration style has been dubbed the corporate art style, or more pejoratively, ‘Corporate Memphis’.

Characterized by solid coloring, geometric shapes and cartoonish figures with disproportionately lanky limbs, Corporate Memphis has become increasingly ubiquitous since it was first popularized in 2017. While it debuted as a space-filler in social media, websites and SaaS (software as a service) products, the style can now be found as images accompanying website articles and on magazine covers. The style is well received by the majority of consumers and has become the go-to choice of illustration style in marketing and user interfaces. However, its omnipresence has spurred criticism for its apparent shift away from more intricate, detailed illustrations to what critics deem to be generic and uninspired.

The exact origins of Corporate Memphis are unclear. Some credit its creation to Alice Lee, an independent illustrator who partnered with Slack to design 40 illustrations for their website and platform. She has remarked that she was influenced by Mary Blair's Disney concept art while creating these warm and whimsical figures. Others credit BUCK, the design firm that created Alegria, the style guide, illustration, and animation system for Facebook's ecosystem. Corporate Memphis is also said to be the illustrative analogue of Memphis, the post-modernist design movement of the 1980s. Corporate Memphis' geometric forms, along with its highly contrasted and bold color style, are similar to those of Memphis furniture designs. Additionally, the way figures are warped in Corporate Memphis is reminiscent of how Memphis designers depicted everyday objects in unusual ways.

Considering past trends in graphic design, the art style’s current ubiquity was inevitable. Take your phone’s interface, for instance. With every update, app icons have moved another step toward a minimalist, simplified, flat look; the days of skeuomorphism are long gone. Big Tech companies have increased their demand for Corporate Memphis illustrations accordingly. The same progression toward a minimalist finish is evident in how several corporations have revamped and decluttered their previously busy, detailed logos.

There are several reasons why simplified design has such mass appeal, and why the Corporate Memphis style in particular has flourished. For one, its illustrations of people are often inclusive to the point that some resemble anthropomorphic aliens. But no matter their appearance, people can see themselves in illustrated figures with skin as purple as a plum. The cheery color palettes and characters reminiscent of childhood cartoons only add to the charm. The simple vector art has just the right amount of detail to please the eye while one swiftly skims through social media or lengthy policy updates.

Corporate Memphis is flamboyant. It’s spirited. It’s joyful. It’s certainly prepossessing to the general public. But it’s exactly these qualities that warrant greater scrutiny. Illustrations in the style, with their aggressively friendly expressions, portray a world that is uncannily utopian. The deliberate oversimplification of Corporate Memphis can create a false sense of security in situations that call for the opposite. Though it may not be obvious, the bubbly style can make consumers subconsciously lower their guard and lure them in: Corporate Memphis is often strategically placed alongside predatory subscription terms or privacy agreements. What is a smart marketing ploy for firms may have negative repercussions for users.

Other benefits that corporations receive from using Corporate Memphis are far less sinister. It is convenient for graphic designers to work in the style using vector art which makes illustrations easily scalable, animated, and replicated. Graphic designers are able to quickly alter designs according to the whims of their clients. The generic appearance also makes it simple for startups on a budget to use premade assets to accompany a wide variety of texts. Its homogeneous appearance lends to a sleek and professional look at a small price point. Not to mention, graphic designers don’t need a particularly high level of skill to create rudimentary assets in the style.

That isn’t to say that it’s impossible for illustrations in the Corporate Memphis style to be unique. While Corporate Memphis has very distinctive features that make art of its genre appear similar, it is wrong to say that it lacks creativity. Take Slack versus Facebook’s illustrations, for instance. The fluidity of lines, color palettes, proportions, textures, digital brush choice and methods of shading make a world of difference.

Editorial illustrations with elements of the style are evidence of how artists have put their own spin on it. Illustrator Jing Wei’s characters have consistently thick limbs, a rich color palette, and distinctive eye shape. Her experience in printmaking is clearly an influence on her commercial art. Illustrator Alex Eben Meyer uses colors so bright they seem to pop out of the screen and overlays shapes to mimic paper collages. While it is more likely that these illustrators drew with the intention of echoing Art Deco or Minimalism, the solid coloring and jointless limbs parallel Corporate Memphis’ key attributes.

So what is the future of Corporate Memphis? Despite criticism that it is generic, or even lazy, the decision of several companies to invest in it for their visual branding suggests that most of its audience appreciates it. While its appeal may come with questionable implications, it is also hard to argue against its radical inclusivity and the ease with which many can project themselves onto its cartoon-like figures. It should be treated as a warning sign when it accompanies consequential texts like terms and conditions, but I appreciate the style and love to see how different artists put their own spin on it. Given its pervasiveness, I am inclined to believe many are fond of it too. Whether its influence appears in editorial accompaniments or in fillers bedecking empty space, Corporate Memphis’ sweeping popularity will not disappear anytime soon.

Permalink
December 3, 2023 at 5:18:39 PM GMT+1

Solving bias is not so simple! | Hubert Guillaudhttps://hubertguillaud.wordpress.com/2023/11/28/resoudre-les-biais-nest-pas-si-simple/

  • Algorithms
  • Ads Madness

Solving bias is not so simple!

For AlgorithmWatch, journalist John Albert (@jaalbrt) investigated how Meta has just proposed to fix one of its recurring algorithmic biases: targeted advertising disadvantages certain groups by not showing them certain products (housing listings withheld from some audiences of color depending on their location, trucking job ads never shown to women, and so on). This bias has been documented for a long time and has led to legal action by US authorities, starting in 2019 with the US housing department, which pushed Facebook to switch off certain targeting categories (age, gender, sex, or ethnic and religious characteristics) for certain types of ads, notably those for housing, employment, and credit. In June 2022, Facebook settled the dispute, agreeing to pay a fine ($115,000) and committing to develop a system to de-bias housing ads.

Removing targeting categories is not enough: Facebook's ad algorithm is able to learn that women click less on ads for trucking jobs, and therefore stops showing them those ads. It thereby reproduces and amplifies existing biases, disadvantaging groups that are already marginalized. Yet this automated discrimination is illegal, since the law prohibits discrimination on protected criteria such as race, age, or sex. The difficulty, of course, is that this discrimination is particularly invisible: the people who might contest the inequity of ad delivery never had access to it in the first place, since they never even saw the ads they should have seen!

To correct biases, you have to be able to measure them!

Following that settlement, Meta built and deployed a new algorithm, a "variance reduction system" (VRS), to try to limit the biases of the machine-learning algorithms in its ad targeting and delivery system. "Instead of optimizing for clicks, the VRS is designed to optimize for equal 'precision' of ad delivery across the eligible target audiences. Once an ad has been seen by enough users, the VRS measures the aggregate age, gender, and 'estimated race or ethnicity distribution' of those who have already seen the ad, compares them with the broader eligible audience that could potentially have seen it, and then adjusts ad delivery accordingly." In short, Meta corrects its biases by looking at its delivery biases. Great, you might say! Except it is not so simple, because delivery biases depend precisely on delivery and on the means of measuring and correcting it, and that varies a great deal from one ad category to another and from one territory to another.
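The mechanism described above is essentially a feedback loop: measure the demographic mix of users who actually saw an ad, compare it with the eligible audience, and reweight delivery toward under-served groups. A minimal sketch of that loop (the function name, data shapes, and the ratio-based reweighting rule are all assumptions for illustration; Meta's actual VRS implementation is not public):

```python
from collections import Counter

def delivery_adjustment(viewers, eligible, groups):
    """Compare the demographic mix of users who saw an ad with the
    eligible audience, and compute a boost factor per group.
    A factor above 1.0 means the group is under-served relative to
    the eligible audience and delivery should be increased."""
    seen = Counter(u["group"] for u in viewers)
    pool = Counter(u["group"] for u in eligible)
    boosts = {}
    for g in groups:
        target = pool[g] / len(eligible)                    # share in eligible audience
        actual = seen[g] / len(viewers) if viewers else 0.0  # share among actual viewers
        boosts[g] = target / actual if actual > 0 else float("inf")
    return boosts

# Toy example: an even eligible audience, but delivery skewed toward men.
eligible = [{"group": "women"}] * 50 + [{"group": "men"}] * 50
viewers = [{"group": "women"}] * 10 + [{"group": "men"}] * 40
print(delivery_adjustment(viewers, eligible, ["women", "men"]))
# women are under-served and get a boost factor of 2.5 (0.5 / 0.2); men get 0.625
```

The sketch also makes the article's point concrete: computing `target` at all requires demographic data about the eligible audience, which is exactly what is unavailable in places without census-style ethnicity data.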

A compliance report produced by the consulting firm Guidehouse asserts that Meta's VRS works as intended. But the report's gaps make those claims hard to verify, John Albert argues for AlgorithmWatch. In fact, discriminatory ads continue to run rampant on Facebook in areas other than housing, as well as in countries other than the United States, such as European countries, where the VRS is unlikely to be implemented, notably because the correction system is not so simple to deploy, as we will see.

For Daniel Kahn Gillmor, privacy advocate and technologist at the ACLU, this highly technical report shows that Guidehouse did not have access to Meta's data and did its work only from results supplied by Meta. For researcher Muhammad Ali, one of the authors of the 2019 study that exposed the problem of discriminatory ad targeting on Facebook, there is a real effort behind the VRS, but the correction is particularly complex and particularly limited, since it applies only to housing ads. While Meta has announced that it wants to extend it to employment and credit ads, it is unlikely to reach other categories or go beyond the United States, because to apply its corrections to housing ads the VRS must draw on specific data, in this case US census data, used to estimate the ethnicity of ad audiences by location in order to correct them! For every audience correction, corrective data must be available! A system of this kind could not work in India, for example, where there is no geolocated data on castes, or in France, where ethnic data is not collected. Correcting biases thus requires feeding the systems with population-level data (for instance, data on ethnic origin) that is not available everywhere. Not to mention that these corrections, which aim to reduce performance gaps between demographic groups, risk above all producing a "strict egalitarianism by default" rather than a form of equity.

Finally, there are countless other biases in the audience categories these systems produce on the fly: for example, when a person is identified as interested in gambling and online betting and is then over-solicited on those topics, at the risk of reinforcing their addictions rather than protecting them from their vulnerabilities (or from other vulnerabilities, as The Markup showed when it analyzed the categories of the Xandr ad platform; I discussed it here). And the amplification of biases risks being even stronger and harder to correct when the target audiences are especially specific. Not to mention that producing categorizations, and therefore discrimination, is ultimately the very purpose of ad targeting: when you seek to show an ad to targets with a given income level, it is precisely to the exclusion of everyone else (insofar as that targeting works at all, which is far rarer than claimed, as I noted when examining the troubles of profiling).

Under the European Digital Services Act, platforms are no longer allowed to target ads using "sensitive" data categories such as race, gender, religion, or sexual orientation, and must mitigate the systemic risks arising from their services. Still, Meta's extremely piecemeal approach to bias mitigation, as embodied by the VRS, will be hard to apply everywhere, for lack of the data needed to correct biases. De-biasing raises the questions of which corrections to apply, how to make them visible, and how far to correct.

Rather than reports, protect researchers!

This analysis of how Meta struggles to de-bias has virtues beyond showing the intrinsic limits of de-biasing. It also shows that the regulatory response is not operating at the right level. While the European DSA requires platforms to submit to independent audits (like the one Guidehouse has just produced for Meta, checking how platforms comply with the obligation to identify and mitigate risks and discrimination), there is a strong risk that this oversight will produce a great deal of audit-washing, with platforms defining the standards of their own audits (for now, each does it differently), choosing their auditors, and producing only the data they are willing to share. For the moment, we face poorly defined audit measures that risk, above all, being poorly executed, explain specialists Ellen Goodman and Julia Trehu in a report on the subject for the German Marshall Fund, recalling the rules of auditing and pointing to the exemplary audit framework defined by the Netherlands Court of Audit when it inspected several algorithms used by public services. The specialists insist on the need for a framework, standards, and certification criteria established by a regulator to guarantee audit quality.

As John Albert very pertinently explains, compliance reports are no substitute for external scrutiny. In the long run, we mainly risk being flooded with compliance reports, each more complex than the last, like [the platform transparency reports that the European Commission has just collected](https://digital-strategy.ec.europa.eu/en/news/very-large-online-platforms-and-search-engines-publish-first-transparency-reports-under-dsa#:~:text=The transparency reports must include,of orders they receive from), which teach us very little and do not look like a lever for change (we learned only the number of official moderators at the platforms, as Le Monde explains, and the categories of moderation... but, as TechPolicyPress points out, no report addresses other questions written into the European DSA, such as how platforms ban advertising to children, how they intend to make it easier to opt out of recommender systems, or what data-sharing arrangements with researchers or authorities they plan to put in place. Nor has any of them set up an audit of systemic risks). As journalist Gaby Miller explains in another TechPolicyPress article, the European Commission should consider spelling out guidelines for the information it expects from these reports, so that they become more comparable with one another.

The very limited information provided by the platforms will not be enough. "The supposed neutrality of platforms depends on our ability to monitor them," as we have said. We need adversarial, genuinely independent audits capable of testing platforms' algorithmic systems, and of measuring and reacting to problems. For that, these large companies must let researchers do research: collect data and recruit users to produce independent measurements. That is not what we are seeing now, with platforms closing themselves off to research far more than opening up to it! For that, authorities would have to offer researchers protections, John Albert insists: "Until we have put legal protections in place for public-interest researchers and secured pathways to platform data, we will be stuck dealing with platforms' self-assessments and audit reports. A flood of these reports is on its way. But these measures are no substitute for truly independent research, which is essential to holding platforms accountable for the risks their services pose to society."

Hubert Guillaud

Permalink
December 1, 2023 at 8:40:55 PM GMT+1

"Soon, extra work won't exist anymore": actors "scanned" on film shoots fear for their futurehttps://www.francetvinfo.fr/culture/cinema/bientot-la-figuration-ca-n-existera-plus-des-comediens-scannes-sur-des-tournages-de-film-craignent-pour-leur-futur_6132291.html

  • Artificial intelligence
  • Deepfake
  • Art

"Soon, extra work won't exist anymore": actors "scanned" on film shoots fear for their future

Article by Marion Bothorel. Published 25/10/2023 at 05:55

A green screen, a myriad of snapshots... 3D duplication is spreading across film sets. Extras worry about being "used without knowing it" and about contributing to the twilight of their profession, already threatened by the rise of artificial intelligence.

It is nearly 2 a.m. on this late-August night. Dressed as a 19th-century bourgeois, Lucien is taking part in the shoot of The Count of Monte Cristo, Pathé's upcoming blockbuster adapted from Alexandre Dumas's novel. After more than four hours of filming, the group of extras he belongs to gets a break. The drowsy actor is approached by a photographer. Used to costume photo sessions, Lucien plays along. This time, he is asked to keep a neutral expression, in front of a green screen.

"Stickers had been put up, and he pointed them out to me, saying: 'Look at that point there.' He also asked me to raise my arms."

Lucien, extra

These poses were unusual enough to prompt Lucien to question the photographer: "He told me it was to make digital doubles, for the visual effects. I asked whether it was really for this film. He assured me it was." But Lucien fears being "used without knowing it" and that his "3D copy" will end up in other films. According to him, a dozen other extras went through the exercise without being informed of "the true use of these images."

"Follow us to the scan!"

Astrid recounts living through the same scene on the set of a biopic of General de Gaulle, also produced by Pathé. After a fourteen-hour workday in the rain, the sets were being dismantled when the extras were told they still had "things to do." They were pointed to "a small white tent with a camera, behind which a green screen was stretched," the actress recounts. According to her, the people in charge on site "were very careful to make sure everyone went through."

The actress consented but was surprised to be photographed standing, arms spread horizontally. "On the floor there was a cross, and we had to pivot around it 360 degrees, face fixed, feet apart," observes this former graphic designer in career retraining.

"When we asked what it was for, the extras coordinators told us it was to create a bigger crowd. But you had to go up to them and ask."

Astrid, actress

The actress then demanded that the images be deleted. "I thought to myself: 'Now that they've created me in 3D, they'll be able to put me absolutely anywhere,'" she explains. Nearly two months after the shoot, she has still received no guarantee from the production. Pathé confirms that scans were indeed carried out during the shoots of De Gaulle and The Count of Monte Cristo in order "to do crowd multiplication," without specifying how many extras were digitized.

On another production, Olivier was also "scanned" without being informed beforehand. For the needs of a series distributed by an American platform, he was called in, in September 2022, for a "period costume fitting." He was to be dressed, made up, and styled under the conditions required for the shoot. "They told me: 'Follow us to the scan.' Four or five extras were already waiting in this darkened room. Two American technicians then placed us one by one on a cross, and 250 cameras flashed us simultaneously, arms down, then raised, for 30 seconds, before we were dismissed," he remembers. At the time, Olivier said nothing, but with a year's hindsight, he considers the lack of transparency "problematic."

"There is no communication at all"

The lack of "transparency" is also what strikes Nathalie de Médrano, a member of the Association des chargés de figuration et de distribution artistique (ACFDA). This professional, whose job is to recruit extras, says she was contacted as early as "June" by "extras who had been scanned." In four months, the ACFDA has collected a dozen testimonies similar to those of Lucien, Astrid, and Olivier. "What strikes me most in this story is that there is no communication at all from the productions. They present it as something settled, normal, and natural," she continues.

The production of The Count of Monte Cristo used precisely that argument to answer Lucien, who had asked for the images to be deleted. "Taking photographs in front of a green screen, as was done with you, is a very common VFX [visual effects] process in the preparation and shooting of films," one of the producers explained in an email that franceinfo was able to consult. "These photographs are taken for the sole purpose of creating visual effects to augment the crowds in the background of the film's scenes (...), in which no face is used or recognizable on screen."

"This process has been used for years."

A producer

"There are many films where it's done," confirms Antoine Moulineau, a visual effects supervisor whose company, Light, is working notably on Ridley Scott's upcoming Napoleon. He himself has used this technique "since at least 1999." By capturing the silhouettes of 300 extras, Antoine Moulineau's company "can make 50,000 of them," he says. This visual effects specialist confirms, on the other hand, that these digital doubles could indeed end up in other films, as the extras interviewed by franceinfo fear. In that case, the actors would have a very hard time recognizing themselves on screen, he says, because the faces are barely identifiable and the clothes are "swapped" from one silhouette to another in order "to bring variation" into the crowd.
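The multiplication described here (300 captured extras becoming a crowd of 50,000) works by recombination: the same silhouettes reappear with swapped wardrobes and mirrored orientations so that no small set of identical figures stands out. A toy sketch of the combinatorics (the function name, the wardrobe-swapping scheme, and the counts are illustrative assumptions, not the studio's actual pipeline):

```python
import random

def build_crowd(silhouettes, wardrobes, size, seed=0):
    """Generate a large crowd by pairing each placed figure with a
    randomly chosen captured silhouette, a swapped wardrobe, and an
    optional mirror flip (toy model of VFX crowd multiplication)."""
    rng = random.Random(seed)
    return [
        {"silhouette": rng.choice(silhouettes),
         "wardrobe": rng.choice(wardrobes),
         "flip": rng.random() < 0.5}  # mirror some figures for extra variety
        for _ in range(size)
    ]

crowd = build_crowd(silhouettes=list(range(300)),
                    wardrobes=list(range(40)),
                    size=50_000)
# 300 silhouettes x 40 wardrobes x 2 orientations = 24,000 distinct looks,
# far more than the eye can pick apart in a background crowd.
print(len(crowd))
```

The point of the sketch is only the arithmetic: a few hundred captures yield tens of thousands of visually distinct combinations, which is why so few extras are needed.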

A member of the production team at Pathé admits that "we need to be more transparent about how these images are used and stored, and prove that they will serve only in the sequence the extras took part in, and will not be reused otherwise." Antoine Moulineau wants to reassure the extras: "There has never been any question of making a double of an actor from these photos [taken in front of a green screen] to make him perform just anything. We are barely capable of that today."

"It's a way to save money"

The film industry is nonetheless worried about these practices becoming widespread. They were even at the heart of the writers' and actors' strike in Hollywood. SAG-AFTRA, the actors' union, rejected a proposal made by the producers in mid-July. According to Duncan Crabtree-Ireland, its executive director, quoted by People magazine, the producers wanted "background performers to be able to be scanned, be paid for the day, and then have their image belong to the production companies, who could use it forever, for any project, without consent and without compensation." As it happens, Astrid was paid the same amount as for a standard day of shooting: 180 euros net. "It's a way for [the producers] to save money," Olivier agrees.

"For the scene where they scanned me, they needed 3,000 extras. So either they hire that many, or they pay me double or triple."

Olivier, actor

Without union intervention, these extras stay silent, for fear of "burning themselves" in the industry. But beyond pay, there is also the legal question of how the extras' images are processed, since they fall into the category of "sensitive data," analyzes Mathilde Croze. This lawyer, who specializes in new technologies, points out that personal data must be "processed proportionately" by producers. "How long are these images stored? For what purposes, where, and how?" she asks, criticizing "a total ignorance of the law." Nothing in the extras' contracts consulted by franceinfo answers these questions.

"Everyone is navigating murky waters. Nobody really knows what [these images] will be used for. But just in case, the productions have them in stock."

Mathilde Croze, lawyer

Extras are required to sign releases for the use of their image, including for "all methods and processes known or unknown to date," as the standard wording goes. "Everyone acknowledges that this is a question that must be addressed and regulated," says Jimmy Shuman, national adviser of the Syndicat français des artistes interprètes, affiliated with the CGT. He is campaigning for extras to be able to "add a line to their contract to prevent their image being used beyond their role in a given film."

"We will always need extras"

For its part, Pathé says it is thinking about "how to formalize things better so there is no more doubt" about the purpose of the images, starting when the extra is hired, "ahead of the shoot." After joining several picket lines in Los Angeles alongside his SAG-AFTRA counterparts, Jimmy Shuman says there is an urgent need to act, citing both virtual extras and AI-generated "deepfakes" of actors.

"Soon, extra work on period pieces won't exist anymore," Astrid laments. Nathalie de Médrano also says she is "very pessimistic about the future of extra work." "In ten years, there will perhaps be 10% of the paid gigs we have today," the casting coordinator anticipates.

"At this rate, in five years there will be far fewer extras; there will only be hyper-realistic digital doubles."

Lucien, actor

"Reducing the number of extras is by no means a given," Pathé insists, denying that the practice is "systematic." "We will always need extras," Antoine Moulineau likewise assures, if only to get good image quality on the faces in the front row of a crowd. "If an extra is just asked to walk in the background, then yes, he can be generated digitally," the visual effects supervisor nonetheless qualifies.

Antoine Moulineau is, however, far more concerned (like the extras interviewed) by the arrival of artificial intelligence in cinema. Already a threat to the dubbing industry, this technology further weakens extras. Digitally recreating an actor is already possible, but for now using AI costs "more" than having a real actor "perform," according to the visual effects specialist. Two upcoming deadlines could prove decisive. The negotiations in Hollywood, where actors remain mobilized, could lead to an agreement with producers that would serve as a model in France. And by the end of the year, the European Parliament is also due to regulate the use of artificial intelligence in Europe, including in cinema.

* First names have been changed.

Permalink
October 26, 2023 at 9:30:15 PM GMT+2

“A threat to the state”: Kenya takes on Worldcoin’s iris scanninghttps://www.courrierinternational.com/article/cryptomonnaie-menace-contre-l-etat-le-kenya-s-attaque-au-scan-de-l-iris-pratique-par-worldcoin

  • Crypto Currency
  • Privacy Rights
  • Spying
  • global spying
  • Blockchain

“A threat to the state”: Kenya takes on Worldcoin’s iris scanning

The project launched by Sam Altman, one of the founders of OpenAI, aims to promote a new cryptocurrency secured by biometric data, namely the pattern of the iris. In Kenya, it is said to have scanned the irises of 350,000 people in exchange for a sum of money in virtual currency.

Published October 12, 2023 at 4:32 p.m. Reading time: 1 min.

As recently as early August, the Kenyan daily The Standard was publishing photos of the long queues forming in Nairobi. Many residents had gathered at various spots in the capital to have their irises scanned in exchange for a sum of cryptocurrency. The operation was the work of Worldcoin, founded in 2019 by Sam Altman, cofounder of OpenAI. Its goal is to launch a new virtual currency tied to an anonymous biometric identification system.

But the project seems to have found little favor with local authorities. After the government ordered the temporary suspension of Worldcoin's scans on August 2, Kenyan members of parliament published a report in late September demanding that its activities be shut down for good, reports the British newspaper The Daily Telegraph. They invoke a "threat to the state" and accuse Worldcoin of "espionage."

The lawmakers have doubts about how this biometric data is stored and fear seeing it traded illegally for money. The parliamentary report also draws attention to the risk that the emergence of a decentralized currency poses to the country's financial system.

“A new global fever”

The Daily Telegraph notes that several million people around the world have already agreed to stand before Worldcoin's scanner.

“The device, shaped like a football, scans people’s irises to confirm their identity and create an account for them.”

In Kenya, the people who took part in the operation were rewarded with 25 tokens of the new worldcoin cryptocurrency, which they could then exchange for hard currency. Those 25 tokens are currently worth around 40 euros.

Worldcoin has been accused of profiting from the precarious living conditions of poor populations to roll out its project. "Faced with a very high cost of living, high unemployment, and stagnant wages, Kenyans jumped at this opportunity to earn money for doing nothing, thanks to the Worldcoin project, which has set off a new global fever," writes The Standard.

According to reporting by the Kenyan newspaper Nation, citing the work of the national cybersecurity committee, more than 350,000 Kenyans are said to have had their irises scanned.

Permalink
October 14, 2023 at 3:11:30 PM GMT+2

23andMe says private user data is up for sale after being scraped | Ars Technicahttps://arstechnica.com/security/2023/10/private-23andme-user-data-is-up-for-sale-after-online-scraping-spree/

  • Datafication
  • Genetics
  • Spying
  • Privacy Rights

23andMe says private user data is up for sale after being scraped

Records reportedly belong to millions of users who opted in to a relative-search feature.

Dan Goodin - 10/7/2023, 1:58 AM

Genetic profiling service 23andMe has commenced an investigation after private user data was scraped off its website.

Friday’s confirmation comes five days after an unknown entity took to an online crime forum to advertise the sale of private information for millions of 23andMe users. The forum posts claimed that the stolen data included origin estimation, phenotype, health information, photos, and identification data. The posts claimed that 23andMe’s CEO was aware the company had been “hacked” two months earlier and never revealed the incident. In a statement emailed after this post went live, a 23andMe representative said that "nothing they have posted publicly indicates they actually have any 'health information.' These are all unsubstantiated claims at this point."

23andMe officials on Friday confirmed that private data for some of its users is, in fact, up for sale. The cause of the leak, the officials said, is data scraping, a technique that essentially reassembles large amounts of data by systematically extracting smaller amounts of information available to individual users of a service. Attackers gained unauthorized access to the individual 23andMe accounts, all of which had been configured by the user to opt in to a DNA relative feature that allows them to find potential relatives.
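The scraping mechanic described above (many small, authorized-looking reads reassembled into one big dataset) can be sketched abstractly. Everything below is invented for illustration; it is not 23andMe's API or data:

```python
# Hypothetical sketch of "scraping by aggregation": each account only sees a
# small slice of the data (here, a few basic profiles), but an attacker who
# controls many compromised accounts can union those slices into one large
# dataset. All names, fields and numbers below are invented.

def visible_profiles(account_id, directory, k=3):
    """What a single opted-in account may see: a handful of basic profiles."""
    others = [p for p in directory if p["id"] != account_id]
    return [{"id": p["id"], "name": p["name"]} for p in others[:k]]

directory = [{"id": i, "name": f"user{i}"} for i in range(10)]
compromised = [0, 3, 7]  # accounts taken over via recycled credentials

scraped = {}
for acct in compromised:
    for profile in visible_profiles(acct, directory):
        scraped[profile["id"]] = profile  # deduplicate while unioning slices

print(len(scraped))  # the union grows with each additional compromised account
```

No single query looks abnormal; the leak emerges only from the aggregate, which is why per-account rate limits alone rarely stop this pattern.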

In a statement, the officials wrote:

We do not have any indication at this time that there has been a data security incident within our systems. Rather, the preliminary results of this investigation suggest that the login credentials used in these access attempts may have been gathered by a threat actor from data leaked during incidents involving other online platforms where users have recycled login credentials.

We believe that the threat actor may have then, in violation of our terms of service, accessed 23andme.com accounts without authorization and obtained information from those accounts. We are taking this issue seriously and will continue our investigation to confirm these preliminary results.

The DNA relative feature allows users who opt in to view basic profile information of others who also allow their profiles to be visible to DNA Relative participants, a spokesperson said. If the DNA of one opting-in user matches another, each gets to access the other’s ancestry information.

The crime forum post claimed the attackers obtained “13M pieces of data.” 23andMe officials have provided no details about the leaked information available online, the number of users it belongs to, or where it’s being made available. On Friday, The Record and Bleeping Computer reported that one leaked database contained information for 1 million users who were of Ashkenazi heritage, all of whom had opted in to the DNA relative service. The Record said a second database included 300,000 users who were of Chinese heritage who also had opted in.

The data included profile and account ID numbers, display names, gender, birth year, maternal and paternal haplogroups, ancestral heritage results, and data on whether or not each user has opted in to 23andMe’s health data. Some of this data is included only when users choose to share it.

The Record also reported that the 23andMe website allows people who know the profile ID of a user to view that user’s profile photo, name, birth year, and location. The 23andMe representative said that "anyone with a 23andMe account who has opted into DNA Relatives can view basic profile information of any other account who has also explicitly opted into making their profile visible to other DNA Relative participants."

By now, it has become clear that storing genetic information online carries risks. In 2018, MyHeritage revealed that email addresses and hashed passwords for more than 92 million users had been stolen through a breach of its network that occurred seven months earlier.

That same year, law enforcement officials in California said they used a different genealogy site to track down a long-sought suspect in a string of grisly murders that occurred 40 years earlier. Investigators matched DNA left at a crime scene with the suspect’s DNA. The suspect had never submitted a sample to the service, which is known as GEDMatch. Instead, the match was made with a GEDMatch user related to the suspect.

While there are benefits to storing genetic information online so people can trace their heritage and track down relatives, there are clear privacy threats. Even if a user chooses a strong password and uses two-factor authentication as 23andMe has long urged, their data can still be swept up in scraping incidents like the one recently confirmed. The only sure way to protect it from online theft is to not store it there in the first place.
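The "two-factor authentication" mentioned above is, in most consumer services, a time-based one-time password (TOTP, RFC 6238): a shared secret plus the current time yields a short-lived code, so a password recycled from another breach is not enough on its own. A minimal sketch (not 23andMe's actual implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = unix_time // step                      # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at T=59 yields "94287082" with 8 digits,
# i.e. "287082" with the usual 6.
print(totp(b"12345678901234567890", 59))  # → 287082
```

As the paragraph above points out, though, 2FA only protects the login it is enabled on; it does nothing about data exposed through other users' scraped accounts.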

This post has been updated to include details 23andMe provided.

Permalink
October 10, 2023 at 8:30:20 PM GMT+2

Banning VPNs, ending anonymity on the Internet... Why do our politicians propose unrealistic ideas?https://www.01net.com/actualites/supprimer-les-vpn-lever-lanonymat-sur-internet-pourquoi-nos-politiques-proposent-ils-des-idees-irrealistes.html

  • global spying
  • Politics
  • Police State
  • Spying
  • Privacy Rights

Banning VPNs, ending anonymity on the Internet… Why do our politicians propose unrealistic ideas?

October 1, 2023 at 8:21 a.m., by Stéphanie Bascou

Why did MPs propose banning VPNs or lifting online anonymity during work on the SREN bill? Beyond the technical and legal ignorance deplored by specialists, these two ideas appear to be a potent mix of several ingredients: no expert hearings, a nonexistent culture of compromise, the urge to make a name for oneself… all of it endangering our fundamental freedoms.

"It was a Lépine competition [France's inventors' contest] for the most divisive idea." In recent weeks, proposals meant to "end the digital Wild West" have multiplied around the SREN bill (aimed at securing and regulating the digital space) under discussion in the National Assembly. In response to several tragic suicides of teenagers who had been victims of cyberbullying, some MPs proposed shock measures in amendments to the bill, such as lifting the anonymity that supposedly reigns on the web, or banning VPNs. These measures, technically unworkable or dangerous for our fundamental rights, drew an outcry from civil liberties advocates. Computer scientists took care to dismantle the proposals one by one, deploring ideas disconnected from reality, as with our colleagues at France 3.

"Saying that whatever is forbidden in the physical world must be forbidden in the digital world is too simple, too blunt, not nuanced," MoDem MP Philippe Latombe hammered in committee on Tuesday, September 19. And although most of these ideas were ultimately withdrawn from the bill's amendments, the question remains: why did parliamentarians go for the "too simple," the technically unworkable, even the undesirable?

The world of a free Internet versus the world of a regulated Internet

First observation: this is not the first time that measures of this kind (with more than laudable goals, but disconnected from how the web works) have ended up in the public arena. Yes, everything must be done to end cyberbullying, and yes, minors must be better protected. But the problem is that "MPs (are) still just as clueless about digital laws," lamented one anonymous blogger on September 19.

Similar debates took place around the Avia law (2020) and the Hadopi law (2009). "Every time you get a new Internet law, two worlds clash: the world of a free Internet and that of a regulated Internet," comments Eric Barbry, partner at Cabinet Racine. "In between are those who try out ways of regulating it all under satisfactory conditions. But you cannot stop one fringe of the political class from wanting to go as far as possible, and thus push toward banning anonymity" online, adds the lawyer, who specializes in digital law.

The bill's rapporteur, Renaissance MP Paul Midy, thus defended the idea of tying a social network account to a digital identity, advocating "the end of anonymity in favor of pseudonymity." The system would work on the same principle as a license plate. Yet anonymity does not actually exist on the web: investigators always end up tracking down the authors of cyberbullying or death threats, even if it often takes far too long.

Renaissance MP Mounir Belhamiti, for his part, defended the idea of banning VPNs, even though they are heavily used by police officers, journalists, cybersecurity engineers and companies. In some countries, VPNs are a way to get around web censorship. Facing the uproar, the MP eventually backpedaled. Other measures in the bill, such as banning cyberbullies from social networks, also run up against technical feasibility, since they would force platforms to actively monitor that a given user does not recreate an account. So why float such ideas at all?

"People think they are digital experts because they have a TikTok account"

"Most people, and our politicians are among them, consider themselves more or less expert in digital tools because they use them, or because they have a TikTok account," says Benjamin Bayart, digital rights activist and co-founder of La Quadrature du Net. But "that does not mean they have understood how it works, or its effects on society. When you change the tools humans use to communicate, you change the way society is made," he adds. "Not to mention that, on top of it, they understand nothing about how computers technically work. That is why they say 'we'll ban VPNs' without the slightest idea of what that touches," he continues.

"Most of the experts on these issues I spoke with told me they had not been heard (by the MPs, editor's note), or had never been part of the debates. So I really wonder who was consulted to come up with this kind of proposal," asks Tariq Krim, former co-chair of the French Digital Council, entrepreneur and initiator of the Slow Web movement.

MPs are not supposed to be specialists in everything. But when they have to make decisions, particularly in fields they do not master, they are generally expected to rely on experts in the subject. Yet even when such hearings take place, something goes wrong. "Parliamentarians should turn to the researchers who have been working on these topics for years. They should avoid anyone with something to sell, like the first startup to come along saying 'look, I have a solution that will work great in Web3, I think it will let us identify everyone,'" jabs Laurent Chemla, computer scientist and co-founder of Gandi. The same story from digital law attorney Alexandre Archambault, who describes a legislator who "only hears people close to him, or people who agree with him."

"France can dictate things to Copains d'avant and Doctissimo, but that's all"

Legal experts also seem to have been missing, since MPs started from a blank page even though this bill was supposed to transpose a European regulation, the DSA (Digital Services Act). They should have confined themselves to marginal provisions, such as designating the national supervisory authorities. Imposing further obligations on social networks or web platforms "now falls (since the DSA) either exclusively to the European Commission, or to the country of establishment (the country where a company has its European headquarters, often Ireland for the tech giants), so France can dictate things to Copains d'avant and Doctissimo, but that's all," sums up Alexandre Archambault. Both of those platforms are headquartered in France.

That does not mean the "DSA is set in stone. The European Commission is simply saying: if we want to improve the protection of minors or better fight hateful content or disinformation, let's sit around a table and see how to improve things, but at the European level. If every country acts in scattered order, each in its own corner, as France is doing, it is untenable," he adds. That is the point of "the very severe observations the European Commission issued on August 2, which directly target the SREN bill." Their author, Thierry Breton, the European Commissioner for the Internal Market, says in essence: "you cannot force web companies into more obligations than European law provides. It is neither done nor to be done, it contributes to the fragmentation of Union law, and we will oppose it. And there you have our parliamentarians taking no notice of it, in headlong flight," continues Archambault, deploring "yet another law of circumstance, dictated by emotion."

"The only way they can exist is to play the clown"

So why advocate banning VPNs or anonymity? Among most of the experts we interviewed, the same explanation comes up. Parliamentarians are engaging in "a Lépine competition of the most divisive measures" because it is, for them, a way to make a name for themselves, laments Archambault. "The only way they can exist is to play the clown. In France, the legislative branch is extremely weak, as we saw with the pension reform. So MPs do whatever it takes to exist," agrees Benjamin Bayart. "All the more so since the National Assembly does not look for compromise, unlike the European Parliament, where passing a text requires consensus among several groups. That means all the somewhat aberrant solutions get discarded, which is not the case in France," he adds.

As a result, the bill's rapporteur, Paul Midy, was little known until then. But with his media splash on lifting anonymity, he appeared in every outlet. And that had a consequence: the idea of having to identify yourself with an ID document to use social networks, like that of banning VPNs, was widely circulated. And such ideas could end up seeping into society, warns Laurent Chemla.

For the digital liberties activist, "ideas known to be unworkable are pushed in order to gradually instill in the public mind the notion that public speech must indeed be regulated, social networks must be regulated, people must be prevented from having total freedom of expression. By repeating for more than 27 years that the Internet is a lawless zone, you end up convincing the public that yes, it's true, there is a problem with the Internet, that it is not normal that anyone can say anything. So there is this desire to shift public opinion toward more control over citizens' speech, and to do that, things known to be unrealistic are proposed, which little by little lead to this type of proposal," he analyzes.

Technology, after all, now offers a tracing capability that did not exist before, acknowledges Pierre Beyssac, spokesperson for the Parti Pirate and co-founder of Gandi. "When you establish a network or mobile connection, you have to know where the user is. It is therefore tempting for the police, who know how this works technically, to exploit that technology to fight crime and delinquency." But the mere existence of this capability does not mean it should be used.

Because "if you think about it, it would amount to trying to make impossible everything that could be illegal on the Internet," adds Laurent Chemla. Yet "in real life, committing offenses or crimes is not impossible; it is punished after the fact, not made impossible beforehand, otherwise every citizen would have a camera behind their head at all times," the computer scientist points out. "On the Internet, all these measures (identity checks on users, banning VPNs) seem to have one goal: making every crime, every offense, every abuse impossible a priori, and that is delusional; you cannot do that," he argues.

"Since we can watch everyone, we watch everyone"

And while many understand the argument of police officers who say "'if we are not allowed to surveil the population and access all the digital data we can reach by watching people, investigations get more complicated,' it should be remembered that in real life, too, investigations are complicated," insists Laurent Chemla. "I looked at the figures on car theft: 6% of cases are solved and 94% of thefts go unpunished. And yet nobody tries to make car theft completely impossible by putting cameras in every garage. But of course, since it's digital, we tell ourselves: since we can watch everyone, we watch everyone," he notes.

We have supposedly gone from "everyone is innocent until proven guilty" to "everyone is guilty until proven innocent," laments Benjamin Bayart, regarding for instance minors' access to pornographic sites. "To 'minors must not access porn,' the only answer we have is: 'every adult will have to prove they are an adult at every single click,'" he notes.

"By default, you are a suspect"

As we explained last March, another bill (establishing a digital age of majority and fighting online hate), enacted last July, envisions for pornographic sites a technical solution, one that does not yet exist, to verify that a user is of age. It would rely on a trusted third party that would issue a token and confirm the user's majority to the site in question, presumably at every connection.
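The token scheme described here is unspecified in the law; purely as a hypothetical sketch, the trusted third party could sign an "over 18" claim that carries no identity, which the site then verifies. A real design would use public-key signatures (the site holding only a verification key, so it cannot mint tokens); a shared HMAC key is used below solely to keep the sketch short, and every name is invented:

```python
# Hypothetical sketch of the "trusted third party" age-token scheme.
import base64
import hashlib
import hmac
import json
import secrets
import time

ISSUER_KEY = secrets.token_bytes(32)  # held by the age-verification provider

def issue_age_token(ttl: int = 300) -> str:
    """Issuer side: after checking age once, sign a short-lived, anonymous claim."""
    claim = json.dumps({"over18": True, "exp": int(time.time()) + ttl})
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(claim.encode()).decode() + "." + sig

def verify_age_token(token: str) -> bool:
    """Site side: check signature and expiry; learn nothing about who the user is."""
    payload_b64, sig = token.rsplit(".", 1)
    claim = base64.urlsafe_b64decode(payload_b64.encode())
    good = hmac.compare_digest(
        sig, hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest())
    data = json.loads(claim)
    return bool(good and data.get("over18") and data["exp"] > time.time())

tok = issue_age_token()
print(verify_age_token(tok))  # → True
```

Even this idealized form keeps the property the article criticizes: the proof has to be presented again at every connection.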

Problem: this solution has supposedly been under experimentation for months, with no feedback leaking out. The only recent news of this "experiment": Jean-Noël Barrot, the minister in charge of digital affairs, half-conceded that it does not work 100% for now. "I'm not saying it works at full blast," he said on September 20 to our colleagues at Tech&Co. For many, the solution named in the law and under trial will follow the same trajectory as in the United Kingdom and Australia. Both countries also wanted to set up a similar age check before abandoning it, having failed to find a solution that works.

Yet with such a system, "the idea creeps into people's heads that it is up to you to prove you are innocent because, by default, you are a suspect. And you have to prove it constantly, not prove it once and only once, but prove it all the time, all the time, all the time," repeats Benjamin Bayart.

So how do we avoid this paradigm shift and stop seeing such hasty, sometimes dangerous proposals land in the public debate? How do we preserve our liberties while fighting cyberbullying and better protecting minors?

Why are there 120 magistrates dedicated to digital issues in Berlin, while Paris has only six?

On Parliament's side, MPs should be given back real power, argues Benjamin Bayart. And they should do more oversight of government action, as other countries do, says Archambault. "Rather than passing new laws, hold commissions of inquiry, asking for instance why there are 120 magistrates dedicated to digital issues in Berlin while in Paris there are only… six. Why does it take three years, an eternity, for the harassers of Mila or Hoshi to be convicted, even though they are lucky enough to be well advised?"

In this debate, we must above all not lose sight of our fundamental rights

Politicians listening more to experts, the media covering more convictions of cyberbullies, and far greater resources for the justice system and for cyberbullying prevention are among the avenues for improvement cited. Large national campaigns, like those on alcohol in the 1980s, when France went "from repression to prevention," are also mentioned.

It should also be made easier to lift anonymity in cases of online harassment or insulting or defamatory statements, explains Barbry. It is a point of procedure, but an important one. "For content removal, we increasingly rely on the means offered by online platforms, but it is very hit-or-miss and clearly insufficient. And being able to identify people has become very complicated. Yet the only way to repair a harm is to have someone in front of you who answers for their actions," he notes. The reason? A rule change that now requires, when you are trying to find out who is behind a cyberbully's account, an accelerated procedure on the merits rather than summary proceedings. "The procedure is fairer from a procedural standpoint, but much longer and more complex for victims," he sums up.

For Pierre Beyssac, we must now accept that "today's world of networks is no longer the world of the past. We must try to find suitable solutions, while resisting the temptation to use technology to reduce our liberties," says the computer scientist, who heads Eriomem. "As our everyday lives rest more and more on these networks, our fundamental rights will depend on them more and more," he says. "It is therefore essential to defend them tooth and nail, and to do everything not to shoot ourselves in the foot with regard to the society we want to build."

Permalink
October 7, 2023 at 5:05:08 PM GMT+2

And now, a ban on certain VPNs on smartphones in France? - Numeramahttps://www.numerama.com/politique/1518610-et-maintenant-linterdiction-de-certains-vpn-en-france-sur-smartphone.html

  • Politics
  • Police State
  • digital law
  • Privacy Rights
  • global spying
  • Spying

And now, a ban on certain VPNs on smartphones in France?

Julien Lausson: The amendment that admits its own ineffectiveness

Certain VPNs could be excluded from the App Store and Google Play in France. An amendment seeks to make their visibility on smartphone app stores conditional on compliance with the law.

A new assault on VPNs in the National Assembly, as the bill on securing and regulating the digital space (the SREN law) enters public debate on October 4. MPs from the Horizons group and affiliates (center right) are backing a new amendment that would exclude certain VPNs from the App Store (iOS) and Google Play (Android).

The amendment's stated goal is to prevent Internet users from accessing VPN applications that would allow "access to an Internet network not subject to French or European law and regulation." But, reckoning that the providers of these VPNs might not play along, the MPs are counting on Google and Apple to do the sorting.

The proposed legislative update would in effect entrust the two American companies with policing each store. Google Play and the App Store would have to make sure they keep only applications that comply with the law. Otherwise, the offending store would face a fine of up to 1% of its worldwide revenue.

This is not the first time the question of VPNs has come up in the SREN bill. In committee, another amendment, since withdrawn, had been tabled to ban using a VPN to interact actively on a social network. In plain terms: using it to browse the community site, yes; using it to post or comment, no.

The amendment, which is among the thousand or so tabled for the public session, following the 952 examined in committee, still has to be debated, assuming it is not ruled inadmissible by the National Assembly's services. VPNs are also the subject of two other amendments, though those merely request the production of reports (amendments 662 and 916).

Technical weaknesses already identified

Flagged on X by journalist Marc Rees, the amendment runs into notable technical limits. First, it does nothing about the ability to use a VPN on a computer. Yet market offerings provide everything needed to connect to a virtual private network from a PC running Windows, Mac or Linux.

Another issue: an application can be installed while skipping the official store. It takes a bit of elbow grease, and not everyone will know how (or dare) to install an APK on their smartphone, but it remains an accessible operation. From a strict IT security standpoint, though, it is not the most advisable route if you do not know what you are doing.

A mobile application can be installed without going through an official store

In any case, this is the method used to grab apps ahead of release (Threads, ChatGPT, Mario Kart), for instance to escape some geographic block… It is also how you can install an old version of a mobile application, to preserve compatibility or get a feature back.

The technical limits appear to be acknowledged by the parliamentarians themselves, who say they are "aware of the technical impossibility of regulating the use of VPNs, notably for the purpose of circumventing the law." The point would be less to propose an applicable amendment than to raise the subject "so as, in time, to find a relevant and effective technical solution."

Permalink
October 4, 2023 at 1:50:19 PM GMT+2

Pornhub, YouPorn, Xvideos… How the "tubes" upended pornhttps://www.lemonde.fr/pixels/article/2023/10/03/pornhub-youporn-xvideos-comment-les-tubes-ont-bouleverse-le-porno_6192264_4408996.html

  • NSFW
  • Societal Collapse
  • Streaming
  • Pornography

Pornhub, YouPorn, Xvideos… How the "tubes" upended porn

By Pauline Croquet, Damien Leloup and Florian Reynaud

Published yesterday at 8:00 p.m., updated at 1:36 a.m., on Le Monde

In just a few years, these sites imposed easy, free access to a mass of adult content, often barely policed, as the model. They are now at the heart of a report by the Haut Conseil à l'égalité, and of the debates on age verification tools for Internet users, discussed Wednesday in the National Assembly.

August 2006. While YouTube is still a little-known "small" site (Google will only buy it three months later), a pornographic site of a new kind quietly goes online: a similar name, a homepage with image thumbnails, the ability to upload your own content… YouPorn borrows the codes of what will become the world giant of online video.

And success is immediate. At the end of November, the site's anonymous administrators replace the homepage with an error message: "We are out of bandwidth! We used 31 terabytes in two days, over 2,300,000 downloads [views]. We are looking for unlimited server providers in Europe."

These audience figures, stratospheric for the time, do not go unnoticed. The following year, dozens of clones appear online, some of which are still among the world's most visited sites today. Pornhub, RedTube, Xvideos and Xhamster launch within a few months of one another, marking the start of a war of the "tubes" (the name that would come to designate sites of this kind) that would last for years.

Setting up such a site is not particularly complicated. The technical tools, notably the video compression algorithms essential to keeping bandwidth costs down, spread very quickly. Finding content is just as simple: apart from a handful of amateur videos shot and uploaded by users, the overwhelming majority is pillaged from paid sites or other "tubes." No moderation, no checks on users' age, no scruples, but gigantic audiences: within a few years, the "tube" model upends the world of online pornography.

A porn oligarchy

In the chaos of those early years, one man plays a decisive role: Fabian Thylmann. This young German specialized in advertising for adult sites and invested in several "tubes," eventually taking control of some, notably Pornhub and YouPorn. At the time, his various platforms were the target of dozens of copyright infringement suits filed by porn production companies and studios.

Thanks to the money piled up by his sites, Fabian Thylmann partly solves the problem… by buying the companies suing him. Brazzers, which had agreements with certain "tubes," is acquired in 2009; three years later, the behemoth Digital Playground and Reality Kings also come under his control.

Fabian Thylmann's lightning ascent comes to an abrupt halt in 2012, when he is arrested and indicted for tax fraud. The following year, he sells his porn empire, considered the largest in the world, to two other entrepreneurs in the sector, the Canadians Feras Antoon (co-founder of Brazzers) and David Tassilo. Thus is born MindGeek, today renamed Aylo. Its main competitor, WGCZ Holding (Xvideos, Xnxx…), founded by the French Stéphane and Malorie Pacaud, likewise buys or launches studios alongside its "tube" sites, such as Bang Bros and Legal Porno. Together with Wisebits Group (Xhamster, Stripchat), these three holdings control the overwhelming majority of the world's most popular free adult sites.

In a few years, a handful of platforms thus revolutionized the way Internet users access pornographic videos. "When the first 'tubes' appeared, nobody realized just how completely they would change video consumption habits," recalls Carmina, independent adult film actress and director and editor-in-chief of the specialist site Le Tag parfait. "But that is not specific to porn: it was exactly the same with YouTube!"

Changement d’échelle et « prescription »

​ Des sites pour adultes existaient bien sûr avant l’arrivée de Xvideos ou Pornhub, mais les « tubes » ont solidifié l’idée que la pornographie était, au moins en partie, accessible gratuitement en ligne, donnant lieu à un changement d’échelle. « La montée des “tubes”, avec leur offre infinie de vidéos gratuites, a coïncidé avec le déclin global de l’économie de 2008 (…), et la très forte demande a entériné le fait que de nombreux utilisateurs ne seraient plus prêts à payer pour leur porno », expliquait en 2019 la chercheuse Margaret MacDonald (aujourd’hui au conseil consultatif de la maison mère de Pornhub) dans sa thèse sur l’industrie du porno.

The advent of these platforms also coincided with the arrival of smartphones. Together, the two brought porn into the domestic sphere, according to researcher Florian Vörös. "You can watch it on the move, in the toilet at work; that shifts practices toward a kind of consumption that slots into the routines of everyday life," explains the sociologist at the University of Lille.

To the point of creating addictions? Responding to a call for testimonies from Le Monde, several readers confided their unease about their own consumption, often citing the ease of access to pornography as a decisive factor. And on the American forum Reddit, some communities have for years gathered hundreds of thousands of people expressing distress caused by massive consumption of adult content. Even though, from a scientific standpoint, pornography addiction is still the subject of much debate.

Still, by making every type of sexuality accessible, the "tubes" have also made them more visible. On the one hand, this allows people to explore their sexuality freely, including when it runs against societal taboos: in the United States, several studies have shown that consumption of videos featuring homosexual or transgender people was proportionally higher in the most conservative states. On the other, these platforms also make violent or extreme content more accessible, all the more so when they are poorly moderated, or not moderated at all.

Since their emergence, a heated controversy has surrounded their "prescriptive" character, echoing a broader, similar debate about pornography's impact on the sexuality of its consumers. Pornhub and its competitors are thus regularly accused of promoting a violent, caricatural or unhealthy image of sexuality on their home pages and in their video suggestions, notably through the system of "tags". These keywords can refer to physical descriptions of the performers as much as to the sexual acts depicted or the videos' setting. Many of them are regularly denounced as racist and misogynistic, or for promoting incest.

"Sure, when you have a specific interest, a 'kink', it's super-convenient for finding matching videos," Carmina concedes. "The problem is that the tags tend to be heavily sexist, racist, classist, heteronormative… Categorizing bodies with tags can be a problem; but the issues with how women's bodies are depicted are not a debate specific to porn, they exist in every industry, including cinema."

The battle over age verification

Now impossible to ignore, most of the big "tubes" have in recent years tried to establish a form of legitimacy and begin a process of "normalization". Some have put an end to large-scale piracy of content and launched services to pay the "models", and all those belonging to large groups have struck deals with paid platforms. Early this year, MindGeek was even bought by Ethical Capital Partners, a Canadian investment fund.

But all of them still face lawsuits and investigations in various countries. Numerous recent press reports have shown that these sites continue to host illegal videos, notably of sexual violence, and fail to respond, or respond too late, to takedown requests concerning revenge porn, the practice of distributing images meant to remain private without the subject's consent. The studio Legal Porno in particular is accused of imposing extremely violent working conditions and scenes on actresses: an investigation has been opened in the Czech Republic. In France, the French Bukkake scandal revealed violent practices and an organized system of "predation" in the shoots for that site, which is paywalled but whose videos also end up on certain "tubes".

Pornhub, for its part, played a central role in the Girls Do Porn scandal, a studio accused of running a vast system of intimidation and coercion to force women to shoot scenes by promising them confidential distribution, even as the videos were published on high-traffic sites. The platform was also embroiled in another scandal in 2020, after the New York Times published an investigation into the presence on the site of videos of sexual crimes and rapes. At the time, Visa and Mastercard cut off their services to Pornhub, and the company hastily deleted a large share of the content uploaded by its users.

Four "tube" platforms (Pornhub, Xvideos, Xnxx and Xhamster) are directly targeted by the recent, and controversial, report on pornography by France's High Council for Equality between Women and Men, published on September 27.

More threatening still for these sites, a good ten American states and several countries, including France, have sought over the past three years to impose strict age verification on their visitors. In Louisiana, where a verification mechanism was introduced, Pornhub "instantly saw its traffic drop by 80%", Solomon Friedman, one of the executives of Aylo, the platform's owner, told Le Monde in mid-July. In France, the bill to secure and regulate the digital space, debated in the National Assembly on Wednesday, October 4, seeks to allow the blocking of non-compliant sites without going through a court ruling.

Everywhere they are proposed, these age-verification laws are fiercely contested by the "tubes". Double-anonymity systems, credit-card checks, facial analysis… while the technical details of such verification are up for debate, what is really at stake is their business model. For sites that built their empires on advertising revenue and an ocean of content accessible in two clicks, any verification system will be an obstacle. And therefore an existential threat.

Permalink
October 4, 2023 at 1:36:01 PM GMT+2

China confronts the trafficking of AI's "stolen faces"https://www.courrierinternational.com/article/cybercriminalite-la-chine-confrontee-au-trafic-des-visages-voles-de-l-intelligence-artificielle

  • Artificial intelligence
  • Deepfake
  • Societal Collapse


Blackmail with fake sex tapes, banking scams… In China, the development of artificial intelligence (AI) is taking online fraud to an unprecedented level. In a society where everything is recorded, from surveillance cameras to facial recognition on smartphones, data on people's faces and voices sells for a pittance on the Internet. The victims quite literally "lose face".

Xinjing Bao, by Wang Chang. Translated from the Chinese. Published today at 5:00 a.m. 9 min read.

The video call lasted only seven seconds. Long enough, however, for Fang Yangyu to be convinced that the face and voice were those of one of his relatives. And to transfer 300,000 yuan [nearly 39,000 euros] to a bank account.

"In fact, it was all fake!" says commissioner Zhang Zhenhua, of the Shanghe public security bureau [a district in Shandong province, in eastern China]. "It was an AI scam, of the kind we have seen a lot of lately."

The case dates from May 29: Fang Yangyu, who lives in Jinan [the capital of Shandong], is watching short videos at home when he receives a message from a stranger who introduces himself as a family member and sends his QQ handle [QQ is one of China's main messaging networks]. No sooner has Fang Yangyu added the contact than he receives a video call from someone who looks every bit like one of his "cousins".

Citing poor network quality, his interlocutor hangs up after a few exchanged sentences. The conversation continues in the chat: the "cousin" explains that he urgently needs to transfer a sum of money but cannot do it directly. He would therefore like to first wire the funds to Fang Yangyu's account, so that Fang can forward them to a given bank card.

At the other end of China

He sends two screenshots attesting that the sums have been duly wired to Fang Yangyu's account; Fang is nonetheless surprised not to have received any notification from his bank. "It should arrive within twenty-four hours. In any case, the bank receipts are proof," his "cousin" assures him, gently ratcheting up the pressure. Faced with his repeated requests, Fang ends up transferring the 300,000 yuan to the indicated account.

Shortly afterwards, his interlocutor asks him to transfer a further 350,000 yuan. Fang Yangyu grows suspicious, remembering an anti-fraud awareness message; he telephones another family member [to check the "cousin's" identity] and finally uncovers the scheme.

That same evening, he alerts the police, who find that his bank card has been used in a jewelry store in Guangdong province [at the other end of China, in the southeast]. The next day, local police arrest six suspects in the city of Dongguan.

They discover that the mastermind of this AI scam is based in northern Myanmar. The six individuals arrested in China, for their part, had organized themselves to launder money for fraudsters located abroad, dividing up the tasks (buying gold, depositing cash at the bank, making contacts online, etc.).

Data leaks at the root of the problem

These cases of AI-assisted fraud affect the whole of China. Wang Jie, an associate researcher in law at the Beijing Academy of Social Sciences, says he first heard of this kind of scam in 2019, when a foreign student believed he was talking to his parents by video call when it was in fact a deepfake made by criminals. Before that, similar cases of fraudulent AI face-swapping had been handled by police in Harbin (northeast China) and Fuzhou (southeast).

"Behind AI scams, there is always a data leak," stresses Wang Jie. For in the age of AI, the human voice and face have become data that can be traded and turned into profit.

Indeed, many people "lose face" without realizing it. A few seconds are enough, as Pan Ziping, a resident of Anhui province in eastern China, learned to his cost.

On the evening of March 24, engrossed in a fantasy novel on his phone, he inadvertently taps an ad while scrolling through the text. The tap triggers the download of a game. Out of curiosity, Pan Ziping tries playing it, then uninstalls the program, which he finds uninteresting.

Ten fatal seconds

Immediately afterwards, he receives a phone call from abroad. The caller claims to have access to everything in his smartphone, in particular his photo gallery and his contact list, and suggests discussing it on QQ. Without thinking much of it, Pan Ziping adds him to his contacts. Right away, he receives a video call. The man, whose own camera is off, quotes the names of several people in Pan's address book, then ends the video call.

A few minutes later, Pan Ziping receives via QQ a pornographic video about ten seconds long: it shows a naked man in the act; but the man's face is his own. Pan Ziping is stunned: "So that's what today's technology can do!" While he is still reeling, another, threatening phone call comes in:

"If you don't pay me 38,000 yuan [nearly 5,000 euros], I'll send your 'little video' to your entire address book!"

To back this up, the man attaches a screenshot showing the video ready to be sent; one click, and all of Pan Ziping's friends and contacts receive it…

Pan Ziping then shares his screen to show his interlocutor that he does not have enough money in his Alipay and WeChat accounts [which are needed for the transfers]. The man lowers his price, demanding only 28,000 yuan, then 18,000, and finally 8,000 yuan [a little over 1,000 euros]. But Pan Ziping is adamant: it is beyond his means. His interlocutor therefore pushes him to borrow the sums on online lending platforms.

Child's play

Pan hesitates, prepares the transfer… Then he finally leaves the call and dials 110 [the police emergency number]. But the officer on the line refuses to take his complaint, on the grounds that there is no proven harm. Pan Ziping asks what he should do about the AI-faked porn video. He is told that the police have no means of destroying it, and that his only option is to send a group message explaining the affair to his entire address book.

In the course of his research, Wang Jie has documented numerous cases of personal data being lost by individuals who fell victim to scams after visiting dubious websites. He believes that with today's techniques, "capturing facial data has become child's play". The data is collected without our knowledge by ubiquitous surveillance cameras, by our smartphones' facial recognition systems, and by apps that request access to our photo galleries.

In 2021, in Hefei [the capital of Anhui], police flushed out a group of criminals who were using AI techniques to doctor people's faces in FMVs [full motion videos, scenes reconstructed from prerecorded video files]. On the suspects' computers, investigators found some ten gigabytes of facial data that had changed hands many times on the Internet, without, of course, the knowledge of the people concerned.

Unenforceable regulations

Among the thresholds crossed by artificial intelligence technology, AI face-swap tools are now within everyone's reach.

As early as 2019, one such app, called ZAO, was all the rage [in China] before being withdrawn for copyright infringement and invasion of privacy, among other things. Its users only had to supply a photo of their face to find themselves, in videos, in the place of their favorite film or series character.

Liu Xianquan, a specialist in criminal law, warns of the serious dangers that can arise from even the most trivial misuse:

"In fact, the problem is not so much the AI face-swapping technology itself as the way it is used."

On January 10, China introduced regulations restricting the deepfake services offered on the Chinese Internet. They stipulate that providers of such deepfake services must add a feature that clearly identifies content as digitally altered.

Moreover, when they offer edits based on biometric data such as a person's voice or face, providers are required to inform their customers of the obligation to obtain that person's consent. The problem is that AI face-swapping services are very often sold covertly online, which makes the regulations particularly difficult to enforce.

Recreating the unseen parts

Online services offer to swap the faces in photos for 35, 50 or 100 yuan [4.50 to 13 euros]. For face swaps in videos, pricing is by the minute, from 70 to 400 yuan [9 to 50 euros].

"Any face can be swapped," says one of these merchants, who goes by "Engineer Zhang". Given a photo or video of a face, he can switch it with that of a celebrity, for example, but also "resurrect" dead people on video.

Engineer Zhang does not only sell turnkey services; he also teaches face-swapping techniques. "With us, you can buy a tutorial and learn to do everything yourself," he says. He has developed his own algorithm, which he sells for 368 yuan as an extension on the [e-commerce] platform Taobao, with unlimited use for… fifty years!

For a more natural result, some of these merchants advise supplying a source photo taken from the same angle as the target. But another vendor claims to achieve a strikingly lifelike result with just one front-facing photo:

"Thanks to automated machine learning, we can reconstruct a face in its smallest details, including the parts that cannot be seen."

Twenty euros for a porn video

At the Jielun studio, made-to-order ordinary videos cost 70 yuan per minute, and obscene videos 150 yuan [20 euros]. Our investigation confirms that barely two hours of work are needed to produce one minute of custom face-swapped porn.

During our exchanges, the studio's owner repeatedly asked to withdraw information he considered "sensitive". At no point, however, did he indicate any intention of informing the "edited individuals" that their facial data was being used. And nowhere does the doctored video mention an AI face swap.

But the "merchant" hides behind what he calls the "liability waiver" attached to the video. It stipulates that "any distribution of graphic or video material is prohibited, and the producer assumes no consequences. The video is made for entertainment purposes only, and we cannot under any circumstances be held responsible for the use of the images and videos, or for any other damage."

The Jielun studio also offers free AI face-swap apps and software. A quick search on TikTok turns up numerous ads with download links.

Victims' rights forgotten

Then a single click suffices: a twenty-five-second ad plays, after which the app can be used free of charge to produce a faked video about ten seconds long, based on a whole series of short template videos of celebrities and ordinary people, all available on the home page.

"It's like when someone buys a knife and commits murder with it. Would anyone think of blaming the knife, or whoever sold it?"

For Gan Shirong, of the Beijing law firm Huacheng, the problem is not the technology but the user who commits an illegal act with it. The jurist stresses, moreover, that the "unregulated" sale of this kind of technology naturally increases the risk of lawbreaking and makes it harder to control.

Above all, it remains very difficult to defend the rights of victims of AI-assisted identity theft. As Liu Xianquan observes, from a legal standpoint there is currently no relevant regulation on the use and development of artificial intelligence technologies.

As for Pan Ziping, he was ultimately unable to file a complaint over the theft of his face and its use in a pornographic video. The affair had no financial consequences for him [since he refused to pay the blackmail], but he could neither track down the person who stole his face nor prevent the video's distribution. His only recourse was to send a group message to everyone in his address book, asking them not to share the video. And, despite the images, not to believe what it shows.

Permalink
October 4, 2023 at 10:42:39 AM GMT+2

Online tracking: Data harvesters came for your privacy – and found it | New Scientisthttps://www.newscientist.com/article/mg25934532-700-nowhere-to-hide-data-harvesters-came-for-your-privacy-and-found-it/

  • Tracking
  • Spying
  • Economy and markets
  • Big Data
  • Big Corpo
  • Privacy Rights

The profiling of our personal data has real consequences for our lives

Why did that job offer never reach you? Why can't you get that loan? Blame your personal data. Beyond advertising profiles, it is now used to determine how you work, your psychological profile, or whether you spend too much. It is time to take back control, argues this journalist in New Scientist.

One Friday in 2021, I walked into a hotel in Exeter, England, at 5:57:35 pm. The next morning, I drove nine minutes to the nearby hospital. I stayed there three days. The drive home, which normally takes an hour and a quarter, took me an hour and forty minutes. Why the slower speed? Because I was carrying my newborn in the back.

This is not an extract from my diary. It is what Google knows about the day my daughter was born, from my location history alone.

And the personal data amassed by other companies that weekend lets them know much more still. Netflix remembers that I watched several light comedies, including Gilmore Girls and Comment se faire larguer en 10 leçons. Instagram noted that I liked a post about induced labor and did not log back in for a week.

So what? We all know by now that our every online activity is tracked and that the data collected is extremely detailed and accumulates continuously. Perhaps you even appreciate that Netflix and Instagram know your tastes and preferences so well.

"Plenty to be horrified about"

Yet investigations and lawsuits are multiplying, and they paint a picture in which the collection of our data has far more insidious effects than most of us imagine. Digging into the subject, I discovered that the collection of my personal data could affect my job prospects, my credit applications and my access to healthcare.

In other words, this practice potentially has repercussions on my life that I am not even aware of. "It's a huge problem, and every day there is something to be horrified about," sums up Reuben Binns of the University of Oxford.

You might think that with the introduction in 2018 of the GDPR (General Data Protection Regulation), the European law that gives Internet users more control over the collection and use of their personal data, privacy issues had essentially been solved. After all, you just have to decline the cookies and you won't be tracked, right? When I put this reasoning to Pam Dixon, of the World Privacy Forum, she bursts into incredulous laughter. "You really believe that?" she says.

95% of sites in breach

Hundreds of fines have already been handed down for GDPR violations, notably against Google, British Airways and Amazon. But for specialists, this is only the tip of the iceberg. According to a study led last year by David Basin of ETH Zurich, nearly 95% of websites could be in breach.

While the law was supposed to help citizens better understand what data they consent to having collected, several studies show that brands' privacy policies have become more and more complex, not less. And if you think you are protected by ad blockers and by VPNs that mask your IP address, think again. Many of these services also sell your data.

We are only beginning to grasp the scale and complexity of the problem. A handful of large companies, Google, Meta, Amazon and Microsoft, weigh heavily in the equation, notes Isabel Wagner, a cybersecurity researcher at the University of Basel, Switzerland. But behind them hides a myriad of actors, hundreds or even millions of companies, that buy, sell, host, track and analyze our personal data.

What does that mean for an ordinary person like me? To find out, I traveled to Lausanne, to HestiaLabs, a start-up founded by Paul-Olivier Dehaye, a mathematician and the main whistleblower in the Cambridge Analytica scandal. That political consulting firm had illegally used Facebook users' data to tilt the 2016 presidential election in favor of Donald Trump. Paul-Olivier Dehaye's investigation into Cambridge Analytica revealed just how far the influence of data sellers and buyers extends. It was to change that that he created HestiaLabs.

Before our meeting, I asked several companies to provide me with the personal data they hold on me, a more laborious process than you would expect in the GDPR era. Then I meet Charles Foucault-Dumas, project lead at HestiaLabs, at the company's offices, a modest coworking space opposite Lausanne's train station. Sitting at his computer, we upload my data to its portal.

My data appears before me as a map showing every place I have been, every "like" I have handed out and every app that has contacted an advertising network. At the places I visit regularly, such as my daughter's nursery, hundreds of data points form large colored blotches. My home address is marked by an enormous dot, impossible to miss. It is eye-opening. And a little terrifying.
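The dense clusters such a map reveals can be reproduced from a raw export with a few lines of code. Below is a minimal sketch, assuming a Google-Takeout-style location history (the `latitudeE7`/`longitudeE7` fields come from that format; the sample coordinates are invented for illustration):

```python
import json
from collections import Counter

def frequent_places(records_json: str, precision: int = 3) -> Counter:
    """Count location points per rounded (lat, lon) cell.

    Assumes a Takeout-style export: a JSON object with a "locations"
    list of {"latitudeE7": int, "longitudeE7": int} entries.
    Rounding to ~3 decimals groups points within roughly 100 m.
    """
    data = json.loads(records_json)
    cells = Counter()
    for loc in data.get("locations", []):
        lat = round(loc["latitudeE7"] / 1e7, precision)
        lon = round(loc["longitudeE7"] / 1e7, precision)
        cells[(lat, lon)] += 1
    return cells

# Invented sample: five points near one "home" cell, one one-off stop.
sample = json.dumps({"locations":
    [{"latitudeE7": 465197000 + i, "longitudeE7": 66323000} for i in range(5)]
    + [{"latitudeE7": 465300000, "longitudeE7": 66400000}]})
top = frequent_places(sample).most_common(1)[0]
print(top)  # the most-visited cell and its point count: ((46.52, 6.632), 5)
```

Nothing more than rounding and counting is needed to make a home address or a nursery stand out, which is exactly why raw location history is so revealing.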

A fan of rugby, cats and Burning Man?

The biggest surprise is discovering which apps contact third-party services on my behalf. Last week, the worst offender, with 29 companies contacted, was an Internet browser that specifically prides itself on respecting your privacy. But in the end, whether it is a simple note-taking tool or a grocery app, virtually every app on my phone is constantly pinging companies while I go about my life.

As a rule, a company selling a product or service goes through an advertising agency that connects to platforms for selling, buying and exchanging ad space, themselves linked to ad networks responsible for placing the ads on a given medium. Every time you visit a website or hover over a post on a social network, this entire machinery springs into action, generating more than 175 billion euros a year.
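In practice, that machinery is a real-time auction: a request carrying profile data fans out to ad networks, and the highest bid wins the slot. The toy model below sketches the idea only; all names and numbers are invented and this is not any real exchange's API:

```python
from dataclasses import dataclass

@dataclass
class BidRequest:
    """Profile data attached to one ad opportunity (simplified)."""
    user_interests: list[str]
    site: str

def run_auction(request: BidRequest, bids: dict[str, float]) -> str:
    """The exchange picks the highest bidder.

    `bids` maps an (invented) ad-network name to the price it offers
    for showing an ad to this profile on this site.
    """
    return max(bids, key=lambda network: bids[network])

# One invented page view: the profile's interests drive the prices.
req = BidRequest(user_interests=["parenting", "rugby"], site="news.example")
winner = run_auction(req, {"netA": 0.12, "netB": 0.31, "netC": 0.05})
print(winner)  # netB wins with the 0.31 bid
```

Real exchanges follow the same shape at vastly larger scale (the OpenRTB specification standardizes the request and bid formats), which is how a profile label can silently change what a person is shown.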

What personal data do these companies exchange? To find out, I would have to ask each of them. And even for those I managed to contact with HestiaLabs' help, the answer is not always clear.

Take Instagram. The social network lists 333 "interests" associated with my profile. Some are surprising, to say the least: rugby, Burning Man, the real-estate market and even "cat lady". Dear reader, know that I have never owned a cat.

Others are more accurate, and unsurprising: a number of them relate to parenthood, whether brands such as Huggies or Peppa Pig, or discussions about cribs and weaning. I find myself wondering how this data has influenced not just my purchases but my daughter's life. Is her fascination with the adventures of a family of little pink pigs entirely natural, or were we "served" those videos because of some of my personal data passed on by Instagram? Did all those posts about weaning appear spontaneously in my feeds, shaping the way I introduced my daughter to solid food, or was I targeted? It is impossible to reconstruct the chain of cause and effect. I have no idea whether my "interests" have singled me out for marketing approaches.

Exchanges of personal data form a tangle that is almost impossible to unravel. Data is routinely copied, segmented and ingested by algorithms and machine-learning systems. As a result, explains Pam Dixon, even with legislation like the GDPR, we do not have access to all of our personal data. "There is a double layer to this problem. There is a first layer, made up of the data we can find," she says. "And a second that we cannot see, that we are not legally entitled to see, none of us."

Beyond ad targeting

Recent reports nevertheless offer glimpses. In June, an investigation by the American outlet The Markup revealed that this kind of hidden data lets advertisers categorize us by political affinity, state of health and psychological profile. Am I a "phone-addicted mom", a "bon vivant", "easily discouraged" or "woke"? I have no idea. What I do know is that all these labels are actually used by online ad networks.

It is unsettling to learn that I am labeled this way without knowing why or how. Part of me wonders whether it really matters. I can see the appeal of ads that reflect my preferences, or of opening my navigation app and seeing the museums and restaurants I have already visited or that might appeal to me. But believe me, the nonchalance with which we accept this bargain is one of the surest ways to make a privacy specialist wince.

For one thing, Pam Dixon begins, the uses of this data go far beyond ad targeting. A detail as trivial as where you do your grocery shopping (being a discount-chain customer is an indicator of low income) or buying sports equipment (a sign that you exercise) can be enough to alter your profile as a university applicant or the amount of your health insurance premium. "We're not just talking about advertising here," she insists. "This is real life."

In the United States, recent laws have lifted the veil on some companies' practices. Vermont's Data Broker Act, passed in 2018, revealed that data brokers registered in that state – but also operating in others – were selling personal data to prospective employers or landlords, often through intermediaries. In July, the US Consumer Financial Protection Bureau found that hidden data was being used to "score" consumers, much as banks assign you an overall credit score when you apply for a loan. Reuben Binns explains:

"There are the things you do, the sites you visit, the apps you use – all of these services can feed platforms that check whether you are a good rental candidate and decide what credit terms to offer you."

At HestiaLabs, I learn that I too may have been affected by these practices in my daily life, not just through ad targeting but also through the way algorithms process my data. On LinkedIn, one of the inferences attached to my profile indicates that I am neither "a leadership personality" nor "a senior manager". Yet I led a team of 20 people at the BBC, and before that I was editor-in-chief of several of the broadcaster's websites – all information I specifically listed on my LinkedIn profile. Does this affect my career prospects? When I put the question to a representative of the platform, I am assured that these "inferences" are in no way used "to select the job offers [I am] shown on the network".

Privacy protection that leaves much to be desired

Yet several lawsuits have revealed that on Facebook, data was used to hide certain tech-sector job ads from women. In 2019, the network's parent company, Meta, removed this option for advertisers. Except that it is very easy to find other ways to exclude women, specialists point out, for example by targeting profiles with interests associated with male stereotypes. "These harms are not visible to the user at the time. They are often very abstract and can occur very late in the filtering process," explains Isabel Wagner.

As the volume of collected data grows, so does the list of problems reported in the media. Ovulation-tracking apps – along with text messages, emails and Internet searches – have been used to prosecute women who had abortions in the United States since [Roe v. Wade](https://www.courrierinternational.com/article/carte-le-nombre-d-avortements-augmente-aux-etats-unis-malgre-l-arret-de-la-cour-supreme#:~:text=La décision de la Cour,avortements pratiqués dans le pays.) was overturned last year.

Priests have had their homosexuality exposed after using the dating app Grindr. A Russian officer was killed during his morning jog after being tracked, it is presumed, through the public data of his Strava account. Data protection is meant to prevent exactly this kind of problem. "But the enforcement clearly leaves a great deal to be desired," sighs Reuben Binns.

Part of the problem lies in companies' lack of transparency. Many opt for "privacy-preserving" systems in which a person's data is segmented into multiple data points that are scattered across different servers or encrypted locally. Paradoxically, this mostly makes it harder for users to access their own data and understand how it is used.

For Paul-Olivier Dehaye, the founder of HestiaLabs, there is no doubt that companies can and must give us back control over our data. "If you visit a website right now, a multitude of entities will know about it within the second, and will know who you are and on which site you ordered a pair of sneakers two weeks ago. When the goal is to flood you with bad ads, companies can solve every problem. But ask them for your data, and suddenly they are helpless. Yet there is a way to put this force of capitalism at your service rather than theirs."

I hope he is right. Walking through the streets of Lausanne after leaving the HestiaLabs offices, I see a man in front of a knife shop window, his phone sticking out of his pocket, then an impeccably dressed woman with a Zara bag in one hand and her phone in the other. A little further on, a man is talking animatedly on his phone in front of the police station.

For them as for me, these moments are as brief as they are insignificant. But for the companies collecting our data, they are so many opportunities to seize. Monetizable opportunities. And all these data points may never disappear.

Taking back control

Following the advice of Paul-Olivier Dehaye and the other specialists I interviewed, I decide when I get home to sort through my phone and delete the apps I do not use. I also get rid of those I rarely use and that contact a few too many companies; I will use them from my laptop instead. (I use a service called "TC Slim" that tells me which companies my apps communicate with.) I also install a new browser that actually – it seems – respects my privacy. Open-source, non-commercial apps and browsers are generally good options, explains Isabel Wagner, because their developers have less incentive to collect your data.

I have also started switching off my phone when I am not using it, since most phones keep transmitting your location data even when you turn off wifi and mobile data or switch on airplane mode. In my Google account, I have unchecked the location-history option, though for now a kind of nostalgia keeps me from requesting the deletion of all my history.

We can also change the way we pay. Pam Dixon recommends having several bank cards and choosing "meticulously" which ones to use online. For purchases likely to send a "negative" signal – in a discount store, for example – prefer cash. She also recommends avoiding health-related sites and apps. "That is a minefield in general," she sums up. Whatever measures you take, companies will always find ways around your safeguards. "It's a game you can only lose," concludes Paul-Olivier Dehaye. Which is why the solution does not rest with individuals alone. "We need real societal change," confirms Reuben Binns.

If enough people individually make their voices heard, we can change the system, Paul-Olivier Dehaye hopes. The first step is to file an access request for your personal data. "Make companies understand that if they put a foot wrong you will no longer trust them, he sums up. In the data era, if you lose people's trust, your business is doomed."

Permalink
October 2, 2023 at 9:00:33 PM GMT+2

NFT: even more stupid — Pavillon rouge et noirhttps://p4bl0.net/post/2022/01/NFT-%3A-encore-plus-d%C3%A9bile

  • Crypto Currency
  • Blockchain
  • NFT

NFT: even more stupid

By Pablo, Thursday 20 January 2022, 13:12

The purpose of an NFT is to establish a title of ownership. The idea is to certify an association between a digital identity (the owner) and an object (the property, most often digital as well), and to use blockchain technology to store and distribute this certificate of ownership.

It gets off to a bad start: people who claim to use a blockchain to establish titles of ownership are either lying or have absolutely no idea what they are talking about. This was established in a previous post (which I recommend reading before this one), where I explained that a blockchain cannot serve as a source of truth for anything that is not intrinsically "in" that blockchain. This technology therefore has no advantage over paper[1], but it does have many drawbacks that paper does not (energy consumption, the impossibility of truly decentralized transactions, etc.).

What NFTs claim to offer is decentralization and the elimination of the need for trusted third parties. That alone is enough to completely discredit the idea of NFTs, since they are titles of ownership on a blockchain, and, as demonstrated in the post cited above, using a blockchain in fact provides neither decentralization nor any removal of the need for trust. But as this post's title indicates: NFTs are even more stupid.

NFT stands for "non-fungible token": a piece of information that is not interchangeable, as opposed to units of cryptocurrency, for example. When you have 1 bitcoin, you have 1 bitcoin, any bitcoin – they are all equivalent, in the strict sense of having the same value. Each NFT, by contrast, is unique and identifiable. A €10 note is worth the same as any other €10 note (or any set of coins and notes worth €10): euros are fungible. But if we decided that €10 notes were no longer legal tender, yet kept them and opened a market for €10 notes where each one is unique and identified by its serial number, then I could hope to sell my note numbered 198357 for more than the one numbered 840414, for example by claiming that it is a prime number and that there is a limited number of such notes (which is both false and stupid, but I can say it anyway…).

This idea illustrates what is meant by "non-fungibility". It is what makes numbering objects in limited quantities increase their value[2]: not only are they scarce, they are now each unique, since each carries a different number. But with NFTs it is even more stupid: absolutely nothing prevents creating several NFTs (indeed, as many as you like) for exactly the same object (so there can be multiple certificates of ownership for a single work), and anyone can create one for anything (so there is no guarantee that the person creating or selling an NFT has any rights over the associated object). It is as if a single €10 note (in the sense of the physical object, necessarily unique) could have an infinite number of serial numbers, and it were those serial numbers that were given value, potentially a different value for each. Yes, that makes absolutely no sense.
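The point about multiple certificates for one object can be made concrete with a toy sketch. This is an illustration only, not any real NFT standard; all names and structures are invented for the example:

```python
# Toy illustration (not any real NFT standard): nothing in the minting
# step itself prevents several tokens from pointing at the same object,
# or anyone from minting a token for an object they do not own.

from dataclasses import dataclass
from itertools import count

_token_ids = count(1)

@dataclass(frozen=True)
class Token:
    token_id: int
    minter: str       # whoever called mint(); no ownership check is possible
    object_ref: str   # e.g. a URL; the chain cannot verify rights over it

ledger: list[Token] = []

def mint(minter: str, object_ref: str) -> Token:
    """Anyone can mint, any number of times, for any object."""
    token = Token(next(_token_ids), minter, object_ref)
    ledger.append(token)
    return token

# Two different people mint tokens for the very same image:
a = mint("alice", "https://example.com/monkey.png")
b = mint("mallory", "https://example.com/monkey.png")
assert a.token_id != b.token_id       # two distinct "certificates"...
assert a.object_ref == b.object_ref   # ...for one and the same object
```

Nothing in the data model forces a one-to-one link between token and object, which is exactly the author's complaint.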

In fact it is even more stupid: the object associated with an NFT is usually a digital object, whose scarcity therefore does not exist[3], since it is transmitted by copying (unlike my €10 note, which I would necessarily no longer possess once I had passed it on to someone else). This means that the object associated with the NFT (and which evidently contributes to its market value, even though, as we saw in the previous paragraph, that makes no sense) can itself be replicated infinitely. That may seem obvious, but we have seen a great many cases of people who bought an image as an NFT to use as a profile picture on social media and who called thieves anyone who grabbed that image with a simple right click followed by "save image as…", for example.

All of these criticisms hold even if we still grant the idea that an NFT is indeed a title of ownership, but in reality it is even more stupid. In principle, at least from the perspective of this technology's advocates, owning an NFT associated with object X (digital or not) lets you say: "I am the official owner of X, I have a certificate that proves it." Except that the notion of property is not natural in any way; it exists only as a social construct. Property may result from a relationship of "brute" force[4] or from a common agreement, but in either case it is a form of violence. In the first case, the power relationship must be constantly renewed. In the second, some form of external authority must make the parties respect the agreement (with a power of sanction in case of violation, or an absolute power of coercion). And in both cases, the notion of property exists and makes sense only for the community concerned[5]. In short, a title of ownership has no absolute value if there is no third-party authority to enforce it, which is precisely what gives it its value. That is true when the title takes the form of a piece of paper, and it is just as true when it takes the form of an NFT. By writing in a blockchain that a given person owns a given object, we have done absolutely nothing more than if we had written the same claim on paper: it has no value whatsoever until a third-party authority enforces, and thereby makes true, what is written[6]. So much, once again, for decentralization or disintermediation…

But hold on, because it is not over: the association between an object and an NFT is generally not made directly on the blockchain, for technical reasons (for physical objects – luxury watches, works of art, etc. – this is obvious; and digital objects are too large for it). Note that even in the very rare cases where the object is recorded on the blockchain, everything said so far applies in full. What is stored on the blockchain is in fact most often a link to a web page[7], which in turn points to the object associated with the NFT. This means losing any notion of decentralization (the raison d'être of this technology, even though that belief is merely the product of a massive misunderstanding, as we have seen), since a centralized platform is needed to link the NFT to its associated object. That is already dispiriting enough, but in fact it is even more stupid: because of the centrality of this imposed trusted third party, the NFT itself risks, in the best case, pointing to a dead link (for example, if the platform's site disappears or changes address). And it can be worse: the site could be hacked, or simply be replaced later by another one that makes fanciful associations, displays ads, tries to infect its visitors with malware, or just trolls.
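Footnote 7 notes that the chain usually stores a bare link rather than a cryptographic digest of the object. A minimal sketch (invented data, standard SHA-256) of the difference: a digest pins the content itself, while a URL pins only a location whose content can change at will.

```python
# Sketch: a bare URL on-chain gives no way to verify the object, while a
# content digest at least pins the exact bytes. Data here is invented.

import hashlib

def content_digest(data: bytes) -> str:
    """A SHA-256 digest identifies the content itself, not a location."""
    return hashlib.sha256(data).hexdigest()

original = b"red hat artwork, v1"
digest_on_chain = content_digest(original)

# Later, whoever controls the URL can serve anything at all:
served = b"advert / troll / malware"

# With only a URL on-chain there is nothing to compare against.
# With a digest, tampering is at least detectable:
assert content_digest(served) != digest_on_chain
assert content_digest(original) == digest_on_chain
```

Even with a digest, of course, the other objections (multiple tokens per object, no proof of rights, no enforcement authority) remain untouched.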

It is therefore quite clear that NFT technology is pure hot air with no serious possible application (apart from enriching the upper floors of a Ponzi pyramid while accelerating global warming). Let us nevertheless take a closer look at the non-purely-speculative use case that this technology's advocates seem to put forward most often: its use in a metaverse or in video games (I will speak of "virtual worlds" in general) for markets of in-game items.

What makes this idea seem to work is that, in the case of a virtual world where everything is under your control, you can indeed decide that the blockchain on which the NFTs are recorded is a source of truth. Technically, it works. The virtual world can entirely prevent participants who are not identified as owners of an NFT from benefiting from its associated object. The company publishing the game, through the implementation of the virtual world, that is, the rules written in its source code, here plays the role of the centralized third-party authority with the absolute power to make true whatever it wants, including whatever may be written on a blockchain. If the company changes its mind, the truth in the virtual world changes with it… And it is even more stupid than that. Contrary to what one regularly reads on the subject, this would in no way allow objects to be transferred from one virtual world to another unless that is provided for in the code of the virtual worlds in question: if a game has no code to display a red hat on your avatar, you will not be able to display a red hat on your avatar there, even if you are the "owner" of an NFT associated with the idea of a red hat and the game otherwise takes into account the blockchain on which that is recorded. Nor do NFTs enable a resale market for items between players within a single virtual world if that world does not provide for ownership transfers (which it could decide to allow only with a tax, for example…). In short, everything beyond the speculative market of (re)selling fake titles of ownership to gullible buyers depends entirely on the will of the entity controlling the virtual world. We are therefore dealing with an entirely centralized system, and there is no advantage to using NFTs, and thus a blockchain, for it.
Technically there are even many disadvantages: it will be more resource-hungry and less efficient than a plain database to achieve the same result.

Update (22/01/2022): it has been brought to my attention that the term "débile" ["moronic"] is problematic because it is still strongly associated with cognitive disabilities, so I have decided to replace it with "stupide" ["stupid"], which has the same meaning without being ableist.

Notes

  1. ^ I mention paper somewhat to force the point, but the criticism remains the same in the digital world, with whatever technologies we would use instead of a blockchain, whether distributed (a Git repository, a DHT, etc.) or centralized (such as a perfectly ordinary database).
  2. ^ We are only talking here about exchange value in a scarcity market, assuming strong demand. The intrinsic value, the use value, of these objects obviously has no reason to change because they are numbered…
  3. ^ One can try to create scarcity artificially for digital objects, but NFTs are incapable of it. The only thing that allows it is DRM ("digital rights management"), which has historically been a resounding technical failure, and which in any case cannot work without a trusted third party, once again annihilating any potential benefit of NFTs.
  4. ^ Territorial wars in human societies; fighting (or just stronger-smelling pee ^^) in some animal communities, for example.
  5. ^ Among animals that mark their territory, for example, most other species (at least those with no predation or cooperation relationship with them) probably could not care less about the territory markers, assuming they can even interpret them. The same goes for our fences and borders (otherwise we would be serving deportation orders on mosquitoes).
  6. ^ The idea developed in this paragraph is detailed in the post mentioned above: truth on the blockchain.
  7. ^ And only a link, not even a cryptographic digest of the object that would allow its integrity to be verified… Except in a few rare cases where the link is an IPFS identifier, but that changes nothing about the other problems.


Permalink
October 2, 2023 at 7:51:40 PM GMT+2

How Mastercard sells its ‘gold mine’ of transaction datahttps://pirg.org/edfund/resources/how-mastercard-sells-data/

  • Spying
  • global spying
  • Datafication
  • Economy and markets
  • Big Data

How Mastercard sells its ‘gold mine’ of transaction data

Mastercard knows where people shop, how much they spend, and on what days - and it sells that information online.

R.J. Cross - Director, Don't Sell My Data Campaign, U.S. PIRG Education Fund; Policy Analyst, Frontier Group

Today, many of the companies we interact with on a daily basis have found a new revenue stream: selling their customers’ data. There are huge markets for personal data, bought by companies ranging from advertisers and tech companies, to hedge funds and data brokers.

Credit card data in particular is extremely valuable. Knowing how much people spend, where and on what day says a lot about consumers’ financial situations, their personal lives and the decisions they might make in the future.

In the last decade, Mastercard has increasingly capitalized on the transaction data it has access to in the course of being a payment network. Mastercard sells cardholder transaction data through third party online data marketplaces and through its in-house Data & Services division, giving many entities access to data and insights about consumers at an immense scale.

Mastercard is far from the only company engaged in data sales, nor is it necessarily the worst actor. But in its position as a global payments technology company, Mastercard has access to enormous amounts of information derived from the financial lives of millions, and its monetization strategies tell a broader story of the data economy that’s gone too far.

Mastercard sells data on third party data marketplaces

Mastercard sells bundles of cardholder transaction data to third party companies on large online data marketplaces. Here, third parties can access and use information about people’s spending to target advertisements to individuals, build models that predict consumers’ behavior, or prospect for new high-spending customers.

For example, Mastercard’s listing on Amazon Web Services Data Exchange states that companies can access data like the amount and frequency of transactions, the location, and the date and time. Mastercard creates categories of consumers based on this transaction history, like identifying “high spenders” on fast fashion or “frequent buyers” of big ticket items online, and sells these groupings, called “audiences”, to other entities. These groups can be targeted at the micro-geographic level, and even be based on AI-driven scores Mastercard assigns to consumers predicting how likely they are to spend money in certain ways within the next 3 months.
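The bucketing described above can be sketched in a few lines. This is a hypothetical illustration only: the fields, thresholds and audience names are invented for the example, not taken from Mastercard's actual products.

```python
# Hypothetical sketch of "audience" segmentation from transaction
# history. Thresholds, category names and record fields are invented.

from collections import defaultdict

transactions = [
    {"card": "A", "merchant_type": "fast_fashion", "amount": 120.0},
    {"card": "A", "merchant_type": "fast_fashion", "amount": 95.0},
    {"card": "B", "merchant_type": "electronics",  "amount": 1400.0},
    {"card": "C", "merchant_type": "grocery",      "amount": 35.0},
]

def build_audiences(txns, high_spend_threshold=200.0):
    # Total spend per (cardholder, merchant category)
    spend = defaultdict(float)
    for t in txns:
        spend[(t["card"], t["merchant_type"])] += t["amount"]
    # Cardholders crossing the threshold land in a sellable "audience"
    audiences = defaultdict(set)
    for (card, mtype), total in spend.items():
        if total >= high_spend_threshold:
            audiences[f"high_spenders:{mtype}"].add(card)
    return dict(audiences)

print(build_audiences(transactions))
# {'high_spenders:fast_fashion': {'A'}, 'high_spenders:electronics': {'B'}}
```

The point of the sketch is how little raw data is needed: a handful of (merchant, amount) pairs already sorts cardholders into marketable segments.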

The data Mastercard monetizes on these marketplaces comes in aggregated and anonymized bundles. Aggregating and anonymizing consumer data mitigates some of the risks associated with data monetization, but it does not stop people from being reached on an individual level on the basis of that data. High-tech tools connected to these third party data marketplaces allow companies to target and reach selected individuals based on traits like past spending patterns or geographic location.

Mastercard is a listed data provider on many of the major online data marketplaces. In addition to Amazon Web Services Data Exchange, Mastercard has listings on Adobe’s Audience Marketplace, Microsoft’s Xandr, LiveRamp, and Oracle’s BlueKai, among others. Selling data on even one of these makes consumer transaction behavior available to a significant number of entities.

Mastercard has established its own data sales division

In addition to data sales on third party marketplaces, Mastercard also has its own Data & Services division. Here, Mastercard advertises access to its databases of more than 125 billion purchase transactions through its more than 25 data services products. Some products give companies the chance to pay for cybersecurity and fraud detection tools. Others are focused on the monetization of consumer information for AI-driven consumer modeling and highly-targeted advertising.

For example, Intelligent Targeting enables companies to use “Mastercard 360° data insights” for identifying and building targeted advertising campaigns aimed at reaching “high-value” potential customers. Companies can target ads to selected consumers with profiles similar to Mastercard’s models – people it predicts are most likely to spend the most money possible.

Another data services product, Dynamic Yield, offers dashboard tools allowing companies to “capture person-level data” of website or app users, do A/B consumer testing, and “algorithmically predict customers’ next purchase with advanced deep learning and AI algorithms”. One of Dynamic Yield’s data products, Element, advertises that companies can “[l]everage Mastercard’s proprietary prediction models and aggregated consumer spend insights to deliver differentiating personalization that caters to each users’ unique habits and expectations like never before.” While the transaction data Mastercard offers may be aggregated, it’s clearly used to identify targets and reach them at the individual level.

Another example is SessionM, Mastercard’s customer data management platform product, allowing companies to combine their first-party data with data from other sources to create “360 degree” profiles of consumers that can be updated in real time based on purchases.

“That gold mine of data”: Mastercard has been building its data monetization capabilities for over a decade

In the last 15 years, Mastercard’s data monetization strategies have been a growing part of its revenue stream. In 2008, Mastercard’s then head of Global Technology and Operations said in an interview that a big question for Mastercard was how to “leverage that gold mine of data that occurs when you have 18.7 billion transactions that you’re processing.” By 2013 the company had established an in-house data monetization division – then called Information Services – and was approaching online advertising and media desks about opportunities to leverage its then reportedly 80 billion consumer purchases data. In 2018, Bloomberg reported that Mastercard and Google made a deal to provide credit card data for Google’s ad measurement business.

Recently, corporate acquisitions have helped drive Mastercard's data revenue growth. In 2019, Mastercard acquired the AdTech platform SessionM, and in 2021 bought the AI company Dynamic Yield from McDonald's. We briefly outline both platforms in the section above.

Selling data can harm consumers

Almost every company we interact with collects some amount of data on us. Often it’s more information than they really need – and it’s often used for secondary purposes that have nothing to do with delivering the service we’re expecting to get. This way of doing business unnecessarily increases the risks for regular people whose data has become a commodity, often without their knowledge.

Security and scams

When companies engage in data harvesting and sales to third parties, it increases the personal security risks for consumers. The more companies that hold a person’s data, the more likely it is that information will end up exposed in a breach or a hack. Once exposed, consumers are much more likely to become the victim of identity theft or financial fraud, and experience serious damage to their credit score.

Data sales also increase the odds scammers will gain access to personal data, allowing for the construction of targeted predatory schemes. Data brokers that often rely on other companies’ collection of consumer data have furnished scammers looking to find ideal victims with data, like identifying patients with dementia for targeting with fake lottery scams.

Annoying and invasive targeted advertising

Data sales often flow into the advertising industry, fueling the inundation of people's screens with ads they didn't ask to see that range from annoying to creepily invasive. In the 1970s, the average American saw between 500 and 1,600 ads a day; today, powered by data-driven online advertising, the estimate is 5,000 ads daily, spanning traditional ads on TV, radio and billboards as well as targeted digital ads on websites, social media, podcasts and emails.

Advertising often encourages consumers to spend more money on purchases unlikely to shore up their financial health in the long-term. Americans currently owe more than $1 trillion in credit card debt – a record high. In today’s market with rising interest rates, endless data-driven appeals to spend more money play an increasingly unhelpful and potentially dangerous role in people’s lives.

While consumers have official government channels for opting out of junk calls and junk mail, there is little they can do to shield their screens from annoying, distracting and invasive ads they did not ask to see and never gave permission to have their data fuel.

Even aggregated and anonymized data can cause harm

Some tools companies use to protect privacy are not as secure as they sound, like aggregation and anonymization. A 2015 MIT study found this was the case with anonymized credit card data. Using an anonymized data set of more than 1 million people’s credit card transactions made over 3 months, MIT researchers could identify an individual 90% of the time using the transaction information of just 4 purchases. Data that’s provided in batches also has its limitations. For instance, providing data by micro-geography, like zip+4, can in some cases end up being so specific as to point to a specific address.
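The MIT result rests on the fact that combinations of a handful of transactions are almost always unique. A toy sketch, with entirely invented data, shows the mechanism:

```python
# Toy re-identification sketch (invented data): in an "anonymized"
# transaction log, the combination of just a few (shop, day) pairs is
# often unique, so an observer who knows those few facts about a real
# person can single out their pseudonymous record.

anonymized_log = {
    "user_001": {("bakery", "mon"), ("pharmacy", "tue"), ("cafe", "fri")},
    "user_002": {("bakery", "mon"), ("gym", "wed"), ("cafe", "sat")},
    "user_003": {("bookshop", "tue"), ("pharmacy", "tue"), ("cafe", "fri")},
}

def match(known_purchases):
    """Return the pseudonymous users consistent with what we know."""
    return [user for user, txns in anonymized_log.items()
            if known_purchases <= txns]  # subset test

# One known purchase still leaves ambiguity:
print(match({("cafe", "fri")}))                           # ['user_001', 'user_003']
# Two known purchases already narrow it to a single record:
print(match({("bakery", "mon"), ("pharmacy", "tue")}))    # ['user_001']
```

Real datasets are vastly larger, but so is the space of possible transaction combinations, which is why four purchases sufficed in the MIT study 90% of the time.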

Additionally, just because data is aggregated and anonymized does not mean consumers aren’t being singled out for their purchasing habits. Using high-tech automated tools, anonymized and aggregated data can be used to reach specific consumers with tailored messages or help predict a given individual’s behavior.

Mastercard should commit to a limited use data policy

Companies have taken data harvesting and sales too far. The collection and sale of people’s data is almost entirely unregulated, and virtually every major company has begun monetizing customer data in ways people are not expecting.

Mastercard should commit to a policy of limited data use by implementing the principles of data minimization and purpose specification. This would mean collecting only the data necessary for providing the services cardholders are expecting to get – access to a safe and reliable credit card – and using the data only for that purpose.

PIRG has launched a coalition with Accountable Tech, American Civil Liberties Union, Center for Digital Democracy, Electronic Frontier Foundation, the Electronic Privacy Information Center, Oakland Privacy and Privacy Rights Clearinghouse asking Mastercard to commit to a limited data use policy.

Mastercard has served as people’s credit card long before it was able to use and sell transaction data in all of the ways that modern technology enables. Growing its profit margin is not a compelling reason for Mastercard to contribute to the massive marketplaces for data.

Passing new consumer data laws and having strong enforcement will be key to curtailing today’s invisible economy for people’s data. This is an urgent task. In the meantime, companies should voluntarily implement limited use data policies, and bring their business models back in line with consumer expectations.

Permalink
October 2, 2023 at 7:35:29 PM GMT+2