
Amazon buys nuclear-powered data centre from Talen
https://www.neimagazine.com/news/amazon-buys-nuclear-powered-data-centre-from-talen-11597723/

  • Big Corpo
  • Artificial intelligence
  • AWS
  • Nuclear

Amazon buys nuclear-powered data centre from Talen

US-based Talen Energy Corporation has sold its Cumulus data centre campus in Pennsylvania to Amazon subsidiary Amazon Web Services (AWS) for $650m. The deal includes a long-term agreement to supply power from Talen's Susquehanna NPP: the adjacent 2,500 MWe Susquehanna Steam Electric Station already powers the data centre.

13 March 2024

The $650m will be paid in stages – $350m on closing and $300m to be released on the attainment of development milestones later this year. Talen will also receive additional revenue from AWS related to sales of Susquehanna's energy to the grid.

“We believe this is a transformative transaction with long term benefits,” said Talen President & CEO Mark “Mac” McFarland, in a call with investors and media. As power demand continues to rise worldwide, “data centres are at the heart of that growth,” he noted.

Texas-based Talen is the majority owner and operator of the Susquehanna plant: 90% is owned and operated by Talen subsidiary Susquehanna Nuclear, and Allegheny Electric owns the other 10%. The plant's two General Electric boiling water reactors began operation in 1983 and are licensed to operate until 2042 and 2044. In 2022, Talen filed for Chapter 11 bankruptcy as part of a financial restructuring, exiting bankruptcy in 2023. The transaction with AWS is expected to boost its cash flow. After paying off debts, interest and other costs, Talen expects net proceeds of $361m from the deal.

The Cumulus campus is directly connected to the NPP. The data centre's four substations have a total potential 960 MW of redundant capacity. This includes 200 MW currently associated with the Nautilus cryptocurrency facility, in which Talen will retain its 75% interest. A further 240 MW of redundant capacity for data centres is expected to be ready this year. The campus has a "robust and redundant" fibre network.

According to Talen Energy’s investor presentation, it will supply fixed-price nuclear power to AWS’s new data centre as it is built. AWS has minimum contractual power commitments increasing in 120 MW increments over several years. AWS has a one-time option to cap commitments at 480 MW and two 10-year extension options tied to nuclear licence renewals.
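The ramp described above can be sketched numerically. This is a hypothetical illustration only: the article gives the 120 MW increment and the optional 480 MW cap, but not the number of years or the starting level, which are assumed here.

```python
def commitment_schedule(years, step_mw=120, cap_mw=None):
    """Minimum contracted power (MW) per year: grows by step_mw each year,
    optionally capped if the one-time cap option is exercised."""
    schedule, level = [], 0
    for _ in range(years):
        level += step_mw
        if cap_mw is not None:
            level = min(level, cap_mw)
        schedule.append(level)
    return schedule

# Uncapped ramp vs. the ramp if AWS exercised its one-time 480 MW cap:
print(commitment_schedule(6))              # [120, 240, 360, 480, 600, 720]
print(commitment_schedule(6, cap_mw=480))  # [120, 240, 360, 480, 480, 480]
```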

"Power demand is growing for the first time in years, and AI and data centres are at the heart of that growth," McFarland said. "Data from the International Energy Agency suggests that energy demand from data centres, AI and cryptocurrencies could more than double over the next three years."

He added that the transaction will benefit the wider community by creating jobs and catalysing economic development as well as strengthening the Susquehanna plant itself as a major employer and significant taxpayer.

Permalink
June 12, 2024 at 9:31:32 PM GMT+2

In China, the Internet is disappearing | Le Grand Continent
https://legrandcontinent.eu/fr/2024/06/02/en-chine-internet-est-en-train-de-disparaitre/

  • Datafication
  • Cultural Downfall
  • Data durability

In China, the Internet is disappearing

"If the Internet is the future of civilisation, then our generation will have no history, because the Internet will have left no trace of us." We long believed the Internet would be the most powerful of libraries, the exhaustive memory of humanity. The Chinese web, whose content is now mostly produced and consumed on phones, demonstrates the opposite: the Internet archives nothing. He Jiayan dissects a revolution whose consequences are as violent as they are hidden.

2 June 2024 • Digital • Author: Le Grand Continent

"We long thought of the Internet as a memory; we did not know it was the memory of a goldfish." In a single decade, even as it became ever more universal in its number of users, the Internet has lost more than a third of its content.

For China, this archival collapse is even more severe. He Jiayan, a Chinese journalist specialising in the web and the new-technology sector, works from computer archives dating from before the mobile Internet. He shows that China's political hardening is far from being the main driver of the disappearance of whole swathes of the web. Self-censorship and the race for profitability have quietly wiped an enormous quantity of information from the servers. At a time when the mobile Internet seems to be replacing the first age of the Internet, this disappearance is barely visible, yet it is entirely real.

Beyond the technical dimension, Jiayan warns of the disappearance of a shared memory, constitutive of a still-living popular culture, entrusted to servers once believed immortal and which now threaten to collapse. For an entire generation that immaterially built its life on the Internet of the first twenty years of the 21st century, the very concrete risk is ending up with two decades without memory.

Let us begin with a small experiment.

He Jiayan

If you search for "Jack Ma" on Baidu with a reference period running from 1998 to 2005, how many results will appear? Closer to 100 million, 10 million, or 1 million?

I have put the question to several groups, and the majority opinion is always that the order of magnitude lies somewhere between a few million and a few tens of millions. After all, the Internet is so vast, and Jack Ma, that colourful entrepreneur, must have left many traces there.

In reality, for a Baidu search restricted to the date range "22 May 1998 to 22 May 2005", the total number of results containing the phrase "Jack Ma" is 1.

And that single result is itself wrong: clicking on it shows that the article was actually published in 2021, outside the period set above. Why it appears in the search at all is inexplicable.

In other words, if you want to know about Jack Ma's experiences, his relationships, the discussions about him, his speeches, or the history of his company Alibaba during that period, the amount of valid raw information you can get from the Internet is zero.

Perhaps it is a problem with Baidu? Perhaps a search on Bing or Google would be more conclusive?

I tried: both sites do return valid results, a few more than Baidu, but barely a dozen. There are also more invalid results falling outside the selected period, probably for purely technical reasons.

One might then wonder whether these results are unavailable because Jack Ma is controversial in China.

That would be plausible if it were only the case for Jack Ma. But I also ran searches for the same period on Ma Huateng, Lei Jun, Ren Zhengfei, and even Luo Yonghao and Sister Fu Rong, who were Internet celebrities at the time, as well as Jay Chou and Li Yuchun, who were then music stars. The results are of the same order.

He Jiayan cites famous Chinese businessmen belonging to the BATX, the Chinese equivalent of the GAFA: Jack Ma is the famous founder of Alibaba, who suddenly disappeared from public view and is now in exile outside China; Ren Zhengfei is the founder and CEO of Huawei Technologies; Ma Huateng, that of Tencent Holdings; and Lei Jun, the founder of Xiaomi. Luo Yonghao and Sister Fu Rong were well-known Chinese bloggers in the 2000s.

While searching for Lei Jun, after testing different websites, different names and different periods, I discovered an astonishing phenomenon. Almost all the archives of the Chinese websites that were popular at the time, such as NetEase, Sohu, SMTH BBS, Xizi Hutong, Keyhole Forum, Tianya Club, RenRen Network, Sina Blogs and Baidu Tieba, have disappeared. A great deal of content on personal websites has likewise vanished for the early period, and most of those sites have disappeared for all periods. The only exception is Sina.com, where information more than ten years old can still be found, though in fairly limited quantity. More than 99.9999% of the content has disappeared.

We are overlooking a worrying problem: the Chinese Internet is collapsing inexorably, and the content that existed before the rise of the mobile Internet has now almost completely disappeared.

We long thought of the Internet as a memory; we did not know it was the memory of a goldfish.

Over the past two years, I have come to a very clear conviction: the amount of information that can be found online is shrinking every year at a dizzying speed. Not so long ago, I could still find certain original sources; gradually that became impossible. I could still find the protagonists' speeches or the articles they had written, and then I could no longer find them. I could still watch many interviews and videos online; they were progressively erased.

It is as if a monster of a new kind had appeared, one that devours web pages throughout history, swallowing them from the past towards the present, first in small bites, then in large ones, devouring the whole Chinese Internet: five years here, ten years there.

Looking back, we will find that everything that existed on the Chinese Internet before the advent of mobile, whether a portal, an organisation's official site, a personal web page, public forums, Sina blogs, Baidu posts, documents, photos, music, videos and so on, will have disappeared.

I remember that more than ten years ago, I used to change computers freely because my photos and compressed articles were stored on a server. A few years later, I discovered that the entire server had disappeared. I used to use Hotmail, and then everything disappeared. I also wrote posts on RenRen and MySpace: all gone.

We thought the Internet could save everything. It turns out it can keep nothing.

Why did this happen?

I think there are two main reasons.

The first is economic.

A website requires bandwidth, a server room, staff to run and maintain it, and a not-insignificant amount of regulatory and upkeep costs. If there is strategic value (for a company, publicising its activities, for instance) or short-term traffic value (people still come to look at the site from time to time), and if at the same time the company's accounts are not in bad shape, then there is a reason to keep the site alive.

But if the company is on a downward slope and has no money left, the whole site it runs will simply die. Renren is a typical example.

From an operational standpoint, if a web page is visited by only a handful of people all year, it becomes a burden for the company, and the economically rational thing is to delete it, even if the company has no financial problems. The early years of Sohu; the NetEase site, whose content has essentially been lost; and the collective disappearance of the forums hosted on Tianya are good examples.

Second, regulatory reasons.

In general, Internet regulation tightens progressively and becomes ever stricter. Content that could once exist legally no longer meets regulatory requirements, and what could once live in the grey zone has since shifted into the red zone. All such content is simply deleted.

There are also cases where the polarisation of public opinion has intensified over time and content that was once "perfectly normal" has become highly sensitive. Even if it is not illegal, it can inflame conflict and create confusion, so the regulator may demand its removal.

Beyond the official authorities, angry netizens sometimes act as moderators or, conversely, as opinion-makers. They can drag out of the shadows something someone posted in passing more than ten years ago, latch onto it, and cyber-harass the author into "social death".

In China, the most important effect of regulation is therefore not so much what the regulators do, or the attacks of angry netizens, as the "self-censorship" they provoke within companies and among individuals.

No one knows whether a piece of content on a website, or a word someone once said, will bring about that person's downfall years later. The best way to survive is therefore to remove all these potential "time bombs", that is, to shut down the website or delete all its content.

Of course, many other causes can explain the disappearance of old web pages.

Shortly after the break-up of the former Yugoslavia, all web content hosted under the international domain ".yu", the abbreviation for Yugoslavia, disappeared. Another example is the disappearance of music and film sites that were once freely available for downloading, owing to stronger copyright enforcement. Some organisations and individuals, for purely personal reasons, simply no longer want to show their information to the public and so shut down their official sites or personal home pages.

But these causes are secondary and localised.

The systematic, large-scale disappearance of Internet content as a whole is mainly due to economic laws and to self-censorship.

At bottom, it is as if the content of the Internet, like life itself, were governed by the theory of evolution. It has a single criterion of existence: to attract as much attention as possible at the lowest possible cost.

When a piece of content can attract enough attention, and the cost of maintaining it (including the economic cost, the cost of regulation and the cost of fighting regulation) is low, that content has a chance of surviving on the Internet. It will probably even change medium, moving from text to image, from still image to animation, from animation to video and, in the future, perhaps from two-dimensional video to three-dimensional holographic video, and so on. The platform carrying it will change as well: from portals to forums, to personal blogs, to microblogging, and in the future perhaps to a platform we know nothing about yet.

When content can no longer attract enough attention, or the cost of maintaining it is too high, it will disappear from the Internet. The collective disappearance of the traditional Internet, with computers as browsing terminals and web pages as carriers, is simply the inevitable result of this "evolutionary competition for information".

Darwin taught us that the key to biological evolution was "natural selection, survival of the fittest". The key to the evolution of Internet content is "competition for information, selection by attention". Because of network effects, this competition is ten thousand times fiercer than in nature, ten thousand times crueller. The traditional Internet will bring about not the extinction of a single species but the extinction of almost all its content.

With each new generation of the Internet, the old generation, tied to an obsolete structure, will collapse. That is the destiny of all websites and all their content.

If the Internet is the future of civilisation, then our generation will have no history, because the Internet will have left no trace of us.

"No history." Is that so important?

Of course it is.

To write an article on Shao Yibo, I tried every means to get hold of the original video of Shao Yibo's appearance on the programme "Boshi Tang" in 2007, as well as the posts his wife, Bao Jiaxin, published on the Baby Tree site over several years under the pseudonym "Wen Ai Mummy". I could not find them, and I can only regret it.

Although the article "Red Dust Has Forgotten Shao Yibo" remains very popular, with more than 700,000 readers and 20,000 retweets in a single week, I am almost certain that I missed some very important information and that the article would have been better had I had access to it.

You may be thinking: "this only matters to researchers and writers like He Jiayan; I don't write that kind of article, and it's not as if it affects me."

Really?

If we could no longer access all of Jack Ma's speeches, all of Ren Zhengfei's articles, My Father and Mother and The Spring River Flows East, and all of Duan Yongping's posts on Snowball, would you not be a little sad?

He Jiayan is referring here to staples of Chinese popular culture. Ren Zhengfei is the CEO of Huawei, My Father and Mother is a romantic film released in 2013, and The Spring River Flows East is a 1947 film considered a classic of Chinese cinema.

You will tell me that leaves you unmoved.

Then, if we could no longer look up Huang Zheng's number, if we could no longer see Zhang Yiming's or Wang Xing's posts, would you not feel a little regret?

You will assure me that you would not feel sorry either.

If one day Zhihu disappears like Tianya Forum, Douban fades like RenRen, Bilibili empties out like Sina Blog, will you not feel a little grief?

If one day the pages of your favourite blogger state that "the author has chosen to display only half a year's posts" or that "this blog is no longer visible"; if you often read that "this account has been blocked" or that "this content cannot be displayed"; if you search for information on Douyin or Xiaohongshu and the results state that "the author has deleted all content"…

Will that not sadden you, if only for a moment?

The generations of the traditional Internet, born in the 1970s and 1980s, can no longer find their history. Their traces have practically disappeared from it.

The younger generation can still keep the messages of private friend circles, but even friend circles are, more and more, "visible for three days only". Ephemeral messages… until everything is erased.

The only thing still frantically producing content is cascade marketing.

But in the future, the odds are that even these marketing messages will end up disappearing.

If something matters to us and it is disappearing, is there a way to save it?

Some have tried.

There is a website in the United States called the Internet Archive, rendered in Chinese as "互联网档案馆", which preserves a large number of original web pages. But having tried to use it, I found that very few original Chinese-language pages are saved. It is very hard to use, and its search features are fairly primitive and inefficient. In quantitative terms, it has ultimately not managed to save much.
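The Internet Archive mentioned above exposes a public "availability" endpoint for its Wayback Machine that reports the closest saved snapshot of a URL. The sketch below builds such a query and parses the JSON shape that endpoint returns; the sample response here is illustrative, not fetched live.

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a Wayback Machine availability query for `url`,
    optionally asking for the snapshot closest to a YYYYMMDD timestamp."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return f"{WAYBACK_API}?{urlencode(params)}"

def closest_snapshot(response_json):
    """Extract the closest archived snapshot URL, or None if nothing is saved."""
    snap = response_json.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Illustrative response in the documented shape (not a live fetch):
sample = json.loads("""{
  "archived_snapshots": {
    "closest": {
      "available": true,
      "url": "http://web.archive.org/web/20050101000000/http://example.com/",
      "timestamp": "20050101000000",
      "status": "200"
    }
  }
}""")
print(availability_query("example.com", "20050101"))
print(closest_snapshot(sample))
```

For most Chinese pages of that era, as the author notes, `closest_snapshot` would simply return `None`: nothing was ever captured.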

From a technical standpoint, it should not be difficult to save all the web pages of the Chinese Internet up to the rise of the mobile Internet over the past ten years, and the cost would not be high. After all, compared with today's Internet, where video reigns, that era of graphically modest web pages is negligible in terms of storage.
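The "negligible in terms of storage" claim can be made concrete with a back-of-envelope estimate. Both inputs below are assumptions chosen for illustration; neither the page count nor the average page size comes from the article.

```python
# Back-of-envelope: storage for the pre-mobile, text-and-image Chinese web.
pages = 10_000_000_000   # assumed: ~10 billion pages from the web-page era
avg_page_kb = 50         # assumed: ~50 KB per page once compressed

total_bytes = pages * avg_page_kb * 1024
total_pb = total_bytes / 1024**5
print(f"{total_pb:.2f} PB")  # under half a petabyte, tiny next to video platforms
```

Even with these deliberately generous inputs, the whole corpus fits in a fraction of a petabyte, which supports the author's point that cost is not the real obstacle.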

The question is who would do it, and driven by what.

Companies will not. They would have no commercial interest in it.

The government might be able to create archives preserving every page, just as it builds libraries and museums. But would it spend money on this? There seems to be no reason other than preserving history. And even if the government did it, it would change nothing for ordinary Internet users, because such archives would require immense processing of personal data and would be accessible only to a few, precisely to prevent any misuse of that data.

Besides, even if some body were willing to do it, it would now be too late. Since the rise of the mobile Internet, by a rough estimate, more than 99% of the content of the traditional Chinese Internet has probably disappeared.

In a way, the articles I have written have helped preserve the history of their subjects. Had I not written about them, much of that history would already be untraceable online. Yet this is not original information, only second-hand information that I consolidated.

Today, for all the major events of the first decade of this century, and all the celebrities who left deep marks, the information still findable on the Chinese Internet is almost always second-hand information edited by online-only media, or even information that has been copied and republished so many times that it has long since lost any resemblance to its original form.

The original reports, the original videos, the original speeches, the original observations of netizens, the original comments: all of it has disappeared.

In a few years, all this second-hand information will have disappeared too. It will be as if these events had never taken place. As if these people had never existed.

There is nothing to do but accept reality.

In the Internet age, the first twenty years of the 21st century will be twenty years without historical archives.

If you can still see old information from the Chinese Internet today, it is only the last glow of dusk.

If you are seized by its ephemeral nature, you may sigh like Faust on his deathbed, imploring the moment: "Stay a while, you are so beautiful!"

But that glow will soon be swallowed by time and fall into the void, along with your exclamation.

There is no escape.

Almost everything you see and create now, this article, this platform, will eventually drown in the void.

Permalink
June 3, 2024 at 8:12:51 PM GMT+2

The 'Dead-Internet Theory' Is Wrong but Feels True - The Atlantic
https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/

  • Conspiracy
  • Artificial intelligence

Maybe You Missed It, but the Internet ‘Died’ Five Years Ago

A conspiracy theory spreading online says the whole internet is now fake. It’s ridiculous, but possibly not that ridiculous?

By Kaitlyn Tiffany

If you search the phrase i hate texting on Twitter and scroll down, you will start to notice a pattern. An account with the handle @pixyIuvr and a glowing heart as a profile picture tweets, “i hate texting i just want to hold ur hand,” receiving 16,000 likes. An account with the handle @f41rygf and a pink orb as a profile picture tweets, “i hate texting just come live with me,” receiving nearly 33,000 likes. An account with the handle @itspureluv and a pink orb as a profile picture tweets, “i hate texting i just wanna kiss u,” receiving more than 48,000 likes.

There are slight changes to the verb choice and girlish username and color scheme, but the idea is the same each time: I’m a person with a crush in the age of smartphones, and isn’t that relatable? Yes, it sure is! But some people on Twitter have wondered whether these are really, truly, just people with crushes in the age of smartphones saying something relatable. They’ve pointed at them as possible evidence validating a wild idea called “dead-internet theory.”

Let me explain. Dead-internet theory suggests that the internet has been almost entirely taken over by artificial intelligence. Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls, and idly curious lovers of chitchat. One might, for example, point to @_capr1corn, a Twitter account with what looks like a blue orb with a pink spot in the middle as a profile picture. In the spring, the account tweeted “i hate texting come over and cuddle me,” and then “i hate texting i just wanna hug you,” and then “i hate texting just come live with me,” and then “i hate texting i just wanna kiss u,” which got 1,300 likes but didn’t perform as well as it did for @itspureluv. But unlike lots of other online conspiracy theories, this one has a morsel of truth to it. Person or bot: Does it really matter?

Read: The internet is mostly bots

Dead-internet theory. It’s terrifying, but I love it. I read about it on Agora Road’s Macintosh Cafe, an online forum with a pixelated-Margaritaville vibe and the self-awarded honor “Best Kept Secret of the Internet!” Right now, the background is a repeated image of palm trees, a hot-pink sunset, and some kind of liquor pouring into a rocks glass. The site is largely for discussing lo-fi hip-hop, which I don’t listen to, but it is also for discussing conspiracy theories, which I do.

In January, I stumbled across a new thread there titled “Dead Internet Theory: Most of the Internet is Fake,” shared by a user named IlluminatiPirate. Over the next few months, this would become the ur-text for those interested in the theory. The post is very long, and some of it is too confusing to bother with; the author claims to have pieced together the theory from ideas shared by anonymous users of 4chan’s paranormal section and another forum called Wizardchan, an online community premised on earning wisdom and magic through celibacy. (In an email, IlluminatiPirate, who is an operations supervisor for a logistics company in California, told me that he “truly believes” in the theory. I agreed not to identify him by name because he said he fears harassment.)

Peppered with casually offensive language, the post suggests that the internet died in 2016 or early 2017, and that now it is “empty and devoid of people,” as well as “entirely sterile.” Much of the “supposedly human-produced content” you see online was actually created using AI, IlluminatiPirate claims, and was propagated by bots, possibly aided by a group of “influencers” on the payroll of various corporations that are in cahoots with the government. The conspiring group’s intention is, of course, to control our thoughts and get us to purchase stuff.

As evidence, IlluminatiPirate offers, “I’ve seen the same threads, the same pics, and the same replies reposted over and over across the years.” He argues that all modern entertainment is generated and recommended by an algorithm; gestures at the existence of deepfakes, which suggest that anything at all may be an illusion; and links to a New York story from 2018 titled “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.” “I think it’s entirely obvious what I’m subtly suggesting here given this setup,” the post continues. “The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population.” So far, the original post has been viewed more than 73,000 times.

Read: Artificial intelligence is misreading human emotion

Obviously, the internet is not a government psyop, even though the Department of Defense had a role in its invention. But if it were, the most compelling evidence to me is the dead-internet theory’s observation that the same news items about unusual moon-related events seem to repeat year after year. I swear I’ve been saying this for years. What is a super flower blood moon? What is a pink supermoon? A quick search of headlines from just this month brings up: “There’s Something Special About This Weekend’s Moon,” “Don’t Miss: Rare, Seasonal ‘Blue Moon’ Rises Tonight,” and “Why This Weekend’s Blue Moon Is Extra Rare.” I just don’t understand why everyone is so invested in making me look at the moon all the time? Leave me alone about the moon!

Dead-internet theory is a niche idea because it’s patently ridiculous, but it has been spreading. Caroline Busta, the Berlin-based founder of the media platform New Models, recently referenced it in her contribution to an online group show organized by the KW Institute for Contemporary Art. “Of course a lot of that post is paranoid fantasy,” she told me. But the “overarching idea” seems right to her. The theory has become fodder for dramatic YouTube explainers, including one that summarizes the original post in Spanish and has been viewed nearly 260,000 times. Speculation about the theory’s validity has started appearing in the widely read Hacker News forum and among fans of the massively popular YouTube channel Linus Tech Tips. In a Reddit forum about the paranormal, the theory is discussed as a possible explanation for why threads about UFOs seem to be “hijacked” by bots so often.

The theory’s spread hasn’t been entirely organic. IlluminatiPirate has posted a link to his manifesto in several Reddit forums that discuss conspiracy theories, including the Joe Rogan subreddit, which has 709,000 subscribers. In the r/JoeRogan comments, users argue sarcastically—or sincerely?—about who among them is a bot. “I’m absolutely the type of loser who would get swindled into living among bots and never realize it,” a member of the 4chan-adjacent Something Awful forum commented when the theory was shared there in February. “Seems like something a bot would post,” someone replied. Even the playful arguments about how everything is the same are the same.

Read: Why is Joe Rogan so popular?

That particular conversation continued down the bleakest path imaginable, to the point of this comment: “If I was real I’m pretty sure I’d be out there living each day to the fullest and experiencing everything I possibly could with every given moment of the relatively infinitesimal amount of time I’ll exist for instead of posting on the internet about nonsense.”

Anyway … dead-internet theory is pretty far out-there. But unlike the internet’s many other conspiracy theorists, who are boring or really gullible or motivated by odd politics, the dead-internet people kind of have a point. In the New York story that IlluminatiPirate invokes, the writer Max Read plays with paranoia. “Everything that once seemed definitively and unquestionably real now seems slightly fake,” he writes. But he makes a solid argument: He notes that a majority of web traffic probably comes from bots, and that YouTube, for a time, had such high bot traffic that some employees feared “the Inversion”—the point when its systems would start to see bots as authentic and humans as inauthentic. He also points out that even engagement metrics on sites as big and powerful as Facebook have been grossly inflated or easily gamed, and that human presence can be mimicked with click farms or cheap bots.

Some of this may be improving now, for better or for worse. Social-media companies have gotten a lot better at preventing the purchase of fake views and fake likes, while some bot farmers have, in response, become all the more sophisticated. Major platforms still play whack-a-mole with inauthentic activity, so the average internet user has no way of knowing how much of what they see is “real.”

But more than that, the theory feels true: Most weeks, Twitter is taken over by an argument about how best to practice personal hygiene, or which cities have the worst food and air quality, which somehow devolves into allegations of classism and accusations of murder, which for whatever reason is actually not as offensive as classism anymore. A celebrity is sorry. A music video has broken the internet. A meme has gotten popular and then boring. “Bennifer Might Be Back On, and No One’s More Excited Than Twitter.” At this point, you could even say that the point of the theory is so obvious, it’s cliché—people talk about longing for the days of weird web design and personal sites and listservs all the time. Even Facebook employees say they miss the “old” internet. The big platforms do encourage their users to make the same conversations and arcs of feeling and cycles of outrage happen over and over, so much so that people may find themselves acting like bots, responding on impulse in predictable ways to things that were created, in all likelihood, to elicit that very response.

Thankfully, if all of this starts to bother you, you don’t have to rely on a wacky conspiracy theory for mental comfort. You can just look for evidence of life: The best proof I have that the internet isn’t dead is that I wandered onto some weird website and found an absurd rant about how the internet is so, so dead.

Permalink
May 31, 2024 at 10:31:44 AM GMT+2

Disrupting deceptive uses of AI by covert influence operations | OpenAIhttps://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/

  • Artificial intelligence
  • Psychology
  • PsyOps
  • Politics
  • War

Disrupting deceptive uses of AI by covert influence operations

We’ve terminated accounts linked to covert influence operations; no significant audience increase due to our services.

OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content. That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.

In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services.

This blog describes the threat actors we disrupted, attacker trends we identified, and important defensive trends - including how designing AI models with safety in mind in many cases prevented the threat actors from generating the content they desired, and how AI tools have made our own investigations more efficient. Alongside this blog, we are publishing a trend analysis that describes the behavior of these malicious actors in detail.

Read the full report

Threat actors work across the internet. So do we. By collaborating with industry, civil society, and government we tackle the creation, distribution, and impact of IO content. Our investigations and disruptions were made possible in part because there’s been so much detailed threat reporting over the years by distribution platforms and the open-source community. OpenAI is publishing these findings, as other tech companies do, to promote information sharing and best practices amongst the broader community of stakeholders.

Disruption of covert influence operations

Over the last three months, our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts.

Specifically, we disrupted:

  • A previously unreported operation from Russia, which we dubbed Bad Grammar, operating mainly on Telegram and targeting Ukraine, Moldova, the Baltic States and the United States. The people behind Bad Grammar used our models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on Telegram.
  • An operation originating in Russia known as Doppelganger. People acting on behalf of Doppelganger used our models to generate comments in English, French, German, Italian and Polish that were posted on X and 9GAG; translate and edit articles in English and French that were posted on websites linked to this operation; generate headlines; and convert news articles into Facebook posts.
  • A Chinese network known as Spamouflage, which used our models to research public social media activity, generate texts in languages including Chinese, English, Japanese and Korean that were then posted across platforms including X, Medium and Blogspot, and debug code for managing databases and websites, including a previously unreported domain, revealscum[.]com.
  • An Iranian operation known as the International Union of Virtual Media (IUVM), which used our models to generate and translate long-form articles, headlines and website tags that were then published on a website linked to this Iranian threat actor, iuvmpress[.]co.
  • Activity by a commercial company in Israel called STOIC (technically we disrupted the activity, not the company). We nicknamed this operation Zero Zeno, after the founder of the Stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation.

The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.

So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services. Using Brookings’ Breakout Scale, which assesses the impact of covert IO on a scale from 1 (lowest) to 6 (highest), none of the five operations included in our case studies scored higher than a 2 (activity on multiple platforms, but no breakout into authentic communities).

Attacker trends

Based on the investigations into influence operations detailed in our report, and the work of the open-source community, we have identified the following trends in how covert influence operations have recently used artificial intelligence models like ours.

  • Content generation: All these threat actors used our services to generate text (and occasionally images) in greater volumes, and with fewer language errors than would have been possible for the human operators alone.
  • Mixing old and new: All of these operations used AI to some degree, but none used it exclusively. Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts or memes copied from across the internet.
  • Faking engagement: Some of the networks we disrupted used our services to help create the appearance of engagement across social media - for example, by generating replies to their own posts. This is distinct from attracting authentic engagement, which none of the networks we describe here managed to do to a meaningful degree.
  • Productivity gains: Many of the threat actors that we identified and disrupted used our services in an attempt to enhance productivity, such as summarizing social media posts or debugging code.

Defensive trends

While much of the public debate so far has focused on the potential or actual use of AI by attackers, it is important to remember the advantages that AI offers to defenders. Our investigations also benefit from industry sharing and open-source research.

  • Defensive design: We impose friction on threat actors through our safety systems, which reflect our approach to responsibly deploying AI. For example, we repeatedly observed cases where our models refused to generate the text or images that the actors asked for.
  • AI-enhanced investigation: Similar to our approach to using GPT-4 for content moderation and cyber defense, we have built our own AI-powered tools to make our detection and analysis more effective. The investigations described in the accompanying report took days, rather than weeks or months, thanks to our tooling. As our models improve, we’ll continue leveraging their capabilities to improve our investigations too.
  • Distribution matters: Like traditional forms of content, AI-generated material must be distributed if it is to reach an audience. The IO posted across a wide range of different platforms, including X, Telegram, Facebook, Medium, Blogspot, and smaller forums, but none managed to engage a substantial audience.
  • Importance of industry sharing: To increase the impact of our disruptions on these actors, we have shared detailed threat indicators with industry peers. Our own investigations benefited from years of open-source analysis conducted by the wider research community.
  • The human element: AI can change the toolkit that human operators use, but it does not change the operators themselves. Our investigations showed that these actors were as prone to human error as previous generations have been - for example, publishing refusal messages from our models on social media and their websites. While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.
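
The last point, operators pasting model refusal messages straight into their posts, is something open-source researchers can check for mechanically. Here is a minimal sketch of that idea in Python; the marker list and the helper function are illustrative assumptions on my part, not OpenAI's actual detection logic:

```python
# Sketch: flag posts that contain verbatim AI-refusal boilerplate,
# the operator error described above. The marker list is invented
# for illustration; real investigations use broader heuristics.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i can't",
]

def looks_like_pasted_refusal(post: str) -> bool:
    """True if the post contains a known refusal phrase verbatim."""
    text = post.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

posts = [
    "Great analysis, totally agree with the author!",
    "As an AI language model, I cannot fulfill this request.",
]
flagged = [p for p in posts if looks_like_pasted_refusal(p)]
```

A check like this only catches the sloppiest operators, which is exactly the human-element point: the tooling is new, but the people running it make the same mistakes as ever.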

We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use. Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.

Permalink
May 31, 2024 at 10:28:35 AM GMT+2

Cyber Security: A Pre-War Reality Check - Bert Hubert's writingshttps://berthub.eu/articles/posts/cyber-security-pre-war-reality-check/

  • Cybersecurity
  • War
  • Spying

Cyber Security: A Pre-War Reality Check

Posted on May 14 2024

This is a lightly edited transcript of my presentation today at the ACCSS/NCSC/Surf seminar ‘Cyber Security and Society’. I want to thank the organizers for inviting me to their conference & giving me a great opportunity to talk about something I worry about a lot. Here are the original [slides with notes](https://berthub.eu/prewar/ncsc accss surf keynote bert hubert-notes.pdf), which may be useful to view together with the text below. In the notes there are also additional URLs that back up the claims I make in what follows.

So, well, thank you so much for showing up.

And I’m terribly sorry that it’s not going to be a happy presentation.

This is also sort of an academic environment, and this is not going to be an academic talk. This is not going to be subtle. But I’m trying to alter, to modulate your opinion on the world of cyber security a little bit.

Cyber security and society, a pre-war reality check

We’re already worried enough about cyber security. Is anyone here not worried about cyber security? And you could go home now, otherwise. Okay, that’s good. So you can all stay.

First, some important words from Donald T:

“I know it sounds devastating, but you have to get used to the fact that a new era has begun. The pre-war era.”

And this comes from Donald Tusk, the Polish Prime Minister from 2007 to 2014.

And at the time, he, and the Baltic states, said that Russia was a real threat. And everyone’s like, yeah, yeah, it’ll last. And we’ll just do so much business with them that we will not get bombed. And that did not work.

And now Donald Tusk is again the Prime Minister of Poland. And he’s again telling us that, look, we are in a bad era and we are underestimating this.

We are used to thinking about cyber security in terms of can we keep our secrets safe? Are we safe against hackers or ransomware or other stuff? But there is also a war dimension to this. And this is what I want to talk about here.

So briefly, Nicole already mentioned it, I’ve done a lot of different things, and this has given me varied insights into security. I’ve worked with Fox-IT for a long while. PowerDNS is not a well-known company, but it delivered services to KPN, Ziggo, British Telecom, Deutsche Telekom. And they all run their internet through the PowerDNS software.

And through that, I got a lot of exposure to how do you keep a national telecommunications company secure.

And can the national telecommunications companies keep themselves secure?

And that was useful.

I spent time at intelligence agencies, I spent time regulating intelligence agencies. And that may be also useful to talk about a little bit. Through that regulatory body, for nearly two years, I got a very good insight into every cyber operation that the Dutch government did. Or every cyber operation that was done on the Dutch government.

I cannot tell you anything about that stuff. But it was really good calibration. You know what kind of stuff is going on. Unique to the Netherlands is that this board, which regulates the intelligence agencies, actually has two judges, the little guy on the left and on the right:

And in the middle, there was someone with different experience. That’s what the law says. They couldn’t get themselves to say someone with technical experience. It was a bridge too far. But at least they said we can have someone with different experience.

And this is unique in Europe, that there is an intelligence agency that is being regulated with an actual technical person in there. And we’ll come to why that is important later.

So everyone is of course saying, look, the cyber security world is just terrible and doomed. And someone is going to shut off our electricity and kill our internet and whatever. Or disable a hospital. And so far, not a lot of this stuff has actually been happening.

In 2013, Brenno de Winter wrote a book called The Digital Storm Surge, in which he said, look, we haven’t had any real cyber incidents that really speak to the imagination. So we’ve had, of course, the printer is down. The printer is always down.

We don’t actually rely on computers that much, because they break all the time. So we do not have a lot of life and death situations involving computers.

Brenno, in 2013, predicted that we would only take cyber security seriously once we had the kind of incident where lots of self-driving cars, which are built to avoid pedestrians, have one bit flipped, and they all start aiming at pedestrians instead.

And you get like thousands of people dead because all kinds of cars decide to drive over people. And he mentioned there that before the sinking of the Titanic, there was no regulation for how to build ships.

So you could just build a ship and if it looked good, people assumed that it would also be good. And only after the Titanic, they started saying, oh, we need to have steel that’s this thick, and you can have the steam engine, not here, it must be there. So he said the Titanic was the regulatory event for ship building.

And in 2013, Brenno said we have not had anything serious yet, and we will only get serious legislation once the Titanic sinks. And it didn’t sink.

However, the EU got vision.

This is the most optimistic slide in the whole presentation.

For some reason, the EU decided that this couldn’t go on. And so they launched like six or seven new laws to improve the state of our cybersecurity.

And this is like the sinking of the Titanic. So we’re not properly realizing how much work this is going to be. Because the thing is, they’ve written all these laws already, and only one of them is sort of semi-active right now, and the rest is still coming.

So this is our post-Titanic environment, and this might improve the situation of cybersecurity somewhat. Because it’s like terrible.

So, some real cyber incidents: real stuff that broke in war.

This is the former president of Iran, Mahmoud Ahmadinejad. And here he is admiring his uranium ultracentrifuge facilities.

And this was the famous Stuxnet operation, where apparently the West was able to disable the ultracentrifuges used to create highly enriched uranium.

And not only did they disable it, like the factory is down now, it tore itself to shreds physically.

So this is one of the few sort of military cyber attacks that we know about.

This is like one third of them. The other one that happened was just before Russia invaded Ukraine, they managed to disable the Viasat modems. And this is an interesting case. These modems are used for satellite communications. And they were able to attack these modems so that they physically disabled themselves.

It was not like the denial of service attack on the network. No, they managed to wipe the firmware of all these modems in such a way that it could not be replaced.

The reason we know about this stuff so well is it turns out there were lots of windmills that also had these modems.

In Germany, apparently 4,000 of these modems stopped working. And there were 4,000 wind turbines that could no longer be operated. So this was a military cyber attack that happened as Russia was invading Ukraine. And it was of great benefit to them because it disabled a lot of military communications in Ukraine.

But this is the kind of thing that can happen, only that it’s quite rare.

Earlier, Russia disabled a lot of the electricity networks in Ukraine using a similar kind of attack. And it turned out that the Ukrainians were so good (and their systems so simple and robust) that they had a disruption of like only six hours, which is really impressive.

And I want you to imagine already what would happen if we had such an attack on a Dutch power company. They’re very nimble [irony, they are not]. I mean, try asking a question about your invoice.

So I’m going to talk about rough times. And I started my presentation with Donald Tusk telling us we are in a pre-war era, and I truly believe that. But it’s a difficult thing to believe. I also do not want to believe it. I also want to be like, no, this stuff is over there in Ukraine. It’s not here. But even if you think there’s only a 10% chance, then it’s quite good to already think about this kind of stuff.

Even if you are such a diehard pacifist that you are convinced that it’s never going to happen, you can just imagine that I’m talking about robustness in the face of climate change.

Because also then you want to have your stuff that works.

So there are three things I identified, that you really care about in a war, in a chaotic situation where there’s no power.

You want infrastructure that is robust, that does not by itself fall over.

If we look at modern communications tools, like, for example, Microsoft 365, that falls over like one or two days a year without being attacked. It just by itself already falls over. That’s not a robust infrastructure in that sense.

Limited and known dependencies.

Does your stuff need computers working 5,000 kilometers away? Does your stuff need people working on your product 5,000 kilometers away that you might no longer be able to reach?

So, for example, if you have a telecommunications company and it’s full of telecommunications equipment and it’s being maintained from 5,000 kilometers away, if something goes wrong, you better hope that the connection to the people 5,000 kilometers away is still working, because otherwise they cannot help you.

The third one, when things go wrong, you must be able to improvise and fix things. Truly own and understand technology.

For example, you might not have the exact right cable for stuff, and have to put in an unofficial one.

You might have to fix the firmware yourself. You must really know what your infrastructure looks like.

Let’s take a look at these three aspects of modern communications methods. And we’re going to start with one of my very favorite machines, and I hope you will love this machine as much as I do.

This is the sound-powered phone. So a sound-powered phone is literally what it is. It’s a piece of metal. It probably has, like, five components in there. And out comes a wire. Even the wire is actually in some kind of steel tube. And this thing allows you to make phone calls without electricity.

So if your ship is on fire, and you need to call the deck and say, “Hey, the ship is on fire,” this thing will actually work, unlike your voice-over-IP setup. After the first strike on your ship, when there’s been a power dip and all the servers are rebooting, this thing will still work.

If you try to break it, you could probably strike it with a hammer. It will still work. It’s very difficult to disable this machine. Attempts have been made to disable it, because it’s so ridiculously simple that people think we must make progress, and we must have digital phones. And, well, this machine is still going strong. And people have tried to replace it, but in war-fighting conditions, this is the kind of machine that you need. This one can make calls to ten different stations, by the way. It’s even quite advanced. And they can make phone calls over cables that are 50 kilometers long. So it’s a very impressive machine.

And now we’re going to head to some less impressive things.

This was the Dutch Emergency Communication Network (Mini-noodnet). There is not much known about this Emergency Communication Network, although Paul might know a few things. [Paul confirms that he does] Because a lot of this stuff is sort of semi-classified, and they’re not really telling anyone about it.

But this was a copper wire network through 20 bunkers in the Netherlands, completely independent from the regular telephone network. It was a very simple telephone network, but it was supposed to survive war and disasters. And it had these 20 bunkers. It had guys like this guy running it. And it was fully redundant. You can see that because the top rack has B on it, and the other one has A on it. It was actually fully redundant. It was really nice stuff.

And of course, we shut it down.

Because it’s old stuff, and we need to have modern stuff. And it’s very sad. Because it has now been replaced by this:

They tried to sort of renew this emergency telephone network, but no one could do it anymore. And then they said, “Look, we’re just going to ask KPN.” So now it’s DSL modems and the KPN VPN service. And this (the Noodcommunicatievoorziening) is now supposed to survive major incidents.

And of course, it will not.

Because every call that you make through this emergency network passes through all of KPN, like 20 different routers. And if something breaks, then this is likely the first thing that will break.

During a power outage a few years ago, there was an attempt to use the system, and it turned out that didn’t work. Because the power was out. Yeah, it’s embarrassing, but that’s what happened.

So we’ve made the trip from this wonderful thing to this pretty impressive thing to this thing. And then we have Microsoft Teams. Which is a very…

I know there are Microsoft people in the room, and I love them. When it works, it’s great. I mean, it exhausts the battery of my laptop in 20 minutes, but it’s very impressive.

And you have to realize that it works like almost always. Maybe not always audio and stuff, but quite often it will work.

So we’ve made this trip from here (sound powered phone) to here (Teams). And that’s not good. And I want to show you, (big WhatsApp logo). This is the actual Dutch government emergency network.

Which is interesting in itself, because it’s actually sort of really good at these short text-based messages. So if you want to have a modern emergency network, it could look a lot like WhatsApp. In terms of concept. Except that we should not have chosen the actual WhatsApp to do this stuff.

Because if the cable to the US is down, I can guarantee you WhatsApp is also down. So this is an emergency network that is itself not super redundant. But it’s very popular in times of disaster.

We know this because after a disaster, people do an investigation to figure out how did the communications go. And you have all these screenshots of these WhatsApp groups. So I’m not knocking it because it actually works.

Unlike this thing (the modern VoIP NCV). It’s not that expensive though. They just renewed it. It’s like six million euros a year. It’s not bad.

So how bad is losing communications? The Dutch road management people (Rijkswaterstaat) have a very good Mastodon account and also a Twitter account, I assume.

Where they will almost every day tell you, look, there’s a bridge, and it won’t close. And then they say, and I find this fascinating, they say, yeah, we called the engineer. So it says here, de monteur. We called de monteur.

It is like they have one of these guys who sits there with a van, waiting for a call.

I assume they have multiple ones.

But still, you could disrupt all of the Netherlands if you just put the bridges open. So if you have any kind of war kind of situation, you’re trying to mobilize, you’re trying to get the tanks from A to B, apparently you can just shut down the bridge.

And it happens a lot. And then you need to reach the engineer. But you have to use a phone to do that. Because I assume that this engineer sits there waiting until the phone rings. And let’s say the phone does not ring, because the phone network is down, then your bridge stays open.

But also you have to find the phone number of the engineer, of course, and that might well be hiding out in an Excel sheet in your cloud environment. So that means that the effective chain to get this bridge fixed, the bridge fixed in 2024, likely includes a completely working cloud environment and a phone environment, and then hoping that the guy with the van manages to get there, and that he does not have an electric van, which also needs a cloud to drive.

The picture on the left is, of course, well known. It’s used to illustrate that all the world of digital infrastructure often depends on just one person, which is bad enough.

But actually my thesis is this entire stack is way too high.

So if you want to run a modern society, we need all the power to be on everywhere. We need the cables to the US to be working. We need the cloud to be working. We need the phone to be working.
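
That “stack too high” worry can be made concrete: when every layer must be up at the same time, availabilities multiply, so the chain is always weaker than its weakest link. A back-of-the-envelope sketch; the uptime figures here are invented purely for illustration:

```python
# Sketch: the availability of a serial dependency chain is the
# product of the individual availabilities. Figures are made up.
def chain_availability(uptimes: dict) -> float:
    """Probability that every dependency in the chain is up at once."""
    total = 1.0
    for availability in uptimes.values():
        total *= availability
    return total

stack = {
    "power grid": 0.9995,
    "transatlantic cables": 0.999,
    "cloud environment": 0.998,
    "phone network": 0.997,
}

# Every layer looks fine on its own, but the whole chain is weaker
# than any single link in it.
print(f"whole stack up: {chain_availability(stack):.4f}")
```

The sound-powered phone, by contrast, is a chain of length one.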

That’s a far cry from this lovely machine (the sound powered phone), which always works.

So I’m a bit worried that if we have panic, if we have flooding or an invasion or an attack or whatever.

I think that our infrastructure will not hold up.

I also want to mention this one. This is the Botlek Bridge. This is a modern bridge. And this bridge has failed 250 times. And in its initial years, it would fail like 75 times a year.

And when this fails, then the consequences are huge because it’s the one way that truck traffic can get from A to B. And it has failed in total hundreds of times. And for years, no one could figure out why.

So it would just block. It would no longer go up and down. And a whole task force, they took one of the engineers and they put them in a van over there. And they made them live there. They had live-in engineers here to just work on this thing if it broke. And through that work, they managed to sort of halve the downtime of this bridge.

It has its own website, this bridge, to keep track of the outages. And it has its own SMS service where it will send you text messages if it is broken (“Sms ‘BBRUG AAN’ naar 3669”, not kidding).

Because it was broken that much. Then after many years, they found out how that happened. And the story was, there is a system in there that manages the state, the sensors, and that server had a rotten ethernet cable or port.

And during that two-year period, everyone thought, it cannot be the computer. No one came and said, shall we just replace all the cables and ethernet ports for once and see what happens? We lacked the expertise.

And this is the third component I mentioned in the things that you really care about. Do you have sufficient ownership and knowledge of your own infrastructure that you can repair it?

And here, that apparently took more than three years. Maybe they just solved it by accident because someone needed that cable for their other computer.

I don’t know. But it’s super embarrassing. This is a sign that you do not have control over your own infrastructure.

That you have a major bridge and for three years long, you do not manage to find out what is wrong with it. And I worry about that.

Now it’s time for a little bit of good news. This is another big infrastructure project in the Netherlands. It’s the Maeslantkering.

And it protects us against high water. It’s a marvelous thing. It’s very near my house. Sometimes I just go there to look at it because I appreciate it so much. This machine is, again, this is the sound-powered phone infrastructure.

So you see here these two red engines that are used to push the thing closed. That’s literally all they do. They only push it closed. And when I visited, they said that even if these engines didn’t work, they had another way of pushing it closed. Because you really do need to close it when the water is high.

And it doesn’t even need to close completely. It’s a completely passive thing. It has no sensors. So this shows that it can be done: you can make simple infrastructure, and this is actually one of the pieces that works. They tried to mess it up by putting in some kind of weird, newly-invented Dutch computer, which turned out to be bullshit. But that only makes the decision whether it should close or not.

It’s a very lovely machine. So I would love to see more of this. I’d love to see more of this and less of this (Botlek bridge). Even though the pictures are marvelous.

So where are we actually with the cybersecurity? How are things going? Could we stand up to the Russian hackers? Not really.

Four years ago, we had this big discussion about 5G and if we should use Chinese infrastructure for our 5G telephony.

And everyone talking about that, politicians, thought that was a big choice that had to be made then.

And the reality was, when this decision was being taken, the Chinese were literally running all our telecommunications equipment already. But that is such an unhappy situation that people were like, “La, la, la, la, la.”

They were pretending that up to then, we were in control of our telecommunications infrastructure and we were now deciding to maybe use Chinese equipment. And that maybe that Chinese equipment could backdoor us.

But the reality was (and still partially is), they were actually running our infrastructure. If they wanted to harm us, the only thing they had to do was to stop showing up for work.

And this is still a very inconvenient truth. So I wrote this like four years ago, and it got read at the European Commission. Everyone read it. And people asked me, even very senior telco people, they said, “No, it’s not true.” And so I asked them, “So where are your maintenance people then?” So you can go to, for example, kpn.com and their job vacancies. And you will see that they never list a job vacancy that has anything to do with 5G. Because they are not running it.

And realize that in a previous century, we had 20 bunkers with our own independent telecommunications infrastructure, because we understood that telecommunications was really important. And now we have said, “No, it’s actually fine,” while it’s being run straight from Beijing. That’s a bit of a change.

So things are not good. People want to fix this, and they are making moves to fix the situation, but we aren’t there yet.

Google, Microsoft, AWS

So these are our new overlords. This is the cloud. This is the big cloud. These are apparently, according to Dutch government and semi-government agencies, the only people still able to do IT.

We had a recent situation in the Netherlands where the maintainers of .nl, and I know you’re here, decided that no one in Europe could run the IT infrastructure the way they wanted it anymore, and that it had to move very far away.

At this point, I want to clarify: some very fine people are working here (in the cloud). I’m not saying these are all terrible people. I AM saying they are many thousands of kilometers away, and may not be there for us in a very bad situation.

But apparently this is the future of all our IT. And I’ve had many talks in the past few weeks on this subject, and everyone in industry is convinced that you can no longer do anything without these three companies.

And that leads to this depressing world map, where we are in the middle, and we sort of get our clouds from the left, and the people maintaining that come from the right.

And we make cheese, I think. Really good cheese. And art. And handbags. Actually, one of the biggest Dutch companies, or European companies, is a handbag company. Very excellent. Louis Vuitton. It’s apparently a Dutch company. I didn’t know that either, but for tax reasons. We’re very good at tax evasion here, by the way.

And interestingly, it’s good to look at this exciting arrow here, because we see a lot of telecommunications companies are now moving to Ericsson and Nokia equipment, which is great.

But the maintenance on your Ericsson equipment is not done by a guy called Sven.

The maintenance actually comes from the fine people from far away. These are the people actually maintaining our infrastructure.

The problem is they’re very far away. The other problem is that both China, where a lot of the infrastructure actually still comes from, and India, are very closely aligned to Russia.

So we have effectively said, we’ve outsourced all our telecommunications stuff, so this is where the servers are being operated from, and these are the people that are actually maintaining the servers. And all of these places are geopolitically worrying right now, because we don’t know who wins the elections. It could be a weird guy.

And both India and China are like, “Oh, we love Russia.” How much fun would it be if our telcos were being attacked by Russian hackers, and we hope that Infosys is going to come to our rescue?

They might be busy. They could well have other important things to do.

And in any case, we are not going to save our own telecommunications companies, because we are not running them ourselves.

Oh, again, to cheer you up a little bit. This is a map of Europe, and within this small area is where all the high-tech chip-making equipment in the whole world gets made. It is not that we are completely helpless. I just said we were very good with cheese. Actually, we’re also very good with high-end optics and building chip-making equipment. So it’s not that we’re completely helpless. It’s just that we’ve chosen to focus on handbags and extreme UV optics, and not on running our own vital infrastructure.

So what’s the situation? Joost Schellevis, a Dutch journalist, recently decided to spend a weekend just scanning the Dutch Internet to see if he could find anything broken with it. And within a weekend of work, he found 10,000 places that were just open for hackers. This turned into a news item on the Dutch national news, and people said, “Yeah, yeah, yeah, that’s how it is.” That is not acceptable in a war-like situation: if a random journalist – and Joost is very good – can just sit there for a weekend and find 10,000 places he can hack, things are not good.

I know the NCSC and other places are working on it and improving it, and they can now scan for such weaknesses. But until quite recently, journalists could scan for these things, and the Dutch government could not, because of legal reasons.

So it’s not good. The other thing I want to focus on – and this is really worrying – is that if we want to improve our security, it would be nice if we could tell companies, “You just need to install the right equipment. Just get good equipment, and you will be secure.” And that’s not the world we’re living in right now.

And all these places are not secure right now. So if you tell people, “Get a good firewall,” I currently have no advice for you, because all the “good ones” are actually not good. Most big security vendors right now are delivering terribly insecure products, with hundreds of issues per year.

You could not really recommend this stuff based on the statistics alone. Yet we are still doing it, because that’s what we used to buy. Again, this is a peacetime choice. In peacetime, you say, “Hey, I buy this stuff because it’s certified, because we bought it last year, and it was fine then, too.” Well, actually, it was not fine then either, but we just keep on buying shitty stuff.

And we get away with this for now. But Ukraine does not get away with this.

And just for your calibration: we are no longer really impressed by it, but if you look at the weekly or monthly security updates that come to us from the big security vendors, they just announce, “Yeah, we have 441 new security problems for you this month.” And there’s Oracle, and then there’s Microsoft: “Yeah, we have 150.” And this repeats roughly every month. I’m not going to pick on Microsoft or Oracle specifically, but we’ve somehow decided it’s okay if vendors say, “Yeah, we have 1,000 new security vulnerabilities for you to deal with every month.” We cannot have this and assume that things will be good. Yet that is what we do.

And I love this one. So you might think that, look, the hackers have become really good, really advanced. That’s why we keep finding all these security issues. And it turns out that’s not the case.

The security issues that are being found are still extremely basic. So this is, for example, help desk software that people use so that the help desk can take over your computer and stuff. And it turns out that if you connected to this appliance and you added one additional slash at the end of the URL, it would welcome you as a new administrator, allowing you to reset the password.
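A minimal sketch of this class of bug, with entirely hypothetical route names (this is not the vendor's actual code): the authentication layer compares the exact request path against a set of protected routes, while the backend treats `/setup` and `/setup/` as the same page.

```python
# Hypothetical illustration of a trailing-slash authentication bypass.
PROTECTED = {"/setup", "/admin"}

def requires_auth(path: str) -> bool:
    # Bug: "/setup/" is not literally in the set, so the auth check
    # is skipped, yet the backend serves the same setup page for it.
    return path in PROTECTED

def requires_auth_fixed(path: str) -> bool:
    # Fix: normalize the path before comparing, so "/setup/",
    # "/setup//" and "/setup" all map to the same protected resource.
    normalized = "/" + "/".join(p for p in path.split("/") if p)
    return normalized in PROTECTED
```

One extra character defeats the exact-match check; normalizing before comparison (or matching by prefix after canonicalization) closes it.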

And this is not even exceptional. I mean, this is par for the course, because, for example, here we have GitLab, which people use to store their source code securely: they don’t want to put it on the public Internet, so they put it on their own internal network. And it has a “forgot your password” link. And it turns out that if you provide it with two email addresses and you click on “forgot your password,” it will send a reset link to the second email address.

But it checked only the first email address to see if you were really the administrator. And this was in GitLab for like six months.
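As an illustration only (hypothetical names, not GitLab's real code), the pattern looks roughly like this: the form delivers a list of addresses, the validation looks at one element, and the mail loop uses all of them.

```python
sent = []  # records where reset links were mailed, for demonstration

def send_reset_link(address):
    sent.append(address)

USERS = {"admin@example.org"}  # accounts that actually exist

def reset_buggy(emails):
    # Bug: only the first submitted address is validated...
    if emails and emails[0] in USERS:
        # ...but a reset link is mailed to every address in the list,
        # including an attacker-controlled one.
        for addr in emails:
            send_reset_link(addr)

def reset_fixed(emails):
    # Fix: a link is only ever mailed to an address that itself
    # belongs to a known account.
    for addr in emails:
        if addr in USERS:
            send_reset_link(addr)
```

Submitting `["admin@example.org", "attacker@evil.example"]` to the buggy version mails the attacker a link that resets the administrator's password.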

Many of the recent security incidents are of this level. There are, of course, very advanced attacks as well, but quite a lot of this stuff is childishly simple things.

Ivanti. If you work for the Dutch government, you will very frequently see this screen when you log in. The U.S. government has disallowed the use of this software. They have said, “You can no longer use this software.” And the Dutch government says, “Well, we put another firewall in front of it, and it’s good now.”

You can see it above in the circle. This is the elite hacking technique: dot, dot, slash. And it still works, in 2024.
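The technique really is that basic. A hedged sketch (hypothetical base directory, not any vendor's actual code) of the broken and the repaired version:

```python
import os

BASE = "/var/www/files"  # hypothetical document root

def resolve_buggy(user_path: str) -> str:
    # Bug: user input is glued onto the base directory unchecked,
    # so "../../../etc/passwd" walks right out of BASE.
    return BASE + "/" + user_path

def resolve_safe(user_path: str) -> str:
    # Fix: normalize the combined path and refuse anything that
    # resolves to a location outside the base directory.
    full = os.path.normpath(os.path.join(BASE, user_path))
    if not full.startswith(BASE + os.sep):
        raise ValueError("path traversal attempt")
    return full
```

Every `../` segment moves one directory up; without canonicalizing and re-checking the result, the attacker chooses where the path ends up.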

So the situation is not good.

So let’s move to the cloud and fix all these things.

Again, I want to apologize to the Microsoft people because I should have diversified my hate a little bit.

Microsoft said, “Yeah, it seems that we’ve been sort of compromised, but we’re on top of it.”

And then after a while, they said, “Well, yeah, actually…”

The one fun thing, if you really want to know how it is with the security of a company, you should go to their stock exchange information because there you have to admit all your problems. And if you do not admit your problems there, the board of directors goes to jail, which makes them remarkably honest. It’s very good. If you read this from most vendors, you just cry because it’s like, “Yeah, we know. Basically everything we do is broken,” it says there. Here at the Microsoft one, Microsoft says, “Yeah, turns out when we sort of looked again, we were sort of still hacked.”

Oh, okay.

And then came the Cyber Safety Review Board in the US, which has awesome powers to investigate cyber incidents, and you really must read this report.

Microsoft is actually a member of this board, which makes it interesting that they still did a very good investigation. And they said, “Yeah, it’s actually sort of… We’re full of Chinese hackers, and we’re working on it. Work in progress.”

So if you just say, “Let’s just move to the cloud,” your life is also not suddenly secure.

That’s what I’m saying.

And meanwhile, we have decided in Europe to move everything to these clouds. The Dutch government has just managed to come up with a statement saying there are a few things that they will not move to the cloud: the classified things and the basic government registrations.

So that’s the kind of system where, if you add something to the basic registration, you can create people.

And they said, “That’s not going to the cloud.” But basically, everything else is on the table. And we don’t really have a choice anymore, because of what happens next: if you used to run your own applications and your own IT infrastructure, and then you say, “We’re going to move everything to the cloud,” what happens to the people who were running that infrastructure? They leave. You often don’t even have to fire them, because their work gets so boring that they leave by themselves.

And that means that you end up with organizations that have started moving all the things to the cloud.

And now, if you don’t pay very close attention, you will end up with no one left that really knows what’s going on. And that means that you have to actively say:

“Okay, we know that we’re going to outsource almost everything, but we’re going to retain this limited number of staff, and we’re going to treat them really well, so that we at least, in theory, still know what is going on.”

This is not happening. So the good technical people are leaving everywhere. They often start working for one of these clouds, at which point they’re out of reach, because you never hear from Amazon how they do things.

This is something we are messing up. And it is making us incredibly vulnerable, because we now have these important places with no one left who really knows what the computer is doing.

Belle, in her opening, mentioned, “How could you be a manager of a subject that you don’t know anything about?” And I think it’s very good that she mentioned that, because in many other places, this is apparently not a problem.

So you could be the director of whatever cloud strategy, and you’re like, “Hey, I studied law.” And of course, it’s good that you study law, but it’s good also to realize it might be nice if you have a few people on the board that actually know what a computer does.

And this is one of the main reasons why this is happening. Our decision-making in Europe, but especially in The Netherlands, is incredibly non-technical.

So you can have a whole board full of people that studied history and art and French, and they sit there making our cloud decisions. And they simply don’t know.

And if there had been more nerds in that room, some of these things would not have happened. And that is also a call to maybe us nerds, although you don’t really look that nerdy, but do join those meetings.

Because quite often, we as technical people, we’re like, “Ah, these meetings are an interruption of my work, and I’m not joining that meeting.” And while you were not there, the company decided to outsource everything to India.

And again, there’s nothing against India, but it’s very far away.

This stuff cannot go on like this. This is a trend, a trend where we know ever less about what we are doing, where we are ever more reliant on people very far away.

The trend has already gone too far, but it’s showing no sign of stopping. It is only getting worse.

And this is my worst nightmare.

Ukraine had already been at war for two years and was battle-hardened. So anything that was simple to break had already been broken by the Russians. Then after two years, the Russians managed to break Kyivstar, one of the biggest telecommunications companies of Ukraine. This was a very destructive attack. But the Ukrainians (in and outside Kyivstar) are good enough that within two days they were back up and running, because these people were prepared for chaos.

They knew how to restore their systems from scratch. If we get an attack like this on VodafoneZiggo or on Odido, and they don’t get external help, they will be down for half a year, because they don’t know anything about their own systems.

And I’m super worried about that, because we are sitting ducks. And we’re fine with that.

So just a reminder, when times are bad, you are much more on your own, and no one has time for you.

If something goes wrong – remember the corona crisis, when we couldn’t make personal protective equipment, these face masks.

We couldn’t make them. And we had to beg people in China to please find time to make a few for us. Can you imagine, in a war situation, having to beg India – or, in a different situation, the Donald Trump administration – to please, please fix our cloud issues?

It’s a worrying thought, being that dependent. And we’re not good on any of these fronts right now.

So we’re rounding off. Is there a way back? Can we fix it?

And I made a little attempt myself.

I needed to share images with people, and I did not want to use the cloud, so I wanted to have an image sharing site. And I found out that the modern image sharing site, like Imgur, is five million lines of code and complexity.

That means it’s exceptionally vulnerable, because those five million lines will have a lot of vulnerabilities.

But then I wrote my own solution, a thing of 1,600 lines of code, which is thousands of times less than the competition.

And it works. It’s very popular. The IEEE picked it up. They even printed it in their paper magazine. I got 100 emails from people saying that it’s so nice that someone wrote a small piece of software that is robust, does not have dependencies, you know how it works.

But the depressing thing is, some of the security people in the field, they thought it was a lovely challenge to audit my 1,600 lines of code. And they were very welcome to do that, of course. And they found three major vulnerabilities in there.

Even though I know what I’m doing. I’m sort of supposed to be good at this stuff. And apparently, I was good at this stuff because I invited them to check it. And they found three major issues. And it makes me happy that you can still make this small, robust code. But it was depressing for me to see that even in 1,600 lines, you can hide three serious security vulnerabilities.

What do you think that means for 5 million lines? That’s basically insecure forever. So this was a little attempt to fight my way back. And at least many people agreed with me. That’s the most positive thing I can say about that.

But in summary, the systems that support our daily lives are way too complex and fragile. They fail by themselves.

So when a big telco has an outage, it is now always a question: is this a cyber thing or is it just an incompetence thing? Both could be true.

Maintenance of our technology is moving further and further away from us.

So if you look at the job vacancies at telecommunications companies, they’re not hiring anyone that does anything with radio networks.

Our own skills are wilting. We are no longer able to control our own infrastructure. We need help from around the world to just keep the communications working.

And that is the current situation. But now imagine this in wartime, it’s all terrible.

Why did it happen? Non-technical people have made the choices, and they have optimized for stuff being cheap, or at least hassle-free. And that’s only going to be fixed if we have more technical thinking going on.

But I have no solutions for making that happen.

And with that, I’m afraid I have no more slides to cheer you up, and I want to thank you very much for your attention.

May 19, 2024 at 12:24:12 AM GMT+2

Pivot to AI: Pay no attention to the man behind the curtain – Amy Castor
https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

  • Artificial intelligence
  • Crypto Currency
  • Enshitification

Pivot to AI: Pay no attention to the man behind the curtain


By Amy Castor and David Gerard, September 12, 2023, for AmyCastor

“all this talk of AI xrisk has the stink of marketing too. Ronald McDonald telling people that he has a bunker in New Zealand because the new burger they’re developing in R&D might be so delicious society will crumble.”

— Chris Martin

Crypto’s being dull again — but thankfully, AI has been dull too. The shine is coming off. So we’re back on the AI beat.

The AI winter will be privatized

Since the buzzword “artificial intelligence” was coined in the 1950s, AI has gone through several boom and bust cycles.

A new technological approach looks interesting and gets a few results. It gets ridiculously hyped up and lands funding. The tech turns out to be not so great, so the funding gets cut. The down cycles are called AI winters.

Past AI booms were funded mainly by the US Department of Defense. But the current AI boom has been almost completely funded by venture capital.

The VCs who spent 2021 and 2022 pouring money into crypto startups are pivoting to AI startups, because people buy the idea that AI will change the world. In the first half of 2023, VCs invested more than $40 billion into AI startups, and $11 billion just in May 2023. This is even as overall VC funding for startups dropped by half in the same period from the year before. [Reuters; Washington Post]

The entire NASDAQ is being propped up by AI. It’s one of the only fields that is still hiring.

In contrast, the DOD only requested $1.8 billion for AI funding in its 2024 budget. [DefenseScoop]

So why are VCs pouring money into AI?

Venture capital is professional gambling. VCs are looking for a liquidity event. One big winner can pay for a lot of failures.

Finding someone to buy a startup you’ve funded takes marketing and hype. The company doing anything useful, or anything that even works, is optional.

What’s the exit plan for AI VCs? Where’s the liquidity event? Do they just hope the startups they fund will do an initial public offering or just get acquired by a tech giant before the market realizes AI is running out of steam?

We’re largely talking about startups whose business model is sending queries to OpenAI.

At least with “Web3,” the VCs would just dump altcoins on retail investors via their very good friends at Coinbase. But with AI, we can’t see an obvious exit strategy beyond finding a greater fool.

Pay no attention to the man behind the curtain

The magical claim of machine learning is that if you give the computer data, the computer will work out the relations in the data all by itself. Amazing!

In practice, everything in machine learning is incredibly hand-tweaked. Before AI can find patterns in data, all that data has to be tagged, and output that might embarrass the company needs to be filtered.

Commercial AI runs on underpaid workers in English-speaking countries in Africa creating new training data and better responses to queries. It’s a painstaking and laborious process that doesn’t get talked about nearly enough.

The workers do individual, disconnected actions all day, every day – so-called “tasks” – working for companies like Remotasks, a subsidiary of Scale AI, which does a huge amount of the work behind OpenAI.

AI doesn’t remove human effort. It just makes it much more alienated.

There’s an obvious hack here. If you are an AI task worker, your goal is to get paid as much as possible without too much effort. So why not use some of the well-known tools for this sort of job? [New York]

Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.

Remember, the important AI use case is getting venture capital funding. Why buy or rent expensive computing when you can just pay people in poor countries to fake it? Many “AI” systems are just a fancier version of the original Mechanical Turk.

Facebook’s M from 2017 was an imitation of Apple’s Siri virtual assistant. The trick was that hard queries would be punted to a human. Over 70% of queries ended up being answered by a human pretending to be the bot. M was shut down a year after launch.

Kaedim is a startup that claims to turn two-dimensional sketches into 3-D models using “machine learning.” The work is actually done entirely by human modelers getting paid $1-$4 per 15-minute job. But then, the founder, Konstantina Psoma, was a Forbes 30 Under 30. [404 Media; Forbes]

The LLM is for spam

OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming. [Vox]

The use case for AI is spam web pages filled with ads. Google considers LLM-based ad landing pages to be spam, but seems unable or unwilling to detect and penalize it. [MIT Technology Review; The Verge]

The use case for AI is spam books on Amazon Kindle. Most are “free” Kindle Unlimited titles earning money through subscriber pageviews rather than outright purchases. [Daily Dot]

The use case for AI is spam news sites for ad revenue. [NewsGuard]

The use case for AI is spam phone calls for automated scamming — using AI to clone people’s voices. [CBS]

The use case for AI is spam Amazon reviews and spam tweets. [Vice]

The use case for AI is spam videos that advertise malware. [DigitalTrends]

The use case for AI is spam sales sites on Etsy. [The Atlantic, archive]

The use case for AI is spam science fiction story submissions. Clarkesworld had to close submissions because of the flood of unusable generated garbage. The robot apocalypse in action. [The Register]

Supertoys last all summer long

End users don’t actually want AI-based products. Machine learning systems can generate funny text and pictures to show your friends on social media. But even that’s wearing thin — users mostly see LLM output in the form of spam.

LLM writing style and image generator drawing style are now seen as signs of low quality work. You can certainly achieve artistic quality with AI manipulation, as in this music video — but even this just works on its novelty value. [YouTube]

For commercial purposes, the only use case for AI is still to replace quality work with cheap ersatz bot output — in the hope of beating down labor costs.

Even then, the AI just isn’t up to the task.

Microsoft put $10 billion into OpenAI. The Bing search engine added AI chat — and it had almost no effect on user numbers. It turns out that search engine users don’t want weird bot responses full of errors. [ZDNet]

The ChatGPT website’s visitor numbers went down 10% in June 2023. LLM text generators don’t deliver commercial results, and novelty only goes so far. [Washington Post]

After GPT-3 came out, OpenAI took three years to make an updated version. GPT-3.5 was released as a stop-gap in October 2022. Then GPT-4 finally came out in March 2023! But GPT-4 turns out to be eight instances of GPT-3 in a trenchcoat. The technology is running out of steam. [blog post; Twitter, archive]

Working at all will be in the next version

The deeper problem is that many AI systems simply don’t work. The 2022 paper “The fallacy of AI functionality” notes that AI systems are often “constructed haphazardly, deployed indiscriminately, and promoted deceptively.”

Still, machine learning systems do some interesting things, a few of which are even genuinely useful. We asked GitHub and they told us that they encourage their own employees to use the GitHub Copilot AI-based autocomplete system for their own internal coding — with due care and attention. We know of other coders who find Copilot to be far less work than doing the boilerplate by hand.

(Though Google has forbidden its coders from using its AI chatbot, Bard, to generate internal code.) [The Register]

Policy-makers and scholars — not just the media — tend to propagate AI hype. Even if they try to be cautious, they may work in terms of ethics of deployment, and presume that the systems do what they’re claimed to do — when they often just don’t.

Ethical considerations come after you’ve checked basic functionality. Always put functionality first. Does the system work? Way too often, it just doesn’t. Test and measure. [arXiv, PDF, 2022]

AI is the new crypto mining

In 2017, the hot buzzword was “blockchain” — because the price of bitcoin was going up. Struggling businesses would add the word “blockchain” to their name or their mission statement, in the hope their stock price would go up. Long Island Iced Tea became Long Blockchain and saw its shares surge 394%. Shares in biotech company Bioptix doubled in price when it changed its name to Riot Blockchain and pivoted to bitcoin mining. [Bloomberg, 2017, archive; Bloomberg, 2017, archive]

The same is now happening with AI. Only it’s not just the venture capitalists — even the crypto miners are pivoting to AI.

Bitcoin crashed last year and crypto mining is screwed. As far as we can work out, the only business plan was to get foolish investors’ money during the bubble, then go bankrupt.

In mid-2024, the bitcoin mining reward will halve again. So the mining companies are desperate to find other sources of income.

Ethereum moved to proof of stake in September 2022 and told its miners to just bugger off. Ethereum was mined on general-purpose video cards — so miners have a glut of slightly-charred number crunching machinery.

Hive Blockchain in Vancouver is pivoting to AI to repurpose its pile of video cards. It’s also changed its name to Hive Digital Technologies. [Bloomberg, archive; press release]

Marathon Digital claims that “over time you’re going to see that blockchain technologies and AI have a very tight coupling.” No, us neither. Marathon is doubling and tripling down on bitcoin mining — but, buzzwords! [Decrypt]

Nvidia makes the highest-performance video cards. The GPU processors on these cards turn out to be useful for massively parallel computations in general — such as running the calculations needed to train machine learning models. Nvidia is having an excellent year and its market cap is over $1 trillion.

So AI can take over from crypto in yet another way — carbon emissions from running all those video cards.

AI’s massive compute load doesn’t just generate carbon — it uses huge amounts of fresh water for cooling. Microsoft’s water usage went up 34% between 2021 and 2022, and they blame AI computation. ChatGPT uses about 500 mL of water every time you have a conversation with it. [AP]

We don’t yet have a Digiconomist of AI carbon emissions. Go start one.

March 28, 2024 at 10:35:34 PM GMT+1

Cybersecurity is broken
https://crankysec.com/blog/broken/

  • Cybersecurity
  • Enshitification

Cybersecurity is broken

27 March 2024

It is a well-known fact that we dish out a whole lot of shit talk around these parts. And by "we" I mean me, but that's beside the point. Talking smack about 10-ply LinkedIn vCISOs is, quite honestly, pretty easy and kind of satisfying because some 8 out of 10 times they are stupid as fuck and deserve the heckling. The remaining 2 out of 10 are maybe trying to fight the good fight, and do right by their teams. Maybe. Don't you quote me on that figure. Actually, best you don't quote me at all because there are peeps out there saying things that are much more clever. Take this quote(?) from one Bob Metcalfe (tks, snowcrasher!):

"The Stockings Were Hung by the Chimney with Care"

The ARPA Computer Network is susceptible to security violations for at least
the three following reasons:

(1) Individual sites, used to physical limitations on machine access, have
not yet taken sufficient precautions toward securing their systems
against unauthorized remote use. For example, many people still use
passwords which are easy to guess: their first names, their initials,
their host name spelled backwards, a string of characters which are
easy to type in sequence (e.g. ZXCVBNM).

(2) The TIP allows access to the ARPANET to a much wider audience than
is thought or intended. TIP phone numbers are posted, like those
scribbled hastily on the walls of phone booths and men's rooms. The
TIP requires no user identification before giving service. Thus,
many people, including those who used to spend their time ripping off
Ma Bell, get access to our stockings in a most anonymous way.

(3) There is lingering affection for the challenge of breaking
someone's system. This affection lingers despite the fact that
everyone knows that it's easy to break systems, even easier to
crash them.

All of this would be quite humorous and cause for raucous eye
winking and elbow nudging, if it weren't for the fact that in
recent weeks at least two major serving hosts were crashed
under suspicious circumstances by people who knew what they
were risking; on yet a third system, the system wheel password
was compromised -- by two high school students in Los Angeles
no less.

We suspect that the number of dangerous security violations is
larger than any of us know, and is growing. You are advised
not to sit "in hope that Saint Nicholas would soon be there".

That's from 1973. The dude who invented Ethernet was worried about what we now call cybersecurity fifty fucking years ago. Several wake-up calls have happened since then: phreaking peeps exploding the phones, hacker supergroups testifying in front of the US Senate on the topic of cybersecurity, hacker supergroups releasing super-easy-to-use RATs, a cornucopia of malware, including shit made by nation-states, and ransomware attacks that are only profitable because some people just decided that an inefficient distributed database was worth some money. A lot of those issues were only made possible by people's insistence on using a programming language from half a century ago when better options are available. And that's just the technical side of things.
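The guessable-password patterns Metcalfe listed in 1973 are so mechanical that a few lines of code can screen for them. A toy sketch (the function name and user fields are hypothetical, purely for illustration):

```python
# Screen for the guessable-password patterns RFC-era memos complained
# about: a user's first name, their initials, the hostname spelled
# backwards, and easy-to-type keyboard runs like ZXCVBNM.
KEYBOARD_ROWS = ("qwertyuiop", "asdfghjkl", "zxcvbnm")

def is_guessable(password: str, first_name: str, initials: str, hostname: str) -> bool:
    p = password.lower()
    if p in {first_name.lower(), initials.lower(), hostname.lower()[::-1]}:
        return True
    # A "keyboard run" here means the password appears verbatim in a row of keys.
    return any(p in row for row in KEYBOARD_ROWS)

print(is_guessable("ZXCVBNM", "Bob", "BM", "arpa"))        # True: keyboard run
print(is_guessable("apra", "Bob", "BM", "arpa"))           # True: hostname reversed
print(is_guessable("correct horse", "Bob", "BM", "arpa"))  # False
```

Fifty years on, the same three checks would still catch an embarrassing share of real passwords.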

Take, for example, Pen Test Partners' research on Peloton's API security. The researchers were able to grab a whole bunch of data that was supposed to be private and disclosed the issue to Peloton, who, in turn, basically ghosted the researchers until a TechCrunch reporter got involved. Classic case of "we're not sorry we suck at protecting our customers' data, we're just sorry we got caught." I mean, if you need to get fucking TechCrunch involved to be taken seriously, the situation is hopeless.

Absolutely no amount of gentle pleas disguised as executive orders from the White House urging people to use memory-safe languages will solve the problem. CISA, despite all the phenomenal work they do, can't charge people who mishandle data with negligence, critical infrastructure involved or not. And maybe they should.

You see, cybersecurity is broken because of the lack of consequences. It's really that simple. When literally nothing happens when some stupid service gets popped and loses data it had no business collecting in the first place, this kind of thing will happen over and over and over again. Why the fuck do you need my home address just so I can copy and paste some GIFs? Because you want to sell this data to data brokers, and you know there will be absolutely no negative consequences if you mishandle it, fucking over the people who keep your business afloat. So, companies big and small fuck things up, and we need to clean up the mess and face the consequences. Sounds about right.

Cybersecurity is even more broken when these companies that face zero consequences look at their payroll and think "Wait a fucking minute! Why the hell are we spending six full dollars a year on cybersecurity when we can, I dunno, do nothing at all for free because cybersecurity incidents will not negatively impact our bottom line whatsoever?" That's why you, my cybersecurity and infosec brethren, are getting laid off. That's why you don't have the tools you need. That's why you don't get the training you should. That's why you're overworked. That's why you're stuck as an underpaid individual contributor doing the work of 5 people for $75k/year while your CISO who makes $500k is on LinkedIn all day writing stupid shit about AI.

Cybersecurity is broken because fixing it benefits no one but the regular, unremarkable, salt of the earth folks. And, according to the powers that be, fuck them folks. Fixing it requires strong data protection laws, but passing laws is just not something the overwhelming majority of legislative bodies in the world do. Passing laws that slightly inconvenience multi-billion dollar corporations while improving the lives of the plebes is even more of a tall order. And that's true for a whole lot of things that have nothing to do with cybersecurity, but this is a blog about cybersecurity, so please bear with me.

That's the answer: data protection laws. You get my data for rEaSoNs, and you fuck it up? You should pay a hefty price that cannot be written off as the cost of doing business. You make data brokers illegal, or, at the very least, way less profitable. You do what the payment card industry has been doing for decades: you tell everyone handling your data that they have to follow a very comprehensive set of data protection rules, lest they get fined or cut off entirely. A group of four credit card companies can do that, so I'm sure mighty governments can, too.

But how do we push things in the right direction? Well, that's one of the many topics we discuss in our Discord server (Hey you guys!). Not only are my fellow Crankies inspiring the shit out of me every day, we have bigger plans than just shitposting and commiserating. Turns out that buying a congressperson (sorry: lobbying) is not that expensive, really. We are working on something that we hope will help lift everyone in this industry up. As I once wrote on that very Discord: "When we abstain from using our collective power of influence, we lose by default." Or "you miss 100% of the shots you don't take" or whatever the fuck Gretzky said. We're about 700-strong and planning on doing great things. Come join us because the movement cannot be stopped.


Permalink
March 28, 2024 at 10:32:11 PM GMT+1

Losing the imitation gamehttps://jenniferplusplus.com/losing-the-imitation-game/

  • Artificial intelligence

Losing the imitation game

AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.

If you've been anywhere near major news or social media in the last few months, you've probably heard repeatedly about so-called AI, ChatGPT, and large language models (LLMs). The hype surrounding these topics has been intense. And the rhetoric has been manipulative, to say the least. Proponents have claimed that their models are or soon will be generally intelligent, in the way we mean humans are intelligent. They're not. They've claimed that their AI will eliminate whole categories of jobs. And they've claimed that developing these systems further and faster is both necessary and urgent, justified by science fiction dressed up as arguments for some sort of "safety" that I find to be incoherent.

The outer layer of hype surrounding AI—and LLM chatbots in particular—is that they will become indispensable tools of daily work, and entirely replace people in numerous categories of jobs. These claims have included the fields of medicine, law, and education, among others. I think it's nonsense. They imagine self-teaching classrooms and self-diagnosing fitness gadgets. These things will probably not even work as well as self-driving cars, which is to say: only well enough to be dangerous. Of course, that's not stopping people from pushing these fantasies, anyway. But these fields are not my area of expertise. My expertise is in software engineering. We should know better, but software developers are falling victim to the same kind of AI fantasies.

A computer can never be held accountable. Therefore, a computer must never make a management decision.

While the capabilities are fantasy, the dangers are real. These tools have denied people jobs, housing, and welfare. All erroneously. They have denied people bail and parole, in such a racist way it would be comical if it wasn't real. And the actual function of AI in all of these situations is to obscure liability for the harm these decisions cause.

So-Called AI

Artificial Intelligence is an unhelpful term. It serves as a vehicle for people's invalid assumptions. It hand-waves an enormous amount of complexity regarding what "intelligence" even is or means. And it encourages people to confuse concepts like cognition, agency, autonomy, sentience, consciousness, and a host of related ideas. However, AI is the vernacular term for this whole concept, so it's the one I'll use. I'll let other people push that boulder; I'm here to push a different one.

Those concepts are not simple ideas, either. Describing them gets into hard questions of psychology, neurology, anthropology, and philosophy. At least. Given that these are domains that the tech field has routinely dismissed as unimportant for decades, maybe it shouldn't be surprising that techies as a group are now completely unprepared to take a critical view of claims about AI.

The Turing Test

Certainly part of how we got here is the Turing test. That is, the pop science reduction of Alan Turing's imitation game. The actual proposal is more substantial. And taking it seriously produces some interesting reading. But the common notion is something like a computer is intelligent if it can reliably pass as human in conversation. I hope seeing it spelled out like that makes it clear how dramatically that overreaches. Still, it's the framework that people have, and it informs our situation. I think the bit that is particularly informative is the focus on natural, conversational language. And also, the deception inherent in the imitation game scenario, but I'll come back to that.

Our understanding of intelligence is a moving target. We only have one meaningful fixed point to work from. We assert that humans are intelligent. Whether anything else is, is not certain. What intelligence itself is, is not certain. Not too long ago, a lot of theory rested on our ability to create and use tools. But then that ability turned out to be not as rare as we thought, and the consensus about the boundaries of intelligence shifted. Lately, it has fallen to our use of abstract language. That brings us back to AI chatbots. We suddenly find ourselves confronted with machines that seem to have a command of the English language that rivals our own. This is unfamiliar territory, and at some level it's reasonable that people will reach for explanations and come up with pop science notions like the Turing test.

Language: any system of formalized symbols, signs, sounds, gestures, or the like used or conceived as a means of communicating thought, emotion, etc.

Language Models

ChatGPT and the like are powered by large language models. Linguistics is certainly an interesting field, and we can learn a lot about ourselves and each other by studying it. But language itself is probably less than you think it is. Language is not comprehension, for example. It's not feeling, or intent, or awareness. It's just a system for communication. Our common lived experience teaches us that anything which can respond to and produce common language in a sensible-enough way must be intelligent. But that's because only other people have ever been able to do that before. It's actually an incredible leap to assume, based on nothing else, that a machine which does the same thing is also intelligent. It's much more reasonable to question whether the link we assume exists between language and intelligence actually exists. Certainly, we should wonder if the two are as tightly coupled as we thought.

That coupling seems even more improbable when you consider what a language model does, and—more importantly—doesn't consist of. A language model is a statistical model of probability relationships between linguistic tokens. It's not quite this simple, but those tokens can be thought of as words. They might also be multi-word constructs, like names or idioms. You might find "raining cats and dogs" in a large language model, for instance. But you also might not. The model might reproduce that idiom based on probability factors instead. The relationships between these tokens span a large number of parameters. In fact, that's much of what's being referenced when we call a model large. Those parameters represent grammar rules, stylistic patterns, and literally millions of other things.

What those parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is overstating the case, because it doesn't know things. It models how those tokens are used. When the model works on a token like "Jennifer", there are parameters and classifications that capture what we would recognize as things like the fact that it's a name, it has a degree of formality, it's feminine coded, it's common, and so on. But the model doesn't know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.
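To make that concrete, here is a toy "language model" reduced to bare bigram counts. It is orders of magnitude simpler than a real LLM, but it illustrates the point: everything it stores is usage statistics, and nothing anywhere knows what a token means. (The corpus and function names are invented for illustration.)

```python
from collections import Counter, defaultdict

def train(tokens):
    # Count which token follows which. That's the entire "model":
    # probability relationships between tokens, nothing more.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(model, token):
    # "Prediction" is just picking the most frequent follower.
    followers = model.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "it was raining cats and dogs and cats".split()
model = train(corpus)
print(most_likely_next(model, "cats"))  # "and" — frequency, not comprehension
```

Scale the same idea up to billions of parameters over subword tokens and you have the skeleton of an LLM: a model of how tokens are used, with nothing in it that knows what they mean.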

Mental Models

So, a language model can reproduce patterns of language. And there's no particular reason it would need to be constrained to natural, conversational language, either. Anything that's included in the set of training data is fair game. And it turns out that there's been a lot of digital ink spent on writing software and talking about writing software. Which means those linguistic patterns and relationships can be captured and modeled just like any other. And sure, there are some programming tasks where just a probabilistic assembly of linguistic tokens will produce a result you want. If you prompt ChatGPT to write a python function that fetches a file from S3 and records something about it in DynamoDB, I would bet that it just does, and that the result basically works. But then, if you prompt ChatGPT to write an authorization rule for a new role in your application's proprietary RBAC system, I bet that it again just does, and that the result is useless, or worse.

Programming as Theory Building

Non-trivial software changes over time. The requirements evolve, flaws need to be corrected, the world itself changes and violates assumptions we made in the past, or it just takes longer than one working session to finish. And all the while, that software is running in the real world. All of the design choices taken and not taken throughout development; all of the tradeoffs; all of the assumptions; all of the expected and unexpected situations the software encounters form a hugely complex system that includes both the software itself and the people building it. And that system is continuously changing.

The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.

To circle back to AI like ChatGPT, recall what it actually does and doesn't do. It doesn't know things. It doesn't learn, or understand, or reason about things. What it does is probabilistically generate text in response to a prompt. That can work well enough if the context you need to describe the goal is so simple that you can write it down and include it with the prompt. But that's a very small class of essentially trivial problems. What's worse, there's no clear boundary between the software development problems that are trivial enough for an LLM to help with and the ones that aren't. The LLM doesn't know the difference, either. In fact, the LLM doesn't know the difference between being tasked to write javascript or a haiku, beyond the different parameters each prompt would activate. And it will readily do a bad job of responding to either prompt, with no notion that there even is such a thing as a good or bad response.

Software development is complex, for any non-trivial project. But complexity is hard. Overwhelmingly, when we in the software field talk about developing software, we've dealt with that complexity by ignoring it. We write code samples that fit in a tweet. We reduce interviews to trivia challenges about algorithmic minutiae. When we're feeling really ambitious, we break out the todo app. These are contrivances that we make to collapse technical discussions into an amount of context that we can share in the few minutes we have available. But there seem to be a lot of people who either don't understand that or choose to ignore it. They frame the entire process of software development as being equivalent to writing the toy problems and code samples we use among general audiences.

Automating the Easy Part

The intersection of AI hype with that elision of complexity seems to have produced a kind of AI booster fanboy, and they're making personal brands out of convincing people to use AI to automate programming. This is an incredibly bad idea. The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code. They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it. No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do. Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.

[A programmer's] education has to emphasize the exercise of theory building, side by side with the acquisition of knowledge of data processing and notations.

But it's worse than AI being merely inadequate for software development. Developing that mental model requires learning about the system. We do that by exploring it. We have to interact with it. We manipulate and change the system, then observe how it responds. We do that by performing the easy, simple programming tasks. Delegating that learning work to machines is the tech equivalent of eating our seed corn. That holds true beyond the scope of any team, or project, or even company. Building those mental models is itself a skill that has to be learned. We do that by doing it; there's not another way. As people, and as a profession, we need the early career jobs so that we can learn how to do the later career ones. Giving those learning opportunities to computers instead of people is profoundly myopic.

Imitation Game

If this is the first time you're hearing or reading these sentiments, that's not too surprising. The marketing hype surrounding AI in recent months has been intense, pervasive, and deceptive. AI is usually cast as being hyper competent, and superhuman. To hear the capitalists who are developing it, AI is powerful, mysterious, dangerous, and inevitable. In reality, it's almost none of those things. I'll grant that AI can be dangerous, but not for the reasons they claim. AI is complicated and misunderstood, and this is by design. They cloak it in rhetoric that's reminiscent of the development of atomic weapons, and they literally treat the research like an arms race.

I'm sure there are many reasons they do this. But one of the effects it has is to obscure the very mundane, serious, and real harms that their AI models are currently perpetuating. Moderating the output of these models depends on armies of low paid and precariously employed human reviewers, mostly in Kenya. They're subjected to the raw, unfiltered linguistic sewage that is the result of training a language model on uncurated text found on the public internet. If ChatGPT doesn't wantonly repeat the very worst of the things you can find on reddit, 4chan, or kiwi farms, that is because it's being dumped on Kenyan gig workers instead.

That's all to say nothing of the violations of intellectual property and basic consent that were required to train the models in the first place. The scale of the theft and exploitation required to build the data sets these models train with is almost inconceivable. And the energy consumption and e-waste produced by these systems is staggering.

All of this is done to automate the creation of writing or media that is designed to deceive people. It's intended to seem like people, or like work done by people. The deception, from both the creators and the AI models themselves, is pervasive. There may be real, productive uses for these kinds of tools. There may be ways to build and deploy them ethically and sustainably. But that's not the situation with the instances we have. AI, as it's been built today, is a tool to sell out our collective futures in order to enrich already wealthy people. They like to frame it as being akin to nuclear science. But we should really see it as being more like fossil fuels.

Permalink
March 5, 2024 at 11:13:32 PM GMT+1

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet - ABC Newshttps://www.abc.net.au/news/science/2024-02-28/twitter-x-fighting-bot-problem-as-ai-spam-floods-the-internet/103498070

  • Social Network
  • Artificial intelligence
  • Societal Collapse

Twitter is becoming a 'ghost town' of bots as AI-generated spam content floods the internet

ABC Science / By technology reporter James Purtill

Parts of the web are now dominated by bots and junk websites designed to go unread by humans.

One morning in January this year, marine scientist Terry Hughes opened X (formerly Twitter) and searched for tweets about the Great Barrier Reef.

"I keep an eye on what's being tweeted about the reef every day," Professor Hughes, a leading coral researcher at James Cook University, said.

What he found that day surprised and confused him: hundreds of bot accounts tweeting the same strange message with slightly different wording.

"Wow, I had no idea that agricultural runoff could have such a devastating impact on the Great Barrier Reef," one account, which otherwise spruiked cryptocurrencies, tweeted.

Another crypto bot wrote: "Wow, it's disheartening to hear about the water pollution challenges Australia faces."

And so on. Hundreds of crypto accounts tweeting about agricultural runoff.

A month later, it happened again. This time, bots were tweeting about "marine debris" threatening the Great Barrier Reef.

What was going on?

When Professor Hughes tweeted what he'd found, some saw a disinformation conspiracy, an attempt to deflect attention from climate change.

The likely answer, however, is more mundane, but also more far-reaching in its implications.

More than a year since Elon Musk bought X with promises to get rid of the bots, the problem is worse than ever, experts say.

And this is one example of a broader problem affecting online spaces.

The internet is filling up with "zombie content" designed to game algorithms and scam humans.

It's becoming a place where bots talk to bots, and search engines crawl a lonely expanse of pages written by artificial intelligence (AI).

Junk websites clog up Google search results. Amazon is awash with nonsense e-books. YouTube has a spam problem.

And this is just a trickle in advance of what's been called the "great AI flood".

Bots liking bots, talking to other bots

But first, let's get back to those reef-tweetin' bots.

Timothy Graham, an expert on X bot networks at the Queensland University of Technology, ran the tweets through a series of bot and AI detectors.

Dr Graham found 100 per cent of the text was AI-generated.

"Overall, it appears to be a crypto bot network using AI to generate its content," he said.

"I suspect that at this stage it's just trying to recruit followers and write content that will age the fake accounts long enough to sell them or use them for another purpose."

That is, the bots probably weren't being directed to tweet about the reef in order to sway public opinion.

Dr Graham suspects these particular bots probably have no human oversight, but are carrying out automated routines intended to out-fox the bot-detection algorithms.

Searching for meaning in their babble was often pointless, he said.

"[Professor Hughes] is trying to interpret it and is quite right to try and make sense of it, but it just chews up attention, and the more engagement they get, the more they are rewarded."

The cacophony of bot-talk degrades the quality of online conversations, interrupting humans and wasting their time.

"Here's someone who is the foremost research scientist in this space, spending their time trying to work out the modus operandi of these accounts."

In this case, the bots were replying to the tweet of another bot, which, in turn, replied to the tweets of other bots, and so on.

One fake bot account was stacked on top of the other, Dr Graham said.

"It's AI bots all the way down."

How bad is X's bot problem?

In January, a ChatGPT glitch appeared to shine a light on X's bot problem.

For a brief time, some X accounts posted ChatGPT's generic response to requests that it deems outside of its content policy, exposing them as bots that use ChatGPT to generate content.

Users posted videos showing scrolling feeds with numerous accounts stating "I'm sorry, but I cannot provide a response to your request as it goes against OpenAI's content policy."

"Twitter is a ghost town," one user wrote.

But the true scale of X's bot problem is difficult for outsiders to estimate.

Shortly after Mr Musk gained control of X while complaining about bots, X shut down free access to the programming interface that allowed researchers to study this problem.

That left researchers with two options: pay X for access to its data or find another way to peek inside.

Towards the end of last year, Dr Graham and his colleagues at QUT paid X $7,800 from a grant fund to analyse 1 million tweets surrounding the first Republican primary debate.

They found the bot problem was worse than ever, Dr Graham said at the time.

Later studies support this conclusion. Over three days in February, cybersecurity firm CHEQ tracked the proportion of bot traffic from X to its clients' websites.

It found three-quarters of traffic from X was fake, compared to less than 3 per cent of traffic from each of TikTok, Facebook and Instagram.

"Terry Hughes' experience is an example of what's going on on the platform," Dr Graham said.

"One in 10 likes are from a porn bot, anecdotally."

The rise of a bot-making industry

So what's the point of all these bots? What are they doing?

Crypto bots drive up demand for certain coins, porn bots get users to pay for porn websites, disinformation bots peddle fake news, astroturfing bots give the impression of public support, and so on.

Some bots exist purely to increase the follower counts and engagement statistics of paying customers.

A sign of the scale of X's bot problem is the thriving industry in bot-making.

Bot makers from around the world advertise their services on freelancer websites.

Awais Yousaf, a computer scientist in Pakistan, sells "ChatGPT Twitter bots" for $30 to $500, depending on their complexity.

In an interview with the ABC, the 27-year-old from Gujranwala said he could make a "fully fledged" bot that could "like comments on your behalf, make comments, reply to DMs, or even make engaging content according to your specification".

Mr Yousaf's career tracks the rise of the bot-making economy and successive cycles of internet hype.

Having graduated from university five years ago, he joined Pakistan's growing community of IT freelancers from "very poor backgrounds".

Many of the first customers wanted bots to promote cryptocurrencies, which were booming in popularity at the time.

"Then came the NFT thing," he said.

A few years ago he heard about OpenAI's GPT-3 language model and took a three-month break to learn about AI.

"Now, almost 90 per cent of the bots I do currently are related to AI in one way or another.

"It can be as simple as people posting AI posts regarding fitness, regarding motivational ideas, or even cryptocurrency predictions."

In five years he's made 120 Twitter bots.

Asked about Mr Musk's promise to "defeat the spam bots," Mr Yousaf smiled.

"It's hard to remove Twitter bots from Twitter because Twitter is mostly bot."

AI-generated spam sites may overwhelm search engines

X's bot problem may be worse than other major platforms, but it's not alone.

A growing "deluge" of AI content is flooding platforms that were "never designed for a world where machines can talk with people convincingly", Dr Graham said.

"It's like you're running a farm and had never heard of a wolf before and then suddenly you have new predators on the scene.

"The platforms have no infrastructure in place. The gates are open."

The past few months have seen several examples of this.

Companies are using AI to rewrite other media outlets' stories, including the ABC's, and then publish them on their own competing news websites.

A company called Byword claims it stole 3.6 million in "total traffic" from a competitor by copying their site and rewriting 1,800 articles using AI.

"Obituary pirates" are using AI to create YouTube videos of people summarising the obituaries of strangers, sometimes fabricating details about their deaths, in order to capture search traffic.

Authors are reporting what appear to be AI-generated imitations and summaries of their books on Amazon.

Google's search results are getting worse due to spam sites, according to a recent pre-print study by German researchers.

The researchers studied search results for thousands of product-review terms across Google, Bing and DuckDuckGo over the course of a year.

They found that higher-ranked pages tended to have lower text quality but were better designed to game the search ranking algorithm.

"Search engines seem to lose the cat-and-mouse game that is SEO spam," they wrote in the study.

Co-author Matti Wiegman, from Bauhaus University Weimar, said this rankings war was likely to get much worse with the advent of AI-generated spam.

"What was previously low-quality content is now very difficult to distinguish from high-quality content," he said.

"As a result, it might become difficult to distinguish between authentic and trustworthy content that is useful and content that is not."

He added that the long-term effects of AI-generated content on search engines were difficult to judge.

AI-generated content could make search more useful, he said.

"One possible direction is that generated content will become better than the low-quality human-made content that dominates some genres in web search, in which case the search utility will increase."

Or the opposite will happen. AI-generated content will overwhelm "vulnerable spaces" such as search engines and "broadcasting-style" social media platforms like X.

In their place, people may turn to "walled gardens" and specialised forums with smaller numbers of human-only members.

Platforms prepare for coming flood

In response to this emerging problem, platforms are trialling different strategies.

Meta recently announced it was building tools to detect and label AI-generated images posted on its Facebook, Instagram and Threads services.

Amazon has limited authors to uploading a maximum of three books to its store each day, although authors say that hasn't solved the problem.

X is trialling a "Not a Bot" program in some countries where it charges new users $1 per year for basic features.

This program operates alongside X's verification system, where users pay $8 per month to have their identity checked and receive a blue tick.

But it appears the bot-makers have found a way around this.

All the reef-tweeting crypto bots Professor Hughes found were verified accounts.

"It's clutter on the platform that's not necessary. You'd wish they'd clean it up," the coral scientist said.

"It wastes everyone's time."

Permalink
March 5, 2024 at 11:06:38 PM GMT+1

A call to dismantle artificial intelligence https://polaris.imag.fr/romain.couillet/docs/articles/IA_dellusion.pdf

  • Artificial intelligence
  • Societal Collapse
  • Technopaganism
  • Technosolutionism

A call to dismantle artificial intelligence

Romain Couillet, 22 July 2022

As a professor of computer science and, until recently, a researcher in applied mathematics for artificial intelligence, I was recently asked to sit on the defence jury for the final-year project of a master's student in computer science at Université Grenoble-Alpes.

The student motivated his project by the need to address the following twofold problem: on the one hand, the company where he was doing his internship could no longer recruit experts in electronic circuit design; on the other, the shortage of metals imposes growing constraints on the size (and therefore the quantity of material used) of those same electronic circuits.

Faced with these challenges, the company proposed developing an artificial-intelligence algorithm capable of replacing the lost expertise (and potentially doing better).
Without going into the conceptual details of the student's study, it became clear fairly early in the presentation that the proposed approach could not work, and that it was in fact quite presumptuous to imagine that an algorithm could perform the desired task.

The outcome of the first four months of the internship was therefore unsurprising: as things stood, from the student's point of view, the method did not yet work, but the work was promising. An altogether classic situation, one I encounter more and more frequently these last few years. But the salient point of this anecdote lies elsewhere: during the jury's question session, I asked the student whether, after these first four months, clearly fruitless whatever he might say, he had considered the hypothesis that artificial intelligence might simply not be able to solve his problem.

Despite the eminently rhetorical nature of the question, his answer was extremely revealing: "no, I am sure it will work." Everything is there. That single statement, in my view, highlights the most harmful danger of artificial intelligence: the religious, blinding character it has acquired in barely ten years.

Foreword: a palaeosophical detour

In her work on Palaeosophy, Catherine Reine Thomas invites us to rethink Western cosmology, the one that gave rise to the first agricultural civilisations ten thousand years ago and became the society in which we Europeans live, as an imbalance in the relationship between "Life" and "Technique".

One of the singularities of the genus Homo, and of Homo sapiens in particular, compared with other animals is its capacity and desire to develop tools and know-how. Together these tools generate a spectrum of technologies that feeds a non-living yet very real and dynamic entity that Thomas calls Technique.

The animality of Homo sapiens would nevertheless require, just like other animals, that it retain control over its environment, a power to act (to borrow Spinoza's term) that Thomas calls Life.

The entities Technique and Life thus enter into an inner competition whose balance is maintained by artistic activity: Homo sapiens is not the slave of a technology as long as the tool or technique stays anchored in the field of Life. Drawings and sculptures (from the earliest cave paintings and decorative objects onwards) are so many living expressions of techniques and know-how also used, though not exclusively, for the needs of tools.

This stable duality between Life and Technique is said to ensure a healthy and serene link between Homo sapiens and its environment. In his ethnography of the Achuar people of Amazonia, who maintain an animist relationship to their surroundings [^1], Descola reports that the Achuar work only four to five hours a day (in the fields, hunting, crafting) and devote most of their time to songs, the making of ornaments, spiritual practices and other artistic activities.

According to Thomas, Technique, seen as a non-living entity symbiotic with the living Homo sapiens, can become parasitic when it overtakes Life: that is, when artistic control by Homo sapiens disappears in favour of uncontrolled technological development. Technique then feeds on the metabolic energy of Homo sapiens and becomes autonomous, like the robots of Isaac Asimov's dystopian universe, barely held in check by the three laws of robotics.

This palaeosophical angle through the Life-Technique duality is interesting in several respects: on the one hand, it rejects the fatalism, widely conveyed by the bestsellers Sapiens by Harari and Collapse by Diamond, of a predatory human species doomed to self-destruction. The human species would instead be under the yoke of Technique, which cultivates Homo sapiens the way termites cultivate the fungi that digest wood cellulose for them.

On the other hand, it makes it possible to explain the inexplicable: why does Homo sapiens, despite the scientific evidence, the existing technological solutions (permaculture, low-tech tooling, commons) and the available neuropsychological and social knowledge (a broken link to the living world, the benefits of ecopsychological practices, evidence of a mutual aid now lost but engraved in our DNA), fail to rid itself of its deadly technologies and of its aggressive, colonial and extractivist behaviour towards the living world? The palaeosophical analysis also resolves the locked triangle of responsibilities between citizen, government and business, whereby citizens blame government inaction and corporate venality, while companies have no choice but to comply with government constraints and are not responsible for citizens' compulsive needs, and governments defend their actions as consequences of the citizens' vote and of economic pressure from companies.

In short, everyone is responsible and no one is. Here Catherine Reine Thomas would surely argue that Technique, ignored in this triptych, in fact bears the weight of the chain of responsibilities: it subjugates the citizen in the fulfilment of their technical needs, sustains the company whose only raison d'être is to perpetuate the technological supply, and neutralises the government in its need for technology-driven economic growth.

It is hard not to see here a terrible analogy between Technique become the parasite of a humanity on the edge of chaos and Ophiocordyceps unilateralis, the fungus that penetrates the bodies of ants, drives them to climb as high as possible up the stem of a fleshy plant, then to clamp on firmly with their mandibles and be devoured from within by the fungus (which can then develop in optimal conditions and spread its spores efficiently).

For the accelerated development of tools and technologies has broken the Technique-Life balance, generating destructive consequences with exponential dynamics: intensive agriculture whose tools and resources are now beyond farmers' control (unrepairable machines, debt levels that force productivity through synthetic fertilisers and pesticides), resource demands and conflicts (untenable population growth, wars, colonisations, genocides and ecocides), a fractured link to the living world (urbanisation and soil artificialisation, mineral extractivism, cybernetisation, naturalist ontology [^2]), physical and psychological dehumanisation (diseases of civilisation, epidemics, loss of meaning, neuropsychological disorders, depression, identity and social fractures).

The Technique fungus devours our humanity from within, through the inexorable extinction of the living world as a whole. Here we find the many ways the tool exceeds conviviality, in Ivan Illich's terminology [^3]: beyond a certain threshold of complexity, the tool escapes human control and turns from a technical means into an end in itself. In the worst case, this end in itself mutates into what Illich calls a radical monopoly, which reshapes society around the tool: it is no longer possible, for example, to live without trucks to transport food, without computing to manage logistics chains or financial transfers, without vaccines to absorb the consequences of overpopulation, and so on.

Technique has thus become a religion, lubricated by the fetish of money, imposing a techno-solutionist creed on some eight billion Homo sapiens deprived of the capacity to control technologies (absolutely no one can build or repair any modern digital technology alone), many of whom have lost the balancing engine of Life (loss of meaning for Westerners, colonial subjection for the inhabitants of the countries of the South [^4]).

For want of mastering the tool, Homo sapiens, now dependent on technologies and on the fossil energy that feeds them (as Jean-Baptiste Fressoz explains in L'événement Anthropocène, we can no longer live without oil), religiously entrusts itself to the maintenance, upkeep and development of a technical system that is paradoxically absent from political debate (the usefulness of technologies is rarely questioned) even though it now weighs five times as much as the living world [^5].

The details of this short palaeosophical analysis are certainly somewhat more complex and deserve a moment's attention. Modern technical production is indeed carried out by engineers and researchers who are, for the most part, perfectly anchored in the Technique-Life balance of the passionate tinkerer of equations or the insatiable handler of test tubes.

But both live, just like the marketing expert or the company director, in so many ivory towers impervious to complex systemic considerations (will the tool whose development I play a small part in end up in a product potentially harmful to society?), considerations it is supposedly wise to ignore, since the factual scientific work of the technical expert grants them neither the legitimacy nor even the access required for such reflexive prerogatives. [^6]

This is how the researchers in my own laboratory, people whose extreme mathematical intelligence and entire careers I admire, find themselves unable to break with a field whose harmfulness they know and acknowledge. To recover a semblance of meaning, we paradoxically invoke our illegitimacy or inability to deal with environmental questions ("I prefer to leave that to the experts", as if such experts really existed) or the fatality of a locked system ("our team works in this field, we can't just do whatever we like either").

This feeling of imprisonment leads to properly delusional paradoxes, as with researchers who take refuge in research deliberately pushed to a theoretical extreme that guarantees it will never feed the deadly industry. Ultimately, modern society secures the domination of Technique over Life, paradoxically, through the work of a minority elite who do manage, sometimes despite strong cognitive dissonance, to maintain their own Technique-Life balance, sustained by the flame of the Spinozist power to act [^7] (and/or by a fetishistic attraction to selfish domination and money [^8]).

The religion of artificial intelligence

The particular case of artificial intelligence perfectly illustrates my point. Following a few magical feats in computer vision and natural language processing, together with the victory, until then considered impossible, of the AlphaGo algorithm over the world Go champion, AI, and more precisely deep neural networks, are now seen as an Eldorado, a Swiss-army knife able to solve every problem on which humans break their teeth or that they lack the computational or cognitive capacity to handle.

But since no mathematical theory is able to crack the mysteries of deep neural networks [^9], produced by hours of purely computational and highly random processing (two runs of the same algorithm will yield two utterly distinct neural networks), the technical tool "AI" escapes human control, including expert control. Thus arise situations as absurd as the one described at the start of this article, situations now becoming widespread: students trained in artificial intelligence have no control over their own tool.

The teaching of AI, moreover, tends to reinforce the illusion, the magic of an omnipotent machine that requires little effort (no deep mathematical theory is needed, and flexible software interfaces such as TensorFlow make it possible to become autonomous within a few hours).

The citizen-government-industry triptych worsens the problem at its own expense: to keep up with the GAFAM, the French government recently launched a "Compétences et Métiers d'Avenir" call for the massification of AI teaching, whose objective is to double the training on offer to post-secondary students. While it is not incoherent to want to adapt university curricula to newly acquired knowledge, it is important to remember that doubling the AI offering arithmetically implies the disappearance of other courses, those henceforth deemed obsolete or less useful.

It is in this context that electronic circuit designers have disappeared, and that the crutch of the "Swiss-army-knife AI" mentioned at the start of this article tries, very clumsily, to answer an ill-posed problem [^10]. AI thus indirectly drains elementary knowledge and know-how, imposing in effect a radical monopoly through which the digital tool carrying AI algorithms takes over decisions previously made by humans.

And this with no possible human control, whether the human is ignorant, a novice or even a computing expert. The choice of populations, governments and companies to invest massively in AI is the blue pill offered to Neo in The Matrix: a point of no return into the illusion of an idealised world controlled by machines capable of solving everything, among other things the so-called climate crisis (which, to really grasp the stakes, would more wisely be named mass extinction or collapse of the living world).

AI also adds a new stone to the edifice of the stupefaction of the population, denounced by Stiegler in La télécratie contre la Démocratie and by Brighelli in La fabrique du crétin, which allows everyone (in Western society at least) to live in a world apparently free of constraints, of any need to know or even to understand, and free of conflict, a world whose resources are still assumed infinite by the neoclassical economics [^11] on which all Western governments rely. The greatest danger of AI then becomes obvious: by reinforcing the techno-solutionist promise towards a population deprived of its knowledge and know-how, a deprivation reinforced by total dependence on machines and on uncontrollable automatic decisions, AI further masks the reality of a system of unsustainable technologies on the brink of collapse.

These technologies, which Monnin in Héritage et fermeture calls zombies, because they are apparently alive (they work today and give the impression they can keep developing tomorrow) but effectively dead (owing to material depletion, the coming peaks in energy resources, and also their untenable environmental impact, which will force strong regulation in the medium term), are bound to disappear.

In the case of AI, its disappearance will surely come about through the impossibility of renewing the digital infrastructure and the unsustainability of its energy cost. Without anticipation, all the automated machines that today substitute for originally human know-how will no longer be powered or repaired, jeopardising certain essential activities. This is notably the case of the agricultural machines that Industry 4.0 promises to make more "intelligent".

Atelier Paysan, in its manifesto Reprendre la terre aux machines, warns precisely on this point: the number of farmers feeding a growing population keeps shrinking [^12] in favour of automated machines (driverless tractors, drones, plant-handling machines), destroying human know-how (farmers are no longer in contact with the soil) and undermining resilience to the coming oil and energy shocks. The example of Cuba, documented in the video Comment Cuba survécut en 1990 sans pétrole, whose population had to transition essentially overnight from an "American-style" life to an unprepared return to the land, deprived as it was of the elementary commons, allows us to anticipate, on a scale that will likely be incomparable, the consequences of the coming global energy and material shortages, in an ecosystem that is, moreover, in accelerated extinction.

Dismantling AI

In Héritage et fermeture, Monnin proposes a theory of the dismantling of zombie technologies as a necessary articulation for the resilience of a society constrained in energy and resources and forced to adapt to the many systemic consequences of the ongoing ecocide (starting with adaptation to global warming). Besides being a properly anti-resilience technology with particularly harmful societal consequences, artificial intelligence is a perfect candidate for initiating technological dismantling and rebalancing the Life-Technique scales.

Indeed, as a top and still recent brick of the digital fortress, the prospect of abandoning artificial intelligence as a tool constitutes both a credible imaginary (not long ago we lived without it) and an accessible objective (it is "just" a matter of wiping the slate clean of ten years of research and industry in the field). In the analysis carried out by the lownum group [^13] on the so-called low-technicisation of the digital sphere, a first step towards dismantling consists of identifying and weighing the necessary (or not) character of the digital tool against the necessary (or not) character of the service it fulfils. In particular, because of its criticality and the resilience risk it induces, any digital tool that is necessary to a necessary (or even vital) need must be dismantled as soon as possible.

This is the case, for example, of Industry 4.0 in agriculture, which, bound to dispossess farmers turned digital technicians of their very mastery of farming, must be quickly decommissioned. The example of Cuba mentioned above likewise shows that intensifying the production of artificial inputs (which requires intensive use of gas) to replace natural nitrates, phosphates and potassium (human and other animal excretions, biomass composts) will cause mass famines should the energy supply be disrupted. These artificial inputs, whose ecological consequences are moreover disastrous, create, just like agriculture 4.0, a high resilience risk.

The zombie technologies of artificial intelligence tend to seize industrial sectors tied to vital needs, or at least to the maintenance of societal balance. This is notably the case of goods transport by autonomous vehicles or drones. A massive roll-out of such a logistical transition would, for obvious economic reasons, be able to impose a radical monopoly over supply chains, food supply in particular. Any sudden rupture of these chains (a shortage of rare metals, a cyber-attack) could produce famines if the fleet of non-autonomous vehicles and the number of drivers still available (the expertise being quickly lost) no longer cover distribution needs. Fortunately, as a still immature technology, autonomous driving is an easily dismantled tool.

The core of artificial intelligence tooling, however, is so far not devoted to indispensable needs. The vast majority of AI applications (by volume of use) concern automatic email sorting, search engines, machine translation, face and object recognition, and so on. For these non-essential uses, the societal relevance of the need should be weighed against the energy intensity and environmental footprint induced by the tool's entire life cycle (research, development, commercialisation, use, end of life).

In this regard, the research, development and usage phases are not neutral. Energy intensity, the easiest to assess and then straightforward to convert into a carbon-equivalent balance, has recently been documented [^14]: the research and development of deep neural networks comes with very high electricity consumption, as illustrated by the tens of thousands of GPU cores needed to train the most advanced deep networks for a month, at an estimated cost of several hundred tonnes of CO2 equivalent (recall that the total annual consumption of a French person is 10 tonnes of CO2 equivalent).
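As a back-of-the-envelope check of these orders of magnitude, a minimal sketch (the 300-tonne figure is an illustrative value within the "several hundred tonnes" range cited above, not a measurement):

```python
# Compare one large training run against individual footprints,
# using the article's rough figures.
TRAINING_COST_TCO2E = 300    # illustrative value in "several hundred tonnes"
ANNUAL_FOOTPRINT_TCO2E = 10  # per French resident per year, per the article

person_years = TRAINING_COST_TCO2E / ANNUAL_FOOTPRINT_TCO2E
print(f"One training run ~ {person_years:.0f} person-years of emissions")
```

That is, a single month-long training of a frontier network would correspond to roughly three decades of one person's total emissions, on these assumptions.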

Assessing AI's impact through the carbon balance alone is nonetheless as reductive as it is dangerous (since it encourages a headlong rush into developing new, more "frugal" technologies, prototypical of the oxymoronic "green growth"): the explosive development of AI comes at the price of the frantic production of powerful servers heavy in energy and rare metals, sometimes accompanied by the on-site construction of dedicated power plants, and above all feeds the material production of numerous new products (notably the billions of Internet-of-Things devices) that penetrate essentially every economic sector and manufacture new needs that quickly become necessities. The bottom line is an annual increase of around 9% for the digital sector as a whole, accompanied in particular by a 100% increase every 3.4 months in the computational intensity required to train deep neural networks [^15]. Faced with the environmental emergency and the need, moreover, for fossil energy consumption (stuck at 85% of the global energy mix for 30 years despite the massive development of so-called renewables) to decline by 7% per year, these explosions in digital, and especially AI, consumption are untenable. In this context, the societally meagre uses of AI are particularly indefensible.
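The growth figures above can be put on a common annual footing; a minimal sketch, assuming the rates the article cites (the conversion itself is ours):

```python
# Convert the cited growth rates into comparable annual factors.
# "100% every 3.4 months" means training compute doubles every 3.4 months.
doubling_months = 3.4
annual_compute_factor = 2 ** (12 / doubling_months)  # ~11.5x per year

digital_sector_growth = 1.09    # +9% per year, digital sector as a whole
fossil_reduction_target = 0.93  # -7% per year, required fossil decline

print(f"Training compute per year: x{annual_compute_factor:.1f}")
print(f"Digital sector after a decade: x{digital_sector_growth ** 10:.1f}")
print(f"Fossil target after a decade: x{fossil_reduction_target ** 10:.2f}")
```

The point of the comparison: compute demand multiplies by more than ten each year, and the digital sector more than doubles in a decade, while the fossil trajectory the article invokes would need to roughly halve over the same decade.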

The project of making human society resilient, a society that will have to recover a more animist [^16], decolonial and solidarity-based cosmology, must proceed through the progressive dismantling of zombie technologies (unsustainable, colonial, inducing their own needs) and the establishment, or more often the re-establishment, of living technologies (low-tech, answering real needs and resource constraints, accessible to all).

This article is therefore a call to researchers and engineers in computer science and mathematics to bring down the first sections of this vast and indispensable project by decommissioning investments in artificial intelligence and recreating a healthy base of commons, that is, of shared and resilient knowledge and know-how.

It is a matter of bifurcating, refusing, sometimes consciously disobeying, but also justifying, explaining and raising awareness, skills that lie precisely at the heart of scientific work and that position us as the actors best equipped to initiate an enthusiastic and constructive transition. Under pressure from our own students, research laboratories and universities are now receptive to this, and some are already ready to welcome, if not encourage, a way of thinking about derailment, to borrow the title of Aurélien Barrau's broad appeal.

The first piece of the puzzle of a resilient society can, very concretely, begin with this organised dismantling of the deadly illusions of artificial intelligence, restoring its full place to our animal, social and sensitive intelligence.

[^1]: See, for example, Par delà nature et culture.
[^2]: The naturalist ontology creates an entity called "nature", an undifferentiated set of non-human living beings pushed to the margins because supposedly devoid of consciousness. This insensible object-nature can then be tapped, extracted, transformed and destroyed. According to Haudricourt in his article Domestication des animaux, culture des plantes et traitement d'autrui, this worldview opens the door to forms of violence whose consequence is the whole range of forms of subjugation and destruction of the living world.
[^3]: See Illich's La convivialité.
[^4]: On this point, see Malcolm Ferdinand's interesting perspective in his book Une écologie décoloniale.
[^5]: The mineral mass of road infrastructure, buildings, machines, vehicles, etc., together with the waste from these products, all made within the last century, is indeed equivalent to five times the mass of the living carbon world.
[^6]: These considerations have recently opened a heated debate on the distinction between the scientist and the activist scientist, which it would be more reasonable to rethink as a matter of rehabilitating scientists, today seen as products of technical over-specialisation and polarisation, back to their historical status as systemic philosopher-thinkers (such as Pythagoras, Aristotle, Descartes or Einstein).
[^7]: It is striking, moreover, that the developers of high-tech technologies are precisely those who use these same technologies the least (this is notably the case of the many telecommunications researchers I have known, myself included, who long lived without a smartphone).
[^8]: This last point is defended by Dany-Robert Dufour in his book Baise ton prochain.
[^9]: It is quite clear to mathematicians that any hope of understanding the complexity of these networks, based on highly non-linear optimisations with billions of local minima, is absolutely out of reach.
[^10]: In this case the problem involves neither vision nor language and would, moreover, require far too many training examples from human-validated architectures. We too often forget that an artificial-intelligence algorithm works only through millions of examples labelled and fed in by humans: the loss of technical expertise, if only the expertise needed to identify the relevant parameters and label them correctly, therefore mechanically rules out setting up an AI algorithm at all.
[^11]: Regarded as a non-science by mathematicians and physicists.
[^12]: From six million in 1950 to 400,000 in 2021, with an implicit government objective of bringing this figure down to 200,000, despite the Shift Project's assessment, in its Plan de Transformation de l'Économie Française, of the need for 500,000 new farmers by 2030 (Atelier Paysan puts the figure at one million additional farmers).
[^14]: See, for example, Patterson et al., Carbon emissions and large neural network training, or, at the scale of the digital sector as a whole, Freitag et al., The real climate and transformative impact of ICT: A critique of estimates, trends, and regulations. The Shift Project's general-audience report Lean ICT is also a very thorough entry point.
[^15]: Ten years ago, this growth was 100% every two years.
[^16]: That is, a cosmology which grants, as science now establishes, a form of subjective interiority (of consciousness) to other living beings, and which places the strong interdependence between species (human and other-than-human) at the centre of geopolitical debate.

Permalink
February 5, 2024 at 8:42:46 PM GMT+1

The end of the social network https://www.economist.com/leaders/2024/02/01/the-end-of-the-social-network

  • Social Network
  • Societal Collapse
  • Big Corpo

The end of the social network

The Economist

Editor’s note, February 2nd 2024: This article was updated after Meta announced earnings results for 2023.

Facebook may be turning 20 on February 4th, but it is just as much of a magnet for controversy and cash today as when it was a brash, break-everything teenager. On January 31st Mark Zuckerberg, the social network’s founder, was harangued by American senators over the spread of harmful material. The next day he announced another set of glittering results for Meta, Facebook’s parent company, which is now valued at $1.2trn. Yet even as social media reliably draw vast amounts of attention from addicts and critics alike, they are undergoing a profound but little-noticed transformation.

The weird magic of online social networks was to combine personal interactions with mass communication. Now this amalgam is splitting in two again. Status updates from friends have given way to videos from strangers that resemble a hyperactive TV. Public posting is increasingly migrating to closed groups, rather like email. What Mr Zuckerberg calls the digital "town square" is being rebuilt—and posing problems.

This matters, because social media are how people experience the internet. Facebook itself counts more than 3bn users. Social apps take up nearly half of mobile screen time, which in turn consumes more than a quarter of waking hours. They gobble up 40% more time than they did in 2020, as the world has gone online. As well as being fun, social media are the crucible of online debate and a catapult for political campaigns. In a year when half the world heads to the polls, politicians from Donald Trump to Narendra Modi will be busy online.

The striking feature of the new social media is that they are no longer very social. Inspired by TikTok, apps like Facebook increasingly serve a diet of clips selected by artificial intelligence according to a user’s viewing behaviour, not their social connections. Meanwhile, people are posting less. The share of Americans who say they enjoy documenting their life online has fallen from 40% to 28% since 2020. Debate is moving to closed platforms, such as WhatsApp and Telegram.

The lights have gone out in the town square. Social media have always been opaque, since every feed is different. But TikTok, a Chinese-owned video phenomenon, is a black box to researchers. Twitter, rebranded as X, has published some of its code but tightened access to data about which tweets are seen. Private messaging groups are often fully encrypted.

Some of the consequences of this are welcome. Political campaigners say they have to tone down their messages to win over private groups. A provocative post that attracts “likes” in the X bear pit may alienate the school parents’ WhatsApp group. Posts on messaging apps are ordered chronologically, not by an engagement-maximising algorithm, reducing the incentive to sensationalise. In particular, closed groups may be better for the mental health of teenagers, who struggled when their private lives were dissected in public.

In the hyperactive half of social media, behaviour-based algorithms will bring you posts from beyond your community. Social networks can still act as “echo chambers” of self-reinforcing material. But a feed that takes content from anywhere at least has the potential to spread the best ideas farthest.

Yet this new world of social media brings its own problems. Messaging apps are largely unmoderated. For small groups, that is good: platforms should no more police direct messages than phone companies should monitor calls. In dictatorships encrypted chats save lives. But Telegram’s groups of 200,000 are more like unregulated broadcasts than conversations. Politicians in India have used WhatsApp to spread lies that would surely have been removed from an open network like Facebook.

As people move to closed groups, the open networks left behind are less useful because of the decline in public posting. During the covid-19 pandemic, scientists and doctors contributed to an online debate which contained real insight as well as misinformation. Open-source intelligence flowed when Russia invaded Ukraine. Today those conversations are disappearing or moving to closed channels, slowing the spread of ideas. The people still weighing in on the public networks, meanwhile, are disproportionately male and likely to describe themselves as very left- or right-wing: bores, in plain English.

What’s more, the open-network algorithms driven by users’ behaviour seem primed to spread the spiciest videos. For something to go viral on a social network, people had to choose to share it. Now they endorse it simply by watching, as the algorithm rewards content that attracts the most engagement. Deliberate curation has been replaced by a system that taps straight into the id. Provocateurs like Mr Trump or Nayib Bukele, the favourite in this week’s election in El Salvador, stand to benefit, as do misinformation merchants. Platforms say they are better at weeding out fakes. Taylor Swift, the latest high-profile victim of a deepfake, might disagree.
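The shift the article describes — virality once required a deliberate choice to share, whereas engagement-driven feeds treat mere watching as endorsement — can be illustrated with a toy ranking sketch. This is a hypothetical scoring model for illustration only, not any platform's actual code:

```python
# Two hypothetical posts: a sober explainer that people chose to share,
# and a provocative clip that people merely watched for a long time.
posts = [
    {"id": "calm-explainer", "shares": 120, "watch_seconds": 40_000},
    {"id": "spicy-clip",     "shares": 15,  "watch_seconds": 900_000},
]

def share_rank(post):
    # Old model: going viral requires users to actively pass content on.
    return post["shares"]

def engagement_rank(post):
    # New model: watch time alone counts as endorsement,
    # so the feed rewards whatever holds attention.
    return post["watch_seconds"]

top_by_shares = max(posts, key=share_rank)["id"]
top_by_engagement = max(posts, key=engagement_rank)["id"]
```

Under share-based ranking the explainer wins; under engagement-based ranking the provocative clip does — the "system that taps straight into the id" the article describes.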

More urgent even than the rise of fake news is a lack of the real sort. Mr Zuckerberg once said he wanted Facebook to be like a personalised newspaper. But since the network’s pivot to entertainment, news makes up only 3% of what people see on it. Across social media only 19% of adults share news stories weekly, down from 26% in 2018. Publications like BuzzFeed News, which relied on social distribution, have perished. That is their lookout (and ours). But it is everyone’s problem when nearly half of young people say that, just as the platforms decide news is no longer interesting, social media are their main source of news.

WhatsApp and WhatsDown

Some people argue that social networks’ defects can be fixed by better governance, clever coding or a different business model. Such things can help. But the problems raised by the new generation of apps suggest that social media’s flaws are also the result of the trade-offs built into human communication. When platforms swing back towards private groups, they inevitably have less oversight. When people escape their echo chambers, they may well face more extreme content. When users embrace harmless entertainment, they see less news. As social networks wither, platform operators and users should devote less time to the old battles and more to grappling with the new.

Permalink
February 5, 2024 at 8:27:15 PM GMT+1

Bunkers and organic farms: how the ultra-rich are preparing for the futurehttps://reporterre.net/Bunkers-et-fermes-bio-comment-les-riches-preparent-l-avenir

  • Big Corpo
  • Tech Guru
  • Ecology
  • Societal Collapse

Bunkers and organic farms: how the ultra-rich are preparing for the future

Faced with the climate crisis, survivalism has become the latest obsession of billionaires, who are planning their exodus and trying to escape the common world.

You are reading the first part of the investigation The climate separatism of the ultra-rich.

The ultra-rich are preparing for the catastrophe they themselves helped bring about. As the world slowly sinks into climate chaos, they are building bunkers, buying private islands and arranging lives they dream will be sheltered from violence, so they can carry on living in luxury, calm and pleasure.

The media revealed the phenomenon several years ago. The Covid-19 pandemic and the war in Ukraine have accelerated everything. With their private jets, their yachts and their gated communities, billionaires have decided to secede.

New Zealand and Patagonia have become their hideouts. They are also investing in remote regions such as Alaska, Scandinavia and Kamchatka in Russia. Steve Huffman, co-founder of the social network Reddit, says so himself: a good half of his fellow Silicon Valley billionaires have bought "apocalypse real estate" around the world. The English-speaking media even speak of "a rush".

Private jets ready to take off for New Zealand

The latest to make headlines is Facebook founder Mark Zuckerberg. His luxury bunker in Hawaii, named "Koolau Ranch", is expected to cost more than €250 million. The 464 m² shelter, protected by an armoured door and an extraordinary video-surveillance system, can supply its own electricity and, thanks to a gigantic cistern, its own water. A 27-hectare organic farm is attached to it so he can live self-sufficiently. On the shores of the Pacific, the billionaire awaits the end of the world listening to the sound of the waves.

Some ten thousand kilometres away, in New Zealand, the enthusiasm of the ultra-rich is such that the authorities are overwhelmed. They have created a special visa for large fortunes wishing to settle there. The Investor Plus visa requires a minimum investment of 6.7 million dollars (€6.2 million) over three years. Peter Thiel, the billionaire co-founder of PayPal, obtained New Zealand citizenship after spending only twelve days in the country. Many villas have been built on plots in the middle of nature, with landing strips and farms.

Billionaire Julian Robertson owns a manor overlooking Lake Wakatipu. Fidelity National Financial Inc. chairman Bill Foley owns a property in the Wairarapa region. James Cameron, director of Titanic, bought a huge manor overlooking Lake Pounui.

The island's appeal lies in its political stability and geographic remoteness. To reach it in the event of a crisis, five Silicon Valley executives own an enormous private jet hidden in a Nevada hangar. The plane is ready to take off at any moment with their families (and the pilot's!). Its price is estimated at 61 million dollars (€56.2 million) and its maintenance at a million dollars a year, according to [Bloomberg](https://www.bloomberg.com/features/2018-rich-new-zealand-doomsday-preppers/?leadSource=uverify wall).

"Land grabbing by the ultra-rich"

Journalists also recount how 150-tonne survival bunkers are built in secret in Texas before being shipped by boat to New Zealand. There they are buried three metres deep, away from prying eyes and with the complicity of customs.

For political scientist Édouard Morena, we are witnessing a new form of "land grabbing by the ultra-rich". In Scotland, 67% of rural land belongs to 0.0025% of the population. "The ultra-rich invest in large estates to cultivate life among their own kind and shield themselves from the rest of the planet, but also because they believe that in the near future this land will be a formidable source of wealth," he tells Reporterre.

The biggest landowners in Argentina

In Patagonia, the situation is striking. Ted Turner, boss of the TV network CNN, bought himself 5,000 hectares; Amalita Lacroze de Fortabat, "the cement queen", built a villa facing Lake Nahuel Huapi and hired henchmen to block access for local residents. The pampa has bristled with barbed wire.

Billionaire Joe Lewis, who made his fortune with the Hard Rock Cafes, bought an area the size of Paris and even diverted the course of a river. A few kilometres away, Sylvester Stallone bought himself a lake and 14,000 hectares near El Bolsón. George Soros, for his part, acquired a 350,000-hectare estate and was even Argentina's leading landowner before being dethroned by the Benetton family, who today own 900,000 hectares near Esquel.

"The ultra-rich dream of being alone"

"The ultra-rich dream of being alone and taking refuge in their castle." In a best-seller not yet translated into French, Survival of the Richest: Escape Fantasies of the Tech Billionaires, the intellectual Douglas Rushkoff analyses billionaires' attraction to collapse. In the Guardian, he recounts his surreal meeting with five technophile billionaires who explained their plan for dealing with "the event", the euphemism they use for the climate crisis.

Invited to give a talk on new technologies, he found himself fielding a barrage of questions from the billionaires: how long should one plan to survive without outside help? Should a shelter have its own air supply? How likely is contamination of the groundwater? The CEO of a brokerage firm who had just finished building an underground bunker also asked him how to maintain authority over his security force.

The intellectual, disoriented, tried to reason with them and tell them not to invest solely in "ammunition" or "electric fences". But the billionaires persisted. They were considering special combination locks on the food supply that only they would know. Or making the guards wear a kind of disciplinary collar in exchange for their survival. Or building robots to serve as guards and workers, if that technology could be developed "in time".

A libertarian turn

For Douglas Rushkoff, "their extreme wealth and privilege have made them obsessed with insulating themselves from real danger," he writes. "They want to escape a world they have made unlivable."

This dream of escape is fairly unprecedented in history, even if during the Cold War the rich were already building bunkers to survive a nuclear winter. In an interview with L'Obs, the philosopher Bruno Latour argued that the tipping point came in the 1980s and 1990s. It was from then on that the rich supposedly followed "a process of solitary radicalisation".

In 1997, within Silicon Valley, a techno-libertarian manifesto was on everyone's lips: The Sovereign Individual: How to Survive and Thrive During the Collapse of the Welfare State (Simon & Schuster), by James Dale Davidson and William Rees-Mogg. It predicts that the bankruptcy of the nation-state is inevitable and that only an "aristocracy" and a "cognitive elite" will survive. For the authors, "technological development has become less a story of collective flourishing than of personal survival". Progress for all is no longer the goal. The project is to build a fortress for a wealthy minority.

"The shrewdest members of the ruling classes then understood that globalisation was not ecologically sustainable," says Bruno Latour. "Instead of changing the economic model, they decided to give up on the idea of a common world." In other words, "the ruling class immunised itself against the ecological question by cutting itself off from the world".

If the ruling class embraced collapse theories so readily, it is also because the idea did not threaten it directly. "Collapse is a way of sidestepping a frontal critique of capitalism and class relations; it does not primarily target the rich but invites all of us to change our attitudes and practices," says researcher Édouard Morena. Hence its popularity in high places.

The era of the "misanthropocene"

The separatist ideology has kept growing ever since. The sociologist Nikolaj Schultz even speaks of a "misanthropocene" to describe our era. "The masters of the world are preparing to abandon us like crumbs on a plate," he writes.

This idea of exodus is pushed to its extreme in the extraplanetary fantasies of an Elon Musk or a Jeff Bezos, who dream of "terraforming" Mars or building giant colonies floating in the solar system.

Since living in space is not, for now, very appealing, the ultra-rich are moving from "planet B" to "plan B". Unable to colonise Mars, Mercury or the Moon, they are making sure they own, on this planet, a luxurious refuge-bunker able to withstand climate hazards. After me, the flood.

Permalink
January 19, 2024 at 9:04:59 PM GMT+1

The Teen Mental Illness Epidemic is International: The Anglospherehttps://www.afterbabel.com/p/international-mental-illness-part-one

  • Social Network
  • Societal Collapse

It is now widely accepted that an epidemic of mental illness began among American teens in the early 2010s. What caused it? Many commentators point to events in the USA around that time, such as a particularly horrific school shooting in 2012. But if the epidemic started in many nations at the same time, then such country-specific theories would not work. We’d need to find a global event or trend, and the 2008 Global Financial Crisis doesn’t match the timing at all, as Jean Twenge and I have shown.

In our 2018 book The Coddling of the American Mind, Greg Lukianoff and I presented evidence that the same trends were happening in Canada and the United Kingdom—not just the rise in depression and anxiety, but also the overprotection of children, the rise of “safetyism,” and the shouting down of speakers on university campuses when students deemed the speaker to be “harmful.” It seemed that all the Anglo nations were setting up their children for failure in the same ways at the same time.

In 2019 I went on a speaking tour of Australia and New Zealand (thanks to Think Inc) and discovered that the same trends were coming, more slowly, to the Southern hemisphere Anglo nations too. I then created two new collaborative review documents, one for Australia, and one for New Zealand, to gather all the empirical studies and journalistic accounts I could find.

In early 2020, just as COVID swept in, I hired Zach Rausch as a research assistant to help me handle these collaborative review docs. I connected with Zach, who was finishing his master’s degree in Psychological Science at SUNY New Paltz, after reading his blog post about his research related to The Coddling. Zach and I created many more collaborative review docs for many more countries and topics, which you can find (and add to) on this page.

In 2021, as I was beginning to work on the Babel project, I told Zach that it was urgent that we figure out just how international the mental illness epidemic was. Is it just happening in the five countries of what is sometimes called The Anglosphere? Is it all Western countries? Is it happening everywhere? Go figure it out and report back to me.

The rest of this post is Part 1 of Zach’s report, in his voice. I think his findings are momentous and should cause an immediate global rethinking of what children need to have a healthy childhood, and what obstacles to development arrived around the globe around 2012.

Permalink
January 14, 2024 at 4:30:18 PM GMT+1

The Teen Mental Illness Epidemic is International, Part 2: The Nordic Nationshttps://www.afterbabel.com/p/international-mental-illness-part-two

  • Social Network
  • Societal Collapse
  • Big Corpo

Today we have Zach’s second post on how teen mental health is changing around the world. In his last post, Zach showed that there is a four-part pattern in the data from all five of the main “Anglosphere” countries. Today he presents the studies he has collected on teen mental health in the five Nordic nations (the three Scandinavian countries of Sweden, Denmark, and Norway, plus Finland and Iceland).

The Nordic nations differ in many ways from the Anglosphere countries. In particular, they have lower levels of some of the social pathologies that some have said might explain the rising levels of mental illness in the USA such as income inequality, a hyper-competitive neoliberal economy, school shootings, and other gun violence. The Nordic countries also seem to do a lot less of the “coddling” and paranoid overprotection that is rampant in the USA, Canada, and the UK (though not so common down under).

And yet, as you’ll see, the basic pattern largely holds. I think these two posts from Zach are tremendously important for they show us that the teen mental illness epidemic is international. We need to be looking for causes that can explain the international pattern, even as we know there are also additional causes and cultural interactions specific to each country.

Permalink
January 14, 2024 at 4:00:26 PM GMT+1

SOLSTICE - 5 - YouTubehttps://www.youtube.com/watch?v=cntb3wcZdTw

  • Videos

In a race for domination, the Continental Alliance's ambition leads to the uninhibited exploitation of planet Solstice-5. As they construct unstoppable autonomous factories, the Alliance's insatiable quest for power leaves behind a forsaken workforce in a world spiraling out of control.

Permalink
January 9, 2024 at 9:49:23 PM GMT+1

Pluralistic: “If buying isn’t owning, piracy isn’t stealing” (08 Dec 2023) – Pluralistic: Daily links from Cory Doctorowhttps://pluralistic.net/2023/12/08/playstationed/#tyler-james-hill

  • Enshitification
  • Big Corpo
  • Planned Obsolescence

Pluralistic: "If buying isn't owning, piracy isn't stealing" (08 Dec 2023)

20 years ago, I got in a (friendly) public spat with Chris Anderson, who was then the editor in chief of Wired. I'd publicly noted my disappointment with glowing Wired reviews of DRM-encumbered digital devices, prompting Anderson to call me unrealistic for expecting the magazine to condemn gadgets for their DRM:

https://longtail.typepad.com/the_long_tail/2004/12/is_drm_evil.html

I replied in public, telling him that he'd misunderstood. This wasn't an issue of ideological purity – it was about good reviewing practice. Wired was telling readers to buy a product because it had features x, y and z, but at any time in the future, without warning, without recourse, the vendor could switch off any of those features:

https://memex.craphound.com/2004/12/29/cory-responds-to-wired-editor-on-drm/

I proposed that all Wired endorsements for DRM-encumbered products should come with this disclaimer:

WARNING: THIS DEVICE’S FEATURES ARE SUBJECT TO REVOCATION WITHOUT NOTICE, ACCORDING TO TERMS SET OUT IN SECRET NEGOTIATIONS. YOUR INVESTMENT IS CONTINGENT ON THE GOODWILL OF THE WORLD’S MOST PARANOID, TECHNOPHOBIC ENTERTAINMENT EXECS. THIS DEVICE AND DEVICES LIKE IT ARE TYPICALLY USED TO CHARGE YOU FOR THINGS YOU USED TO GET FOR FREE — BE SURE TO FACTOR IN THE PRICE OF BUYING ALL YOUR MEDIA OVER AND OVER AGAIN. AT NO TIME IN HISTORY HAS ANY ENTERTAINMENT COMPANY GOTTEN A SWEET DEAL LIKE THIS FROM THE ELECTRONICS PEOPLE, BUT THIS TIME THEY’RE GETTING A TOTAL WALK. HERE, PUT THIS IN YOUR MOUTH, IT’LL MUFFLE YOUR WHIMPERS.

Wired didn't take me up on this suggestion.

But I was right. The ability to change features, prices, and availability of things you've already paid for is a powerful temptation to corporations. Inkjet printers were always a sleazy business, but once these printers got directly connected to the internet, companies like HP started pushing out "security updates" that modified your printer to make it reject the third-party ink you'd paid for:

See also : Ink-Stained Wretches: The Battle for the Soul of Digital Freedom Taking Place Inside Your Printer

Now, this scam wouldn't work if you could just put things back the way they were before the "update," which is where the DRM comes in. A thicket of IP laws make reverse-engineering DRM-encumbered products into a felony. Combine always-on network access with indiscriminate criminalization of user modification, and the enshittification will follow, as surely as night follows day.

This is the root of all the right to repair shenanigans. Sure, companies withhold access to diagnostic codes and parts, but codes can be extracted and parts can be cloned. The real teeth in blocking repair comes from the law, not the tech. The company that makes McDonald's wildly unreliable McFlurry machines makes a fortune charging franchisees to fix these eternally broken appliances. When a third party threatened this racket by reverse-engineering the DRM that blocked independent repair, they got buried in legal threats:

See also : https://pluralistic.net/2021/04/20/euthanize-rentier-enablers/#cold-war

Everybody loves this racket. In Poland, a team of security researchers at the OhMyHack conference just presented their teardown of the anti-repair features in NEWAG Impuls locomotives. NEWAG boobytrapped their trains to try and detect if they've been independently serviced, and to respond to any unauthorized repairs by bricking themselves:

See also : https://mamot.fr/@q3k@hackerspace.pl/111528162905209453

Poland is part of the EU, meaning that they are required to uphold the provisions of the 2001 EU Copyright Directive, including Article 6, which bans this kind of reverse-engineering. The researchers are planning to present their work again at the Chaos Communications Congress in Hamburg this month – Germany is also a party to the EUCD. The threat to researchers from presenting this work is real – but so is the threat to conferences that host them:

See also : https://www.cnet.com/tech/services-and-software/researchers-face-legal-threats-over-sdmi-hack/

20 years ago, Chris Anderson told me that it was unrealistic to expect tech companies to refuse demands for DRM from the entertainment companies whose media they hoped to play. My argument – then and now – was that any tech company that sells you a gadget that can have its features revoked is defrauding you. You're paying for x, y and z – and if they are contractually required to remove x and y on demand, they are selling you something that you can't rely on, without making that clear to you.

But it's worse than that. When a tech company designs a device for remote, irreversible, nonconsensual downgrades, they invite both external and internal parties to demand those downgrades. Like Pavel Chekov says, a phaser on the bridge in Act I is going to go off by Act III. Selling a product that can be remotely, irreversibly, nonconsensually downgraded inevitably results in the worst person at the product-planning meeting proposing to do so. The fact that there are no penalties for doing so makes it impossible for the better people in that meeting to win the ensuing argument, leading to the moral injury of seeing a product you care about reduced to a pile of shit:

See also : https://pluralistic.net/2023/11/25/moral-injury/#enshittification

But even if everyone at that table is a swell egg who wouldn't dream of enshittifying the product, the existence of a remote, irreversible, nonconsensual downgrade feature makes the product vulnerable to external actors who will demand that it be used. Back in 2022, Adobe informed its customers that it had lost its deal to include Pantone colors in Photoshop, Illustrator and other "software as a service" packages. As a result, users would now have to start paying a monthly fee to see their own, completed images. Fail to pay the fee and all the Pantone-coded pixels in your artwork would just show up as black:

See also : https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process

Adobe blamed this on Pantone, and there was lots of speculation about what had happened. Had Pantone jacked up its price to Adobe, so Adobe passed the price on to its users in the hopes of embarrassing Pantone? Who knows? Who can know? That's the point: you invested in Photoshop, you spent money and time creating images with it, but you have no way to know whether or how you'll be able to access those images in the future. Those terms can change at any time, and if you don't like it, you can go fuck yourself.

These companies are all run by CEOs who got their MBAs at Darth Vader University, where the first lesson is "I have altered the deal, pray I don't alter it further." Adobe chose to design its software so it would be vulnerable to this kind of demand, and then its customers paid for that choice. Sure, Pantone are dicks, but this is Adobe's fault. They stuck a KICK ME sign to your back, and Pantone obliged.

This keeps happening and it's gonna keep happening. Last week, Playstation owners who'd bought (or "bought") Warner TV shows got messages telling them that Warner had walked away from its deal to sell videos through the Playstation store, and so all the videos they'd paid for were going to be deleted forever. They wouldn't even get refunds (to be clear, refunds would also be bullshit – when I was a bookseller, I didn't get to break into your house and steal the books I'd sold you, not even if I left some cash on your kitchen table).

Sure, Warner is an unbelievably shitty company run by the single most guillotineable executive in all of Southern California, the loathsome David Zaslav, who oversaw the merger of Warner with Discovery. Zaslav is the creep who figured out that he could make more money cancelling completed movies and TV shows and taking a tax writeoff than he stood to make by releasing them:

See also : https://aftermath.site/there-is-no-piracy-without-ownership

Imagine putting years of your life into making a program – showing up on set at 5AM and leaving your kids to get their own breakfast, performing stunts that could maim or kill you, working 16-hour days during the acute phase of the covid pandemic and driving home in the night, only to have this absolute turd of a man delete the program before anyone could see it, forever, to get a minor tax advantage. Talk about moral injury!

But without Sony's complicity in designing a remote, irreversible, nonconsensual downgrade feature into the Playstation, Zaslav's war on art and creative workers would be limited to material that hadn't been released yet. Thanks to Sony's awful choices, David Zaslav can break into your house, steal your movies – and he doesn't even have to leave a twenty on your kitchen table.

The point here – the point I made 20 years ago to Chris Anderson – is that this is the foreseeable, inevitable result of designing devices for remote, irreversible, nonconsensual downgrades. Anyone who was paying attention should have figured that out in the GW Bush administration. Anyone who does this today? Absolute flaming garbage.

Sure, Zaslav deserves to be staked out over an anthill and slathered in high-fructose corn syrup. But save the next anthill for the Sony exec who shipped a product that would let Zaslav come into your home and rob you. That piece of shit knew what they were doing and they did it anyway. Fuck them. Sideways. With a brick.

Meanwhile, the studios keep making the case for stealing movies rather than paying for them. As Tyler James Hill wrote: "If buying isn't owning, piracy isn't stealing":

See also : https://bsky.app/profile/tylerjameshill.bsky.social/post/3kflw2lvam42n

Permalink
December 10, 2023 at 4:06:07 PM GMT+1

Media: the first 100% AI experiments | Les Echoshttps://www.lesechos.fr/tech-medias/medias/ces-sites-dinformations-deja-ecrits-par-des-ia-2038642

  • Artificial intelligence
  • Enshitification
  • Press

Médias : les premières expériences 100 % IA

Certains sites expérimentent l'utilisation de robots conversationnels pour écrire tout leur contenu et se revendiquent comme tels. A l'inverse, d'autres le cachent, provoquant de l'inquiétude.

Par Marina Alcaraz

Publié le 1 déc. 2023 à 13:43Mis à jour le 6 déc. 2023 à 17:59

Dans la présentation du site, il y a leurs parcours, leurs centres d'intérêt, leurs photos… Ils ont un style bien à eux, et des couvertures de sujets spécifiques. Carine Gravée, Vianney Garet, Nina Gavetière ont des noms, mais ne sont pas réels : ce sont des journalistes-robots créés de toutes pièces.

Sites produced entirely by artificial intelligence (AI) are starting to appear. For now, admittedly, the initiatives remain limited. But some are trying to carve out a niche and experiment with a new way of producing content, either claiming it openly... or not.

A magazine at the newsstand

That is the case, for example, of "LHC - Les Heures Claires", a magazine (print and web) launched a few weeks ago that presents itself as the first French magazine 99% generated by AI. Like a conventional publication, it features news, interviews (with the questions asked by a bot), fashion pages and a horoscope.

At the helm is Rémy Rostan, a former photographer. "I am still quite surprised by what ChatGPT comes up with," he admits. The magazine, sponsored by Easy Partner, a recruitment firm specializing in tech, is due to hit newsstands in the spring as a monthly. "I am targeting the community of technophiles and the curious," explains Rémy Rostan, who hopes to reach 20,000 copies sold.

Other examples: Tech Generation and Cuisine Generation, launched in the spring by Ari Kouts, a consultant specializing in innovation at Viseo. He connected the tech site to various specialist outlets in the sector, such as TechCrunch. Each fictional "journalist" (each with a distinctive style) picks up press articles on current topics (the crisis at OpenAI, statements by Musk...) and rewrites them, citing the source, without any human intervention. The result: a few inconsistencies and some clumsiness, but articles that resemble certain blog posts.

On the cooking side, the "chefs" come up with recipes, "and a good number are plausible and even good, though the cooking times are approximate," says Ari Kouts. The consultant launched these "media" more as an experiment, with no intention of monetizing them. "It is also a reminder of the value of analysis and of investigative journalism, which robots cannot do," he says.

Both sites have a small audience (around 3,000 visits a month) and occasionally even surface in Google News! Even if the likelihood is low in this case, since the experiment is run more or less as a game, "the primary sources could block this kind of practice by invoking parasitism, that is, appropriating the value of an article," notes Julien Guinot-Deléry, a lawyer at Gide.

Professionals' fears

But there are also sites whose mode of production has been kept quiet. "In a working group of the Commission paritaire des publications et agences de presse, the fear that there are sites with a strong AI component was raised," says one professional. "We all have that risk in mind," adds Pierre Pétillault, director of the Alliance de la presse d'information générale.

In a recent study, NewsGuard identified a host of sites featuring articles rewritten with AI (nearly 600 as of late November!) with no human oversight. In many cases, they benefit from programmatic advertising. In the United States, "Sports Illustrated" and "TheStreet" (Arena Group) were called out by an investigation from the outlet Futurism: articles were allegedly written by AI, and fake journalist profiles created (with images bought from a site selling AI-generated photos), which angered journalists. The media group defended itself, saying it had bought some of the pieces from an agency.

Permalink
December 10, 2023 at 3:51:38 PM GMT+1

Arthur Keller: "Our civilization converts nature into waste"https://lareleveetlapeste.fr/arthur-keller-notre-civilisation-est-une-machine-qui-convertit-la-nature-en-dechets/

  • Mass Consumption
  • Economy and markets
  • Societal Collapse
  • Ecology

Arthur Keller: "Our civilization is a machine that converts nature into waste"

"We are persuaded that happiness flows from a sum of small pleasures and that pleasure comes from consumption and accumulation, when there is no true happiness without the pride of fighting for the values we know to be good and without the wisdom of self-limitation."

Text: Laurie Debove

Arthur Keller is a specialist in the systemic risks weighing on human societies, in their vulnerabilities to those risks, and in strategies for collective transformation toward resilience. As humanity overshoots more and more planetary boundaries, we interviewed him on how to transform our societies so that they become more just and more durable.

For Arthur Keller, our planet is suffering from a generalized cancer

Arthur Keller: "I often use an analogy I developed to illustrate the fundamental methodological error we make in our general approach to the issues of planetary ecological overshoot: the metaphor of generalized cancer. It is a systemic disorder of the body that shows up in multiple symptoms: headaches, skin and digestive problems, for example. For each of these symptoms there are remedies: ibuprofen, an ointment, an antispasmodic. Yet the sum of these 'solutions' does not cure the disease.

For each crisis, experts recommend solutions, and we imagine that the sum of these solutions could save us. Alas, the compartmentalization into fields of expertise masks the reality of the disease: our civilization is a machine that converts nature into waste.

This waste is solid, liquid or gaseous; among the gases, some disrupt the climate. Climate change, like the other crises, is only a symptom of the disease. And our insistence on attacking the consequences without questioning the causes explains, in my view, the failure we have seen so far.

LR&LP: According to a study published May 31 in the journal Nature, seven of the eight boundaries ensuring the stability and health of the planetary system have already been crossed. What are they?

Arthur Keller: This study is interesting because it covers the complex dynamic system made up of the Earth system (lithosphere, cryosphere, atmosphere, biosphere and pedosphere) and the anthroposphere (the sphere encompassing humanity, its societies and its activities). The planetary-boundaries framework covered only the Earth system; here the human and social sciences are incorporated, as in Kate Raworth's doughnut-economics concept.

In 2009, an international team of scientists identified 9 thresholds not to be crossed in order to keep planet Earth habitable for humanity. The stability threshold of the climate machine has been crossed, so we have entered a transitional phase between one stable climate state and another, which will probably not arrive for several thousand or tens of thousands of years, notably because of the cryosphere's intrinsic kinetics. Until further notice, we will therefore have to live with a highly unstable climate.

Moreover, a new mass extinction is underway, driven by human activities, agriculture and forestry first among them. Pollutants are released into natural environments so quickly that they accumulate and make natural habitats toxic. The biogeochemical cycles, the great cycles of life, are disrupted, notably phosphorus and nitrogen, along with the water cycle.

Another boundary is close: ocean acidification. As for the boundary on atmospheric aerosols, we are still below the fateful threshold. The last one currently identified concerns the hole in the ozone layer: it is the only area where the situation is moving in the right direction. In the end, humanity has already crossed 6 of the 9 boundaries that, according to scientists, must not be crossed to guarantee the Earth's habitability.

The study published at the end of May does not cover quite the same 9 boundaries but a subset. Those 8 boundaries, defined as "safe and just", integrate both scientific data and social metrics. And what the publication says is that 7 of those 8 boundaries have already been crossed.

Even without alarmism, the situation is alarming.

LR&LP: Some call for "changing civilization". Is it possible, and is it the solution?

Arthur Keller: It is indispensable! Alas, that does not make it possible. I unfortunately do not believe for a second in our collective capacity to organize the necessary metamorphosis. The system sustains itself; I am convinced it will do so until it destroys itself, taking a large part of the biosphere down with it.

One cannot durably extract more resources than the Earth system can produce, just as one cannot keep destroying faster than its capacity for self-repair. That is why, in the coming years and decades, we are going to live through a great energy and material descent, seasoned with scarcities and conflict-breeding shortages.

This descent will necessarily induce an economic contraction, because economic growth is strongly correlated with flows of energy and raw materials. It is inescapable. And failing to plan and steer this contraction through degrowth policies will condemn us to endure it in chaos, as a disastrous and potentially permanent socio-economic disintegration. Well before 2050.

There is no way to achieve absolute decoupling between Gross Domestic Product and overall anthropogenic pressure. Those who claim otherwise have not grasped the systemic nature of the issues, or are lobbyists serving an ideology or financial interests. Either way, their statements are out of step with the data and with scientific knowledge.

We would therefore indeed need to profoundly change the rules of the global economy and our models of society, but the system rests on orders so solidly established that it is not, I fear, possible to truly change it.

We can limit its harmfulness through political and citizen mobilization, through revolt and resistance (under the radar or head-on): ideally, the various modes of struggle and the different radicalisms would come to see one another as mutual allies and finally combine their strengths.

In parallel, we must lay the foundations of an alternative system, even without all the means we would need: through experiments and territorial demonstrators, it is time to prove, and to make known through inspiring narratives, that there are other ways to organize local economies, to make decisions collectively, to guarantee people a vital minimum, to develop regenerative activities, to work on decoupling quality of life from energy and material quantity, and to redesign spaces for healthy communal living.

It is possible to restore meaning, to nurture a culture of risk, responsibility and solidarity, to create cohesion, to instill the conviction of belonging to a shared community of fate.

The immensely powerful big system is going to fall flat on its face; techno-solutionism is reaching its limits; the "solutions" on which we are betting the future, and the grand projects the privileged persist in nurturing (space conquest, AI, imperialism, etc.), will also run up against the energy and material descent. We must anticipate societal tipping points and prepare for them collectively, in cooperation within and between territories, within and between communities, without falling into the trap of insularity.

A change of civilization through the territories

LR&LP: How do you prepare a territory?

Arthur Keller: That draws on principles, levers and tools that I teach to elected officials and local-government staff, or to citizens, through training sessions and workshops. Even if, to my knowledge, no fully resilient community exists today, there are nonetheless genuinely interesting initiatives, proactive approaches heading in the right direction.

Whether collectives taking part in the Transition Towns movement, some Colibris "oasis" communities, territorial dynamics such as the TERA project, local currencies and local exchange systems, or communal civil-security reserves, interesting and inspiring things are happening; we are not starting from zero, and that is reassuring!

Some of these territorial projects tackle a key point: how to produce the essentials sustainably, taking into account ecological change (climate, water cycle, biodiversity, etc.) and possible critical supply disruptions, relying only on what is available within a manageable geographic radius.

In food production, we are incredibly lucky to have approaches that tick every box: agroecology, agroforestry and permaculture combined, with living-soil practices, have the potential to feed the entire world in a healthy, varied, nutritious and tasty way while regenerating ecosystems.

Local currencies and short local supply chains are also being created, linking the actors and inhabitants of territories. Social experiments are under way too, for instance on a minimum subsistence income or a universal minimum income, on an unconditional autonomy allowance, or on a social security system for food as in Montpellier and Strasbourg. As well as numerous deliberative and participatory democracy initiatives...

People can and must reclaim the management of the commons. Communities have the capacity to guarantee everyone access to the vital minimum: in Montpellier, the first cubic meters of water are now free. It would be interesting to extend this principle to everything needed to live with dignity.

The question of narratives is in vogue, and for good reason: there will be no mass change in behavior without the emergence of new imaginaries. Even more than works of fiction, it is crucial to turn successful concrete transformations into vivid, inspiring narratives to fuel a cultural dynamic.

LR&LP: Are mentalities ready to experiment with and implement all this?

Arthur Keller: The promise of more comfort and less work, long offered by the promoters of technical innovation, has not been kept. Today, that same system tells us we must work ever harder just to stay in place. "Progress" has turned into a marathon that conscripts people by force, assails them with cognitive dissonance and generally ends in a bitter feeling of unfulfillment.

Those who propose the 4-day week are called madmen, as were, before them, the defenders of the 12-hour day, then 10, then 8, then the advocates of Sunday rest, then of paid vacations (two, three, four, then five weeks!), then of the 35-hour week.

Every social advance runs up against the champions of relentless productivism.

Robots, connected objects and AI are invading our lives but bring no liberation, since nothing is being done so that people can live well without a job; on the contrary, idiotic postures proliferate that equate non-employment with laziness, and the lazy person with a sub-citizen undeserving of the welfare state's fraternity.

The inhabitants of rich countries are saturated with consumer choices presented to them as the pinnacle of freedom, when this escalation actually alienates by creating addiction (that is, dependence, the exact opposite of freedom), growing insatiability and frustration: the more you have, the more you want... to the point of total loss of meaning.

This mad race does not just drive people mad; it makes them unhappy. Beyond a certain standard of living, there is a turning point in felt happiness.

We are persuaded that happiness flows from a sum of small pleasures and that pleasure comes from consumption and accumulation, when there is no true happiness without the pride of fighting for the values we know to be good and without the wisdom of self-limitation.

It also seems essential to me to relearn the notion of freedom: a sum of limitless individualisms never leads to a free society but to a dystopia in which a few rich people can do anything and the majority almost nothing.

Each person's freedom derives from a democratically negotiated geometry of rights and duties: when energy, material, biophysical and ecological limits impose themselves on human societies, failing to translate them rationally into freedoms and prohibitions, and failing to embed them in social and cultural codes, can only lead to the dislocation of societies.

Those who refuse new limitations on principle, without looking at the upheavals under way from which the imperative of those limitations derives, are not acting in the name of freedom as they claim, but in the selfish hope of preserving their privileges. The world is changing fast; if we want to preserve what matters, we must quickly change as well," concludes Arthur Keller.

Permalink
December 3, 2023 at 5:41:07 PM GMT+1

What is Corporate Memphis and Why is it Everywhere? – t-art magazinehttps://t-artmagazine.com/what-is-corporate-memphis-and-why-is-it-everywhere/

  • Advertising
  • Ads Madness
  • Attention Economy
  • Big Corpo

What is Corporate Memphis and Why is it Everywhere?

Sriya Choppara·April 5, 2021

If you’ve ever stepped foot in an Apple store, you’ve probably noticed the large, softly glowing graphic panels spanning every wall. You may recall Yukai Du’s vibrant and flat illustration from your last visit. Its candy-like color palette and the sharp silhouettes drawn with unusually rosy skin might have caught your eye. The art style used by Du, commonly seen while scrolling through Facebook or viewing the Google Doodle of the day, is not unfamiliar to you. This illustration style has been dubbed the corporate art style, or more pejoratively, ‘Corporate Memphis’.

Characterized by solid coloring, geometric shapes and cartoonish figures with disproportionately lanky limbs, Corporate Memphis has become increasingly ubiquitous since it was first popularized in 2017. While it debuted as a space-filler in social media, websites and SaaS (software as a service) products, the style can now be found as images accompanying website articles and on magazine covers. The style is well received by the majority of consumers and has become the go-to choice of illustration style in marketing and user interfaces. However, its omnipresence has spurred criticism for its apparent shift away from more intricate, detailed illustrations to what critics deem to be generic and uninspired.

The exact origins of Corporate Memphis are unclear. Some credit its creation to Alice Lee, an independent illustrator who partnered with Slack to design 40 illustrations for their website and platform. She has remarked that she was influenced by Mary Blair’s Disney concept art in the process of creating these warm and whimsical figures. Others credit BUCK, the design firm that created the style guide, illustration, and animation system called Alegria for Facebook’s ecosystem. Corporate Memphis is also said to be the illustrative analogy to the 80’s post-modernist design movement Memphis. Corporate Memphis’ geometric forms, along with its highly contrasted and bold color style are similar to that of Memphis furniture designs. Additionally, the way figures are warped in Corporate Memphis is reminiscent of how Memphis designers depict everyday objects in unusual ways.

Considering past trends in graphic design, the art style's current ubiquity was inevitable. Take your phone's interface, for instance: with every update, app icons have moved another step toward a minimalist, simplified, flat look. The days of skeuomorphism are long gone. Big Tech companies have increased their demand for Corporate Memphis illustrations, and the progression to a minimalist finish is also evident in how several corporations have revamped and decluttered their previously busy, detailed logos.

There are several reasons why there is such mass appeal for simplified design, and why the Corporate Memphis style has flourished in particular. For one, the illustrations of people are often inclusive to the extent that some resemble anthropomorphic aliens. But no matter their appearance, people can see themselves in illustrated figures with skin as purple as a plum. The cheery color palettes and the characters reminiscent of childhood cartoons only add to their charm. The simple vector art has just the right amount of detail to please the eye while one swiftly skims through social media or lengthy policy updates.

Corporate Memphis is flamboyant. It's spirited. It's joyful. It's most certainly prepossessing to the general public. But it's exactly these elements that warrant greater scrutiny. Illustrations in the style, with their aggressively friendly expressions, portray a world that is uncannily utopian. The deliberate oversimplification of Corporate Memphis can give a false sense of security in situations that call for the opposite reaction. While it may not be obvious, the bubbly style can make consumers subconsciously lower their guard and lure them in. For instance, Corporate Memphis is often strategically placed alongside predatory subscription terms or privacy agreements. What is a smart marketing ploy for firms may have negative repercussions for users.

Other benefits that corporations draw from Corporate Memphis are far less sinister. The style is convenient for graphic designers to work in: its vector art makes illustrations easy to scale, animate, and replicate, and designers can quickly alter them according to the whims of their clients. The generic appearance also makes it simple for startups on a budget to use premade assets alongside a wide variety of texts. Its homogeneous appearance lends a sleek, professional look at a low price point. Not to mention, graphic designers don't need a particularly high level of skill to create rudimentary assets in the style.

That isn’t to say that it’s impossible for illustrations in the Corporate Memphis style to be unique. While Corporate Memphis has very distinctive features that make art of its genre appear similar, it is wrong to say that it lacks creativity. Take Slack versus Facebook’s illustrations, for instance. The fluidity of lines, color palettes, proportions, textures, digital brush choice and methods of shading make a world of difference.

Editorial illustrations with elements of the style are evidence of how artists have put their own spin on it. Illustrator Jing Wei’s characters have consistently thick limbs, a rich color palette, and distinctive eye shape. Her experience in printmaking is clearly an influence on her commercial art. Illustrator Alex Eben Meyer uses colors so bright they seem to pop out of the screen and overlays shapes to mimic paper collages. While it is more likely that these illustrators drew with the intention of echoing Art Deco or Minimalism, the solid coloring and jointless limbs parallel Corporate Memphis’ key attributes.

So what is the future of Corporate Memphis? Despite criticism that it is generic, or even lazy, the decision of several companies to invest in it and use it in their visual branding is a sign that the majority of its audience appreciates it. While its appeal may come with questionable implications, it is also hard to argue against its radical inclusivity and the ease with which many can project themselves onto its cartoon-like figures. It is worth treating it as a warning sign when it accompanies consequential texts like terms and conditions, but I appreciate the style and love to see how different artists put their own spin on it. Given its pervasiveness, I am inclined to believe many are fond of it too. Whether its influence shows up in editorial illustrations or in fillers bedecking empty space, Corporate Memphis' sweeping popularity will not disappear anytime soon.

Permalink
December 3, 2023 at 5:18:39 PM GMT+1

Solving biases is not so simple! | Hubert Guillaudhttps://hubertguillaud.wordpress.com/2023/11/28/resoudre-les-biais-nest-pas-si-simple/

  • Algorithms
  • Ads Madness

Solving biases is not so simple!

For AlgorithmWatch, journalist John Albert (@jaalbrt) investigated how Meta has just proposed to fix one of its recurring algorithmic biases: targeted advertising disadvantages certain audiences by not showing them certain products, for example withholding housing ads from some audiences of color depending on their location, or trucking job ads from women... This bias has been documented for a long time and led to action by US authorities, starting in 2019 with the US Department of Housing and Urban Development, which led Facebook to switch off certain targeting categories, such as age, gender or ethnic and religious characteristics, for certain types of ads, notably those for housing, employment and credit. In June 2022, Facebook settled the dispute, agreeing to pay a fine ($115,000) and committing to develop a system to debias housing ads.

For removing the targeting categories is not enough: Facebook's ad algorithm is capable of learning that women click less on ads for trucking jobs, and therefore of not showing those ads to them. It thereby reproduces and amplifies existing biases, disadvantaging audiences that are already marginalized. Yet this automated discrimination is illegal, since the law forbids discrimination on protected criteria such as race, age or sex. The difficulty, of course, is that this discrimination is particularly invisible: the people who might contest the inequity of ad delivery never had access to it, since they never even see the ads they should have seen!
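The feedback loop described above can be sketched as a toy model. This is purely illustrative (the group names and click rates are invented, and this is not Meta's actual delivery code): a delivery engine that keeps re-weighting each audience's share of impressions by its historical click-through rate turns a marginal difference in clicks into near-total exclusion.

```python
# Toy model of the bias-amplifying feedback loop (illustrative only;
# not Meta's actual system). Two audiences have nearly identical
# interest in a trucking-job ad, but a click-optimizing engine keeps
# re-weighting delivery by observed click-through rate (CTR).
ctr = {"group_a": 0.050, "group_b": 0.045}   # true click-through rates
share = {"group_a": 0.5, "group_b": 0.5}     # start with equal delivery

for _ in range(200):                         # 200 re-weighting rounds
    # Each group's next share is proportional to how its ads "performed".
    weighted = {g: share[g] * ctr[g] for g in share}
    total = sum(weighted.values())
    share = {g: w / total for g, w in weighted.items()}

# A 10% gap in clicks has snowballed into near-total exclusion of group_b.
print({g: round(s, 4) for g, s in share.items()})
```

The point of the sketch is that no demographic category is used anywhere: the exclusion emerges purely from optimizing clicks, which is why dropping targeting categories did not stop the discrimination.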

To correct biases, you have to be able to measure them!

Following this settlement, Meta created and deployed a new algorithm, a "Variance Reduction System" (VRS), to try to limit the biases of the machine-learning algorithms in its ad targeting and delivery system. "Instead of optimizing for clicks, the VRS is designed to optimize for equal 'precision' of ad delivery across eligible target audiences. Once an ad has been seen by enough users, the VRS measures the overall age, gender and 'estimated race or ethnicity distribution' of those who have already seen the ad, compares them with the broader eligible audience that could potentially have seen it, and then adjusts ad delivery accordingly." In short, Meta corrects its biases by looking at its delivery biases. Great, you may say! Except that it is not so simple, because delivery biases depend precisely on delivery and on the means to measure and correct it, and that varies a great deal from one ad category to another and from one territory to another.
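The measure-and-compare step described above can be sketched as follows. This is a minimal sketch under stated assumptions: the real VRS is not public, and the function, group names and numbers here are hypothetical. The idea is to compare the demographic mix of people who actually saw the ad with the mix of the eligible audience, and derive delivery multipliers that boost under-served groups.

```python
# Hypothetical sketch of a variance-reduction step in the spirit of the
# VRS described above (the real system is not public; all names and
# numbers here are invented for illustration).
eligible = {"group_a": 0.50, "group_b": 0.50}  # mix of eligible audience
seen = {"group_a": 0.72, "group_b": 0.28}      # measured mix of viewers

def adjust_weights(eligible, seen, strength=1.0):
    """Return delivery multipliers that push the viewer mix toward `eligible`."""
    return {g: (eligible[g] / seen[g]) ** strength for g in eligible}

weights = adjust_weights(eligible, seen)
# group_b was under-served, so its multiplier rises above 1.0
print({g: round(w, 3) for g, w in weights.items()})
```

Note that the whole correction hinges on the `seen` distribution being measurable, which is exactly the data dependency the rest of the article dwells on.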

A compliance report produced by the firm Guidehouse asserts that Meta's VRS works as intended. But the report's shortcomings make these claims hard to verify, John Albert argues for AlgorithmWatch. In fact, discriminatory ads continue to run rampant on Facebook in areas other than housing, as well as in countries other than the United States, such as European countries, where the VRS is unlikely to be deployed, notably because the correction system is not so simple to implement, as we will see.

For Daniel Kahn Gillmor, privacy advocate and technologist at the ACLU, this highly technical report shows that Guidehouse had no access to Meta's data and worked only from results supplied by Meta. For researcher Muhammad Ali, one of the authors of the 2019 study that first exposed the problem of discriminatory ad targeting on Facebook, there is real effort behind the VRS, but the correction is extremely complex and extremely limited, since it applies only to housing ads. Although Meta has announced it wants to extend it to employment and credit ads, it is unlikely to reach other categories or go beyond the United States, because to apply its corrections to housing ads the VRS must draw on specific data, namely US census data, in order to estimate the ethnicity of ad audiences by location and correct for it! For every audience correction, you need correction data. A system of this kind could not work in India, for example, where there is no geolocated caste data, or in France, where no ethnicity data is available. Correcting biases therefore requires feeding the systems with population-level data, for example data on the ethnic origin of populations, which is not available everywhere. Not to mention that these corrections, which aim to reduce performance gaps between demographic groups, risk above all producing a "strict egalitarianism by default" rather than a form of equity.
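The data dependency this paragraph describes can be made concrete with a hypothetical lookup (illustrative only; the dataset name, keys and values are invented): the correction step needs a per-location demographic reference such as census data, and where no such dataset exists the correction simply cannot be computed.

```python
# Hypothetical, heavily simplified: a per-location demographic prior
# derived from census-style data, which the bias-correction step
# depends on. The location codes and values are invented.
CENSUS_PRIORS = {
    "US-PA-18603": {"group_a": 0.81, "group_b": 0.19},
}

def audience_prior(location):
    """Return the reference demographic mix for a location, if one exists."""
    try:
        return CENSUS_PRIORS[location]
    except KeyError:
        # No census-style data (e.g. caste data in India, ethnicity data
        # in France) means there is nothing to correct against.
        raise LookupError(f"no demographic reference data for {location}; "
                          "cannot debias delivery here")

print(audience_prior("US-PA-18603"))
```

The failure branch is the whole point: without a reference distribution for a territory, a VRS-style correction has no target to steer delivery toward.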

Finally, there are countless other biases in the audience categories these systems produce on the fly, for example when a person is identified as interested in gambling and online betting and is then over-solicited on those topics, at the risk of reinforcing their addictions rather than protecting them from their vulnerabilities (or from other vulnerabilities, as The Markup showed when it analyzed the categories of the Xandr advertising platform - I discussed it there), and the amplification of biases risks being even stronger and harder to correct when target audiences are highly specific. Not to mention that, in the end, the very purpose of ad targeting is to produce categorizations, and therefore discrimination: when you try to show an ad to targets with a given income level, it is precisely to the exclusion of everyone else (insofar as the targeting works at all, which is far more rarely the case than claimed, as I noted when examining the troubles of profiling).

Under the European Digital Services Act, platforms are no longer allowed to target ads using "sensitive" data categories, such as race, gender, religion or sexual orientation, and must mitigate the systemic risks arising from their services. Still, Meta's extremely piecemeal approach to bias mitigation, as embodied by the VRS, will be hard to apply everywhere, for lack of the data needed to correct the biases. Debiasing raises the questions of which corrections to apply, how to make them visible, and how far to correct.

Rather than reports, protect the researchers!

Cette analyse sur comment Meta peine à débiaiser a d’autres vertus que de nous montrer les limites intrinsèques du débiaisage. Elle permet également de constater que la réponse réglementaire n’agit pas au bon niveau. En effet, si le DSA européen prévoit que les plateformes se soumettent à des audits indépendants – à l’image de ce que vient de produire Guidehouse pour Meta en contrôlant comment les plateformes se conforment à l’obligation d’identifier et d’atténuer les risques et les discriminations -, le risque est fort que ce contrôle produise beaucoup d’audit-washing, les plateformes définissant les normes de leurs contrôles (et pour l’instant chacune le fait différemment), choisissant leurs auditeurs et produisant les données qu’elles souhaitent partager. Pour l’instant, nous sommes confrontés à des mesures d’audit mal définies qui risquent surtout d’être mal exécutées, expliquent dans un rapport sur le sujet pour le German Marshall Fund, les spécialistes Ellen Goodman et Julia Trehu en rappelant les règles de l’audit et en pointant l’exemplarité du cadre d’audit définit par la cour des comptes hollandaise lors de l’inspection de plusieurs algorithmes utilisés par les services publics. Les spécialistes insistent sur la nécessité de fournir un cadre, des normes, des critères de certification établies par un régulateur pour garantir la qualité des audits.

Yet, as John Albert very pertinently explains, compliance reports are no substitute for external scrutiny. In the long run, we mostly risk being flooded with compliance reports, each more complex than the last, like [the platform transparency reports the European Commission has just collected](https://digital-strategy.ec.europa.eu/en/news/very-large-online-platforms-and-search-engines-publish-first-transparency-reports-under-dsa#:~:text=The transparency reports must include,of orders they receive from), which teach us little and do not look like a lever for changing things (we learned only the number of official moderators at each platform, as Le Monde explains, and the categories of moderation... but, as TechPolicyPress points out, no report addresses other questions written into the European DSA, such as information on how platforms ban advertising aimed at children, how they intend to make it easier to opt out of recommender systems, or what data-sharing arrangements with researchers or authorities they plan to put in place. Nor has any of them set up an audit of systemic risks). As journalist Gaby Miller explains in another TechPolicyPress article, the European Commission should consider spelling out guidelines for the information it expects from these reports, so that they become more comparable with one another.

The very limited information provided by the platforms will not be enough. "The supposed neutrality of platforms depends on our ability to monitor them," as we have said before. We need adversarial, genuinely independent audits capable of testing the platforms' algorithmic systems and of measuring and responding to problems. For that, these large companies must let researchers do research: collect data and recruit users in order to produce independent measurements. That is not what we are seeing today, when platforms are closing themselves off to research far more than opening up to it! For that, authorities would need to offer researchers protections, John Albert insists: "Until we establish legal protections for public-interest researchers and secure pathways of access to platform data, we will be left dealing with platforms' self-assessments and audit reports. A flood of these reports is on the way. But these measures are no substitute for truly independent research, which is essential to holding platforms accountable for the risks their services pose to society."

Hubert Guillaud

December 1, 2023 at 8:40:55 PM GMT+1