Weekly Shaarli
Week 32 (August 7, 2023)

Your Computer Should Say What You Tell It To Say
By Cory Doctorow and Jacob Hoffman-Andrews August 7, 2023
WEI? I’m a frayed knot
Two pieces of string walk into a bar.
The first piece of string asks for a drink.
The bartender says, “Get lost. We don’t serve pieces of string.”
The second string ties a knot in his middle and messes up his ends. Then he orders a drink.
The bartender says, “Hey, you aren’t a piece of string, are you?”
The piece of string says, “Not me! I'm a frayed knot.”
Google is adding code to Chrome that will send tamper-proof information about your operating system and other software, and share it with websites. Google says this will reduce ad fraud. In practice, it reduces your control over your own computer, and is likely to mean that some websites will block access for everyone who's not using an "approved" operating system and browser. It also raises the barrier to entry for new browsers, something Google employees acknowledged in an unofficial explainer for the new feature, Web Environment Integrity (WEI).
If you’re scratching your head at this point, we don’t blame you. This is pretty abstract! We’ll unpack it a little below - and then we’ll explain why this is a bad idea that Google should not pursue.
But first…
Some background
When your web browser connects to a web server, it automatically sends a description of your device and browser, something like, "This session is coming from a Google Pixel 4, using Chrome version 116.0.5845.61." The server on the other end of that connection can request even more detailed information, like a list of which fonts are installed on your device, how big its screen is, and more.
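To make the above concrete, here is a minimal sketch of the voluntary disclosure described: every ordinary HTTP request carries a User-Agent header like the one named in the text (a Pixel 4 running Chrome 116). The exact string below is an illustrative reconstruction, not a verbatim Chrome value.

```python
from urllib.request import Request

# A User-Agent string of the kind the text describes: it volunteers
# the device model, OS, and browser version to every server.
ua = ("Mozilla/5.0 (Linux; Android 13; Pixel 4) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/116.0.5845.61 Mobile Safari/537.36")

# The browser attaches it automatically; here we do so by hand.
req = Request("https://example.com/", headers={"User-Agent": ua})
print(req.get_header("User-agent"))
```

The key point is that this header is composed by your browser, in software you run, before it ever leaves your machine.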
This can be good. The web server that receives this information can tailor its offerings to you. That server can make sure it only sends you file formats your device understands, at a resolution that makes sense for your screen, laid out in a way that works well for you.
But there are also downsides to this. Many sites use "browser fingerprinting" - a kind of tracking that relies on your browser's unique combination of characteristics - to nonconsensually identify users who reject cookies and other forms of surveillance. Some sites make inferences about you from your browser and device in order to determine whether they can charge you more, or serve you bad or deceptive offers.
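Fingerprinting works roughly as follows: combine the traits a browser volunteers and hash them into a stable identifier. This sketch uses illustrative trait names and values; real fingerprinting scripts collect many more signals (canvas rendering, audio stack, installed plugins).

```python
import hashlib
import json

# Traits a browser volunteers; values here are illustrative.
traits = {
    "user_agent": "Chrome/116.0.5845.61 on Android 13",
    "screen": "1080x2280",
    "fonts": ["Roboto", "Noto Sans", "Droid Sans Mono"],
    "timezone": "America/Los_Angeles",
}

# Canonicalize and hash: the same configuration always produces the
# same ID, so a site can re-identify you with no cookie at all.
canonical = json.dumps(traits, sort_keys=True).encode()
fingerprint = hashlib.sha256(canonical).hexdigest()
print(fingerprint[:16])
```

Because the identifier is derived rather than stored, clearing cookies does nothing to it; only changing the underlying traits breaks the link.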
Thankfully, the information your browser sends to websites about itself and your device is strictly voluntary. Your browser can send accurate information about you, but it doesn't have to. There are lots of plug-ins, privacy tools and esoteric preferences that you can use to send information of your choosing to sites that you don't trust.
These tools don't just let you refuse to describe your computer to nosy servers across the internet. After all, a service that has so little regard for you that it would use your configuration data to inflict harms on you might very well refuse to serve you at all, as a means of coercing you into giving up the details of your device and software.
Instead, privacy and anti-tracking tools send plausible, wrong information about your device. That way, services can't discriminate against you for choosing your own integrity over their business models.
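The "plausible, wrong information" strategy can be sketched like this: rather than refusing to answer (which invites blocking), the tool reports a common, generic profile so the site sees an unremarkable visitor. The profiles below are illustrative stand-ins, not the actual behavior of any particular privacy extension.

```python
import random

# A small pool of common configurations; reporting one of these
# makes you look like millions of other visitors.
COMMON_PROFILES = [
    {"user_agent": "Chrome 116 on Windows 10", "screen": "1920x1080"},
    {"user_agent": "Safari 16 on macOS 13", "screen": "1440x900"},
    {"user_agent": "Firefox 116 on Windows 10", "screen": "1366x768"},
]

def spoofed_profile(rng: random.Random) -> dict:
    """Pick a plausible-but-wrong profile to report this session."""
    return rng.choice(COMMON_PROFILES)

print(spoofed_profile(random.Random(42)))
```

Note that this only works because the report is composed in software you control; WEI's whole purpose is to take that composition out of your hands.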
That's where remote attestation comes in.
Secure computing and remote attestation
Most modern computers, tablets and phones ship from the factory with some kind of "secure computing" capability.
Secure computing is designed to be a system for monitoring your computer that you can't modify or reconfigure. Originally, secure computing relied on a second processor - a "Trusted Platform Module" or TPM - to monitor the parts of your computer you directly interact with. These days, many devices use a "secure enclave" - a hardened subsystem that is carefully designed to ensure that it can only be changed with the manufacturer’s permission.
These security systems have lots of uses. When you start your device, they can watch the boot-up process and check each phase of it to ensure that you're running the manufacturer's unaltered code, and not a version that's been poisoned by malicious software. That's great if you want to run the manufacturer's code, but the same process can be used to stop you from intentionally running different code, say, a free/open source operating system, or a version of the manufacturer's software that has been altered to disable undesirable features (like surveillance) and/or enable desirable ones (like the ability to install software from outside the manufacturer's app store).
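Verified boot, reduced to its essentials, looks like this sketch: each stage of the boot chain is hashed and compared against a manufacturer-approved value before it is allowed to run. The stage names and "code" here are toy placeholders; real implementations verify cryptographic signatures over firmware images.

```python
import hashlib

# Toy boot stages; real ones are bootloader and kernel images.
stages = {
    "bootloader": b"bootloader code v1",
    "kernel": b"kernel code v1",
}

# Approved hashes, baked in at the factory in a real device.
approved = {name: hashlib.sha256(code).hexdigest()
            for name, code in stages.items()}

def boot_allowed(name: str, code: bytes) -> bool:
    """Run a stage only if its hash matches the approved value."""
    return hashlib.sha256(code).hexdigest() == approved[name]

print(boot_allowed("kernel", b"kernel code v1"))  # unmodified: True
print(boot_allowed("kernel", b"kernel code v2"))  # altered: False
```

The same check that blocks malware also blocks a free/open source OS you chose deliberately: the mechanism cannot tell a hostile modification from an intentional one.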
Beyond controlling the code that runs on your device, these security systems can also provide information about your hardware and software to other people over the internet. Secure enclaves and TPMs ship with cryptographic "signing keys." They can gather information about your computer - its operating system version, extensions, software, and low-level code like bootloaders - and cryptographically sign all that information in an "attestation."
These attestations change the balance of power when it comes to networked communications. When a remote server wants to know what kind of device you're running and how it's configured, that server no longer has to take your word for it. It can require an attestation.
Assuming you haven't figured out how to bypass the security built into your device's secure enclave or TPM, that attestation is a highly reliable indicator of how your gadget is set up.
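The attestation exchange described above can be sketched as follows. Real attestations are signed with an asymmetric key sealed inside the TPM or secure enclave; this sketch substitutes a shared HMAC key purely to show the shape of the exchange, not the actual cryptography.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a key sealed in hardware.
DEVICE_KEY = b"factory-provisioned-secret"

def attest(device_state: dict) -> dict:
    """The enclave signs a description of the device's configuration."""
    payload = json.dumps(device_state, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"state": device_state, "sig": sig}

def server_verify(report: dict) -> bool:
    """The remote server checks the signature over the claimed state."""
    payload = json.dumps(report["state"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(report["sig"], expected)

report = attest({"os": "StockOS 14.1", "bootloader": "locked"})
print(server_verify(report))                # True: state is as signed

report["state"]["bootloader"] = "unlocked"  # owner edits the claim
print(server_verify(report))                # False: signature no longer matches
```

The asymmetry is the point: because the signing key lives in hardware you cannot inspect or change, you can no longer edit the report about your own machine, and the server no longer has to take your word for anything.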
What's more, altering your device's TPM or secure enclave is a legally fraught business. Laws like Section 1201 of the Digital Millennium Copyright Act as well as patents and copyrights create serious civil and criminal jeopardy for technologists who investigate these technologies. That danger gets substantially worse when the technologist publishes findings about how to disable or bypass these secure features. And if a technologist dares to distribute tools to effect that bypass, they need to reckon with serious criminal and civil legal risks, including multi-year prison sentences.
WEI? No way!
This is where the Google proposal comes in. WEI is a technical proposal to let servers request remote attestations from devices, with those requests being relayed to the device's secure enclave or TPM, which will respond with a cryptographically signed, highly reliable description of your device. You can choose not to send this to the remote server, but you lose the ability to send an altered or randomized description of your device and its software if you think that's best for you.
In their proposal, the Google engineers claim several benefits of such a scheme. But, despite their valiant attempts to cast these benefits as accruing to device owners, these are really designed to benefit the owners of commercial services; the benefit to users comes from the assumption that commercial operators will use the additional profits from remote attestation to make their services better for their users.
For example, the authors say that remote attestations will allow site operators to distinguish between real internet users who are manually operating a browser, and bots who are autopiloting their way through the service. This is said to be a way of reducing ad-fraud, which will increase revenues to publishers, who may plow those additional profits into producing better content.
They also claim that attestation can foil “machine-in-the-middle” attacks, where a user is presented with a fake website into which they enter their login information, including one-time passwords generated by a two-factor authentication (2FA) system, which the attacker automatically enters into the real service’s login screen.
They claim that gamers could use remote attestation to make sure the other gamers they’re playing against are running unmodified versions of the game, and not running cheats that give them an advantage over their competitors.
They claim that giving website operators the power to detect and block browser automation tools will let them block fraud, such as posting fake reviews or mass-creating bot accounts.
There’s arguably some truth to all of these claims. That’s not unusual: in matters of security, there are often ways in which indiscriminate invasions of privacy and compromises of individual autonomy would blunt some real problems.

Putting handcuffs on every shopper who enters a store would doubtless reduce shoplifting, and stores with less shoplifting might lower their prices, benefitting all of their customers. But ultimately, shoplifting is the store’s problem, not the shoppers’, and it’s not fair for the store to make everyone else bear the cost of resolving its difficulties.
WEI helps websites block disfavored browsers
One section of Google’s document acknowledges that websites will use WEI to lock out browsers and operating systems that they dislike, or that fail to implement WEI to the website’s satisfaction. Google tentatively suggests (“we are evaluating”) a workaround: even once Chrome implements the new technology, it would refuse to send WEI information from a “small percentage” of computers that would otherwise send it. In theory, any website that refuses visits from non-WEI browsers would wind up also blocking this “small percentage” of Chrome users, who would complain so vociferously that the website would have to roll back their decision and allow everyone in, WEI or not.
The problem is, there are lots of websites that would really, really like the power to dictate what browser and operating system people can use. Think “this website works best in Internet Explorer 6.0 on Windows XP.” Many websites will consider that “small percentage” of users an acceptable price to pay, or simply instruct users to reset their browser data until a roll of the dice enables WEI for that site.
Also, Google has a conflict of interest in choosing the “small percentage.” Setting it very small would benefit Google’s ad fraud department by authenticating more ad clicks, allowing Google to sell those ads at a higher price. Setting it high makes it harder for websites to implement exclusionary behavior, but doesn’t directly benefit Google at all. It only makes it easier to build competing browsers. So even if Google chooses to implement this workaround, their incentives are to configure it as too small to protect the open web.
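The proposed holdback can be sketched as a simple per-(install, site) lottery: for a "small percentage" of pairs, the browser declines to send WEI even when it could, so sites can't safely require it. The constant name, percentage, and hashing scheme below are illustrative assumptions, not details from Google's proposal.

```python
import hashlib

HOLDBACK_PCT = 5  # hypothetical value: the knob Google would control

def wei_available(install_seed: str, site: str) -> bool:
    """Deterministically withhold WEI for ~HOLDBACK_PCT% of pairs."""
    h = hashlib.sha256(f"{install_seed}:{site}".encode()).digest()
    return int.from_bytes(h[:8], "big") % 100 >= HOLDBACK_PCT

# Across many installs, roughly HOLDBACK_PCT% never attest to a site,
# so a site that hard-requires WEI loses those visitors.
withheld = sum(not wei_available(f"install-{i}", "example.com")
               for i in range(10000))
print(withheld)  # ~5% of 10000
```

The article's conflict-of-interest argument is precisely about the value of that one constant: set it near zero and the holdback protects nobody; set it high and it stops helping Google's ads business.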
You are the boss of your computer
Your computer belongs to you. You are the boss of it. It should do what you tell it to.
We live in a wildly imperfect world. Laws that prevent you from reverse-engineering and reconfiguring your computer are bad enough, but when you combine that with a monopolized internet of “five giant websites filled with screenshots of text from the other four,” things can get really bad.
A handful of companies have established chokepoints between buyers and sellers, performers and audiences, workers and employers, as well as families and communities. When those companies refuse to deal with you, your digital life grinds to a halt.
The web is the last major open platform left on the internet - the last platform where anyone can make a browser or a website and participate, without having to ask permission or meet someone else’s specifications.
You are the boss of your computer. If a website sets up a virtual checkpoint that says, “only approved technology beyond this point,” you should have the right to tell it, “I’m no piece of string, I’m a frayed knot.” That is, you should be able to tell a site what it wants to hear, even if the site would refuse to serve you if it knew the truth about you.
To their credit, the proposers of WEI state that they would like for WEI to be used solely for benign purposes. They explicitly decry the use of WEI to block browsers, or to exclude users for wanting to keep their private info private.
But computer scientists don't get to decide how a technology gets used. Adding attestation to the web carries the completely foreseeable risk that companies will use it to attack users' right to configure their devices to suit their needs, even when that conflicts with tech companies' commercial priorities.
WEI shouldn't be made. If it's made, it shouldn't be used.
So what?
So what should we do about WEI and other remote attestation technologies?
Let's start with what we shouldn't do. We shouldn't ban remote attestation. Code is speech and everyone should be free to study, understand, and produce remote attestation tools.
These tools might have a place within distributed systems - for example, voting machine vendors might use remote attestation to verify the configuration of their devices in the field. Or at-risk human rights workers might send remote attestations to trusted technologists to help determine whether their devices have been compromised by state-sponsored malware.
But these tools should not be added to the web. Remote attestations have no place on open platforms. You are the boss of your computer, and you should have the final say over what it tells other people about your computer and its software.
Companies' problems are not as important as their users' autonomy
We sympathize with businesses whose revenues might be impacted by ad-fraud, game companies that struggle with cheaters, and services that struggle with bots. But addressing these problems can’t come before the right of technology users to choose how their computers work, or what those computers tell others about them, because the right to control one’s own devices is a building block of all civil rights in the digital world.
An open web delivers more benefit than harm. Letting giant, monopolistic corporations overrule our choices about which technology we want to use, and how we want to use it, is a recipe for solving those companies' problems, but not their users'.

I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)
Updated: August 8, 2023
First Published: August 7, 2023 by Jane Friedman
Update (afternoon of Aug. 7): Hours after this post was published, my official Goodreads profile was cleaned of the offending titles. I did file a report with Amazon, complaining that these books were using my name and reputation without my consent. Amazon’s response: “Please provide us with any trademark registration numbers that relate to your claim.” When I replied that I did not have a trademark for my name, they closed the case and said the books would not be removed from sale.
Update (morning of Aug. 8): The fraudulent titles appear to be entirely removed from Amazon and Goodreads alike. I’m sure that’s in no small part due to my visibility and reputation in the writing and publishing community. What will authors with smaller profiles do when this happens to them? If you ever find yourself in a similar situation, I’d start by reaching out to an advocacy organization like The Authors Guild (I’m a member).
Update (evening of Aug. 8): Since these fake books have been removed, I’ve added titles and screenshots below, as well as an explanation of why I believe the books are AI generated.
There’s not much that makes me angry these days about writing and publishing. I’ve seen it all. I know what to expect from Amazon and Goodreads. Meaning: I don’t expect much, and I assume I will be continually disappointed. Nor do I have the power to change how they operate. My energy-saving strategy: move on and focus on what you can control.
That’s going to become much harder to do if Amazon and Goodreads don’t start defending against the absolute garbage now being spread across their sites.
I know my work gets pirated and frankly I don’t care. (I’m not saying other authors shouldn’t care, but that’s not a battle worth my time today.)
But here’s what does rankle me: garbage books getting uploaded to Amazon where my name is credited as the author, such as:
- A Step-by-Step Guide to Crafting Compelling eBooks, Building a Thriving Author Platform, and Maximizing Profitability
- How to Write and Publish an eBook Quickly and Make Money
- Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon
- Publishing Power: Navigating Amazon’s Kindle Direct Publishing
- Igniting Ideas: Your Guide to Writing a Bestseller eBook on Amazon
Whoever’s doing this is obviously preying on writers who trust my name and think I’ve actually written these books. I have not. Most likely they’ve been generated by AI. (Why do I think this? I’ve used these AI tools extensively to test how well they can reproduce my knowledge. I also do a lot of vanity prompting, like “What would Jane Friedman say about building author platform?” I’ve been blogging since 2009—there’s a lot of my content publicly available for training AI models. As soon as I read the first pages of these fake books, it was like reading ChatGPT responses I had generated myself.)
It might be possible to ignore this nonsense on some level since these books aren’t receiving customer reviews (so far), and mostly they sink to the bottom of search results (although not always). At the very least, if you look at my author profile on Amazon, these junk books don’t appear. A reader who applies some critical thinking might think twice before accepting these books as mine.
Still, it’s not great. And it falls on me, the author—the one with a reputation at stake—to get these misleading books removed from Amazon. I’m not even sure it’s possible. I don’t own the copyright to these junk books. I don’t exactly “own” my name either—lots of other people who are also legit authors share my name, after all. So on what grounds can I successfully demand this stop, at least in Amazon’s eyes? I’m not sure.
To add insult to injury, these sham books are getting added to my official Goodreads profile. A reasonable person might think I control what books are shown on my Goodreads profile, or that I approve them, or at the very least I could have them easily removed. Not so.
If you need to correct which books are credited to you on your Goodreads profile, you have to reach out to volunteer “librarians” on Goodreads, which requires joining a group and then posting in a comment thread asking for the illegitimate books to be removed from your profile.
When I complained about this on Twitter/X, an author responded that she had to report 29 illegitimate books in the last week alone. 29!
With the flood of AI content now published at Amazon, sometimes attributed to authors in a misleading or fraudulent manner, how can anyone reasonably expect working authors to spend every week for the rest of their lives policing this? And if authors don’t police it, they will certainly hear about it, from readers concerned about these garbage books, and from readers who credulously bought this crap and have complaints. Or authors might not hear anything at all, and lose a potential reader forever.
We desperately need guardrails on this landslide of misattribution and misinformation. Amazon and Goodreads, I beg you to create a way to verify authorship, or for authors to easily block fraudulent books credited to them. Do it now, do it quickly.
Unfortunately, even if and when you get these insane books removed from your official profiles, they will still be floating around out there, with your name, on two major sites that get millions of visitors, just waiting to be “discovered.” And there’s absolutely nothing you can do about it.