Thanks for the link. I missed that original discussion.
It’s fascinating to read the 2023 takes now that we’re actually living through the scaling phase he predicted. The concept of AI betrayal feels even more relevant today than it did then.
I strongly dislike that this title has been editorialized (it's presently titled "Bruce Schneier: AI and the scaling of betrayal"). From the guidelines:
> please use the original title, unless it is misleading or linkbait; don't editorialize.
The title should be "AI and Trust", or "AI and Trust (2023)"
It's crazy that the marketplace seems to be an ongoing experiment in maximizing the number of times a company can defect while minimizing consumer anger, exploiting assumptions of trust and good faith as often as possible without causing the consumer to defect completely. And it appears they've optimized it: we put up with shrinkflation, industrial waste repurposed as filler, processed ingredients derived from industrial waste, high-quality products debased and degraded until all that remains is a memory of a flavor and the general shape, color, and texture. Big Ag factory farming, pharma, healthcare products, all the rest: you think you can trust that a thing is the thing it's always been and we all assume it is, but nope.
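To make that dynamic concrete, here's a toy model (my own caricature, not anything from the essay; every number is invented):

    # Toy iterated game: a firm "defects" (degrades the product) a little each
    # round; the consumer only walks away once accumulated annoyance crosses a
    # tolerance threshold. All parameters are made up for illustration.
    def firm_profit(defect_level, rounds=100, tolerance=10.0, decay=0.95):
        annoyance, profit = 0.0, 0.0
        for _ in range(rounds):
            annoyance = annoyance * decay + defect_level  # anger fades a bit each round
            if annoyance > tolerance:                     # the consumer finally defects
                break
            profit += 1.0 + defect_level                  # each defection pads the margin
        return profit

    # The profit-maximizing strategy sits exactly at the tolerance limit:
    best = max((d / 100 for d in range(200)), key=firm_profit)
    print(best)  # 0.5 -- just enough betrayal to keep the customer from leaving

The optimum isn't "don't defect", and it isn't "defect maximally" either; it's "defect at the highest level the consumer will absorb", which is exactly what the shelves look like.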
Scratch any surface and the gilt flakes off; almost nothing can be trusted anymore. The last 30-40 years consolidated a whole lot of number-go-up, profit-at-any-cost, ruthless exploitation. Nearly every market, business, and product in the US has been converted into some pitiful, profit-optimal caricature of what quality should look like.
AI is just the latest on a long, long list of things that you shouldn't trust, by default, unless you have explicit control and do it yourself. Everywhere else, everything that matters will be useful to you iff there's no cost or leverage lost to the provider.
That profit-optimal caricature is what we call moral hazard in risk management. When a system is optimized purely for short-term extraction, it offloads the long-term tail risk onto the consumer or society.
We see this with cheap IoT devices that get zero security updates: the manufacturer saves $0.50 on a chip, and the consumer eventually pays for it in identity theft or botnet attacks. It’s an externality that isn't priced in.
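A back-of-envelope version (every figure below is invented for illustration; only the $0.50 is from my example above):

    # Invented numbers, purely illustrative: the per-unit saving and the
    # externalized cost never appear on the same balance sheet.
    units           = 10_000_000   # cheap IoT devices shipped
    saving_per_unit = 0.50         # skipped secure chip / patch budget
    breach_rate     = 0.001        # fraction of owners eventually harmed
    cost_per_breach = 2_000.0      # identity-theft cleanup, botnet damage, etc.

    manufacturer_gain = units * saving_per_unit                # kept by the firm
    externalized_cost = units * breach_rate * cost_per_breach  # paid by everyone else
    print(f"${manufacturer_gain:,.0f} saved vs ${externalized_cost:,.0f} externalized")

As long as liability never moves that second number onto the manufacturer's books, the $0.50 "saving" stays perfectly rational.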
The "meta" has been solved and everyone's just min-maxing now. The few who aren't min-maxing are considered a waste.
AI, crypto, etc. feel like potential new meta opportunities, and it's eerie how similar the mania is to whenever a major new patch for a game is released. Everyone immediately starts exploring how to exploit and min-max the new niche. Everyone wants to be the first to "discover" a viable meta.
Competition nowadays is so intense and fine-grained. Every new innovation or exploration is eventually folded into the existing exploits, especially in monopolistic markets. Pricing models don’t change, revenue streams don’t either, and the consumer rarely benefits from these optimisation efforts; it all just leads to greater profit margins by any means.
It sucks for the ones who just want to play the game as "intended". The min-maxers always ruin it for everyone else. The devs ultimately balance the game around the few percent who min-max, and everyone else just has to deal with it or stop playing. And then they say "don't blame the players, blame the game", but the game is literally being warped because of the players.
Also, often the new meta doesn't even make sense and the changes need to be rolled back. So all that pain and hassle will often be for nothing, but a lot of players will be left with a bad taste for the game altogether. The damage has been done, and a rollback can't fix it.
I can't accept this strange definitional divide between interpersonal trust and social trust. Trust is an infinitely grey experience, and it varies from situation to situation and from time to time.
Trust is just a word we use to describe how confident we are that the future will correspond to our expectations. Friends can lose the money you gave them to buy something, credit card machines can fail, AIs can order you the wrong product, I could get in a car accident on the way to the store. Do I "trust" that these schemes will go smoothly? Well, mostly (except the AI one).
I don't see a category error because there aren't categories here.
It's absolutely the correct distinction to draw. AI will be able to present as a person while behaving as a system. We don't have intuitions for dealing with that.
This to me is the most important point in the whole text:
"We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants."
I hadn't thought about it like that, but I think it's a great way to legislate.
The fiduciary model is the only regulatory framework that actually scales. In insurance (my field), we see the difference daily: a captive agent works for the carrier while an independent broker often has a pseudo-fiduciary duty to the client.
If we applied that to data, your AI assistant would legally have to prioritize your privacy over the vendor's ad revenue. Right now, the incentives are completely inverted.
A computer guy on a policy wonk reading diet makes for boring reading.
Policy wonks are often systemizers who think of society as a machine. That’s why they take the intuitive concept of informal everyday rituals like queueing (it scarcely even needs explaining) and repackage it as yesteryear’s buzzword “trust”. We don’t need extrinsic rewards to queue politely. Amazing?
A computer guy is gonna take that and explain to us, of course, that society is like a machine. Running on trust. That’s the oil, or whatever. Because there aren’t enough formal transactions to explain all the minute well-behavedness.
Then condescend about how we think of (especially) corporations as friends. Sigh.
What policy wonks are intentionally blind to are all the people who “trust” by not making a fuss. By just going along with it. Apathy and resignation to your fate look the same as trust from an affluent, picket-fence distance. Or like being a naive friend to corporations.
The conclusion is as exciting as the thesis. Status quo with bad bad corporations. But the government must regulate the bad corporations.
I’m sure I’ve commented on this before. But anyway. Another round.
Title should be: AI and Trust