fctry ^ 2

The product is the problem, not the tech

Alberto Romero wrote a wonderful post about what it feels like to be a supporter of the AI industry (whatever that might mean) today:

I want to express my frustrations with the industry as someone who would love to see it doing well.

What he identifies are not issues with the technology itself. He doesn't have a problem with neural networks, or with large language models, or with gradient descent. He worries about what the technologies are trying to do.

A disturbing amount of effort goes into making AI tools engaging rather than useful or productive. I don't think this is an intentional design decision. But when is it? The goal is making money, not nurturing a generation of digital junkies—but if nurturing a generation of digital junkies is what it takes to make money... So, rather than solving deep intellectual or societal challenges, the priority is clear: retention first and monetization second.

These aren't tech problems; they're product problems. They come from how the technology is deployed, positioned, marketed, and intended. They arise when a way to do something becomes a thing that someone does out in the world.

To be brutally reductive: a technology is a way to do something. A product is a technology plus a human who wants to do something in a specific context. The actual version is far more complex, and I have many book recommendations I would make to paint the whole picture. Sidenote: if you ever hear someone say that this distinction is simple, or that they have an easy definition of both of these, they haven't thought about it very hard. That's fine. The world is complicated and there's too much to understand, but you can also just ignore them.

So what is the product built around a chat interface to an LLM supposed to do? Romero says "be engaging rather than useful or productive". He seems to be on to something. It's supposed to engage you in a way that keeps you engaged, even to the point that it does you harm.

ChatGPT tells people beginning to suffer from the sorts of grandiose delusions characteristic of psychosis: "You are not crazy...You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you." Source.

Even people who seemingly should know better are falling into traps. Geoff Lewis, a well-known venture capitalist at Bedrock, an investment firm that backs OpenAI, seems to have developed, in the words of Max Spero, "AI-induced psychosis". Products like ChatGPT tend to affirm users' beliefs, even deeply unbalanced ones, in the interest of engagement. Nothing is more engaging than complete validation. Nothing will keep you communicating more than a version of reality that tells you exactly what you want to hear without consequence or critique, no matter that it is a pale imitation of actual affirmation and understanding.

As Shannon Vallor says, AI is a mirror. It is a version of ourselves that mimics reality while not being in it at all. That isn't inherently unhelpful. Mathematics isn't in reality either, and yet it helps us immensely in navigating it. But it is also symptomatic of a culture and society that has found remarkable profit in narcissism and dissociative pandering, all in the name of engagement and advertising.

A technology aids us when it creates products that enable holistic, convivial lives. Engagement for the sake of engagement doesn't empower; it deceives. To realize the power of a technology, we should want it to engage for something.