A lot of people are quite excited about the Humane AI Pin, some are not, and quite a few more haven’t heard of it at all. If you haven’t heard about it, their website lives here. I find it a curiously beautiful little device, a stab at a new paradigm in computing which ostensibly frees us from screens and the neck pain of craning down to look at our phones. The glowing rectangle is replaced with a small ever-present device living on our lapel, pinned there somewhat like a name-tag at a networking event or a wedding.
Experimenting with new ways of using old tools is always commendable, and creating new ways of using old tools is invaluable. I have designed and built many tremendously stupid devices and prototypes in the service of learning what actually works and what does not when trying to make new ways to use old things. I want to be quite clear that the team at Humane consists of many designers who are far better at designing things than I ever will be. I think looking for ways to repurpose and re-contextualize tools which already exist is admirable as a design pursuit because it’s a significant part of the scientific side of designing. One builds, tests, repeats, experiments, tweaks, evaluates, assesses, and dreams. I prize and honor this sort of work, and in so many ways the Humane AI Pin is a beautiful exemplar of the vision and dedication required to dream up ways of building beautiful things in the service of human life.
There is another side to designing things though, a different side than the dreaming of new things, and that is the side which might prompt us to ask: what is the vision of the world which this thing proposes? What is the world in which this works at its very best? And once that world has been sketched in adequate detail, how do we feel about that world? With what is that world philosophically aligned? Or better stated, to paraphrase Victor Papanek, what needs to be true as a problem in order for this to be the solution?
There seem to be three premises to the attractiveness of this device.

The first premise is that I look at my phone too much. I cannot disagree with this. Most disappointing to me are the times that I look at it in bed, reading article after article in the New York Times or shopping for ski gear which I most certainly do not need while my beloved books sit gathering dust on the nightstand. For the generations younger than me, though, I cannot help but fear. What is a disappointingly bad habit to me may in fact be their fundamental mode of interaction with the world; it may well alter neurochemistry and behavior in ways that we cannot yet begin to fathom.

The second premise is that our interaction patterns with our technology are not, well, “humane” enough. The literal ways in which we use our devices are all thumbs, swipes, the occasional misunderstood voice command. The proposition is that these means of using our tools should be more like the ways that we casually chat with an acquaintance. If the device climbs further up out of the uncanny valley and does more of the anthropomorphizing for us, we’ll be the better for it. Our technology will be better because it will be more humane in presentation, and we will be more humane to it. Thus we can reap the affordances of further blurring the line between the way that we relate to service agents which are humans and service agents which are not.

The third premise is that a device could, and perhaps should, always be out of our pockets or purses or even our hands. It should be turned on, out, visible, and seeing. It is, in essence, a promise that with more of our time, energy, and world-view, with more of our selves under sousveillance, we will reap rewards. The underlying premise here could be both “you could use this all the time” and the corollary “you don’t use your current devices enough”.
Returning to that question I asked earlier: what problems need to be true in order for this to be the solution? Were I to phrase the three most salient characteristics of the device as a problem, I suppose I would say: I use my phone too much, the relationship which I have with my devices is not humane, and I don’t use my devices enough. How do we feel about the world in which this is true and urgent? Are these problems cogent and non-contradictory?
Leaving aside the latter two problem-statements above for the moment, I do find the first of them compelling; I do use my phone too much. It interferes with my life and my happiness. And yet I am presented with a promise that the Humane AI Pin will allow me to “take the full power of AI everywhere”. I have to wonder: in what conception of my daily life do I not already take the full power of AI everywhere with me? I carry a computer with me at all times. I am inundated with emails and texts from chatbots trying to scam me, asking me to donate, checking in on whether I’m happy with the curtains I bought three months ago. My purchase of coffee beans and organic milk feeds models of my behavior and desires at credit card companies, my grocery store, the manufacturer of my phone. My car, well, my newer car at any rate, very nearly drives itself. My employment depends in no small part on the behavior of innumerable quantitative trading models which drive our stock market and a significant portion of our economy. At what point do I personally need further intervention from a humanized AI? Which of my already innumerable interactions with AI extend my agency? Which world is it in which I think to myself: the problem with my day is that it doesn’t have enough task completion for easily automated tasks? How do I feel about that world? Is the problem of the phone that I look at a screen, or is the problem that my life is already saturated with mediation by algorithms and probabilistic models?
This is not a review of the device, as I haven’t seen it in person, nor is it a critique of the device itself. I’m sure that it is in many ways wonderfully engineered and lovingly designed. It is, instead, a kind of thinking that I’m trying to practice when I see products brought into the world, a sort of rehearsal that I want to foster both in my own design practice and in the ways that I engage with designs in the world. Part of design practice is practicing engaging with and considering objects; there are innumerable ways in which we can do this, and this short little snack of an essay is but one of them. Thank you for reading.