Alex Taylor: What artificial intelligence could be (from 2016)
In our visions of how humans interact with computers and varieties of artificial intelligence (AI), our benchmark for what we want our machines to behave like keeps coming back to a peculiar notion of human intelligence. We build machines with narrowly human properties, skills and behaviours – chess players, Go players, conversational agents, car drivers, robots that look and act like humans.
To me, this seems a terribly restrictive idea of intelligence, one that constrains what innovation in machine learning and AI could offer and, ultimately, how AI might make our lives better were we to think a little differently. To be blunt: our clunky ideas of human intelligence, smartness, emotion and so on are things we’re so deeply invested in, intellectually and culturally, that they distract us from far more promising possibilities.
To think anew, I want to experiment a little with some questions about what we think intelligence is, and what kind of intelligence we might want in the machines we’re building and that we might eventually come to live with. Through a different way of viewing the challenges, we can ask: what other kinds of intelligence can we imagine?
We build machines with narrowly human properties, skills and behaviours – this is a terribly restrictive idea of intelligence
Let me begin with a parable of sorts, one that might seem to diverge from the topic at hand, but I hope to show it has an apposite lesson.
For decades, animal behaviourists have invested great research energies in assessing whether birds can talk and whether, with that talk, they exhibit higher-functioning cognitive abilities, ones closer to our own. In the laboratory – through all sorts of experimental configurations – mynah birds, parrots, macaws and so on have been pushed and prodded to talk.
What’s hardly surprising is that the results point towards the conclusion that birds have a less sophisticated cognitive capacity than other highly evolved nonhumans such as primates and, of course, ourselves. In experimental conditions, birds perform badly; in fact, they do their utmost to sabotage the equipment. (This anecdote is discussed in the work of the Belgian philosopher Vinciane Despret.) It turns out that mynahs, macaws, parrots and the like just don’t like to talk under the conditions they are subjected to.
But everyone knows that parrots talk. Outside the laboratory, it seems that if people invest in developing a relationship with these birds – one in which needs and desires are understood to be negotiated and developed over time – the birds can start to talk, and can (literally) end the day saying things like: “You be good, see you tomorrow. I love you.”
Now, the point here isn’t yet another argument for or against anthropomorphising birds, animals or even machines. In the case of birds, it’s not whether we can generalise to say that birds are intrinsically like or unlike humans, or whether they can talk like humans. Rather, can we ask what the conditions are through which we can begin to talk with them, and through which they might talk back?
Can we then ask what the conditions would be for something akin to intelligence to surface in our human-machine interactions? What questions do we need to ask of the things we interact with to allow an intelligence to surface? It’s this turn towards the conditions that are created, and towards what might just be possible, that invites us to ask some very different questions: questions not about some intrinsic quality of animal or machine intelligence, but about humans and nonhumans together, about a wider set of relations and entanglements that bring intelligence into being.
So what might these different conditions be? With questions like these, I think we open ourselves up to a vast array of possibilities, but here’s just one idea to illustrate. I want to suggest that, through the infrastructural capacities of vastly distributed systems and the production of data, we could begin to see the conditions for difference.
Take the example of the widely reported glitch in IP geolocation from MaxMind, a US-based digital mapping company that provides location data for IP addresses, the unique numbers attached to devices connected to the internet. Through a system designed to locate people via their connected devices, we see how certain demographic categories are sustained and cemented: what do people living here buy? Where do nefarious activities originate? And, in some cases, we see how the technical particularities of such an algorithmic system can give rise to a so-called glitch where, because of their geographic location, people and households are inadvertently accused of criminal activity – as in the case of those unfortunate enough to live in Potwin, Kansas, near the geographic centre of the USA and so the default location assigned to unknown IP addresses. If there’s any intelligence here, it’s invested in counting and bucketing people into coarse, problematic and all too familiar socioeconomic categories.
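To make the mechanics of that glitch concrete, here is a minimal sketch of how a geolocation lookup with a country-centroid fallback behaves. This is not MaxMind’s code: the lookup table, prefixes and function are invented for illustration, and the default coordinate (38.0, −97.0) is the rounded centre-of-country point the company reportedly used at the time, which happens to fall on a farm near Potwin.

```python
# A minimal sketch of the centroid-fallback failure mode. Everything here
# (the table, the prefixes, the function) is illustrative; it is not
# MaxMind's actual implementation or data.

# IP prefixes we can resolve to (latitude, longitude) at city precision.
KNOWN_LOCATIONS = {
    "203.0.113": (51.5074, -0.1278),    # documentation prefix -> London
    "198.51.100": (40.7128, -74.0060),  # documentation prefix -> New York
}

# Reported default for US addresses that can't be resolved any further:
# a rounded point near the centre of the country, landing near Potwin, Kansas.
US_DEFAULT = (38.0, -97.0)

def geolocate(ip: str) -> tuple[float, float]:
    """Return a coordinate for an IP address.

    The trouble: when only the country is known, the country-level centroid
    is returned in the same format as a street-level fix, so millions of
    unrelated addresses 'resolve' to a single spot on the map.
    """
    prefix = ip.rsplit(".", 1)[0]  # crude /24 bucket, for illustration only
    if prefix in KNOWN_LOCATIONS:
        return KNOWN_LOCATIONS[prefix]
    return US_DEFAULT  # country-level guess, reported at full precision

if __name__ == "__main__":
    print(geolocate("203.0.113.7"))  # resolvable -> (51.5074, -0.1278)
    print(geolocate("192.0.2.55"))   # unknown -> the Potwin default
```

The design lesson is as much about honesty as accuracy: a coordinate returned without any precision or accuracy radius lets a country-level guess masquerade as someone’s front yard. The reported real-world remedy was in this spirit, with default coordinates moved into nearby bodies of water.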
What other kinds of intelligence can we imagine?
How could this configuration of geography, people and technology be changed to bring something else to the fore? How might the conditions be altered to see people not in overly simplistic and at times error-prone ways, but in ways that open us up to different and new possibilities? How might such systems be used, for instance, to understand how people could be counted differently, how new classifications might be found that open us up to the other ways we inhabit and build relations to spaces? And what if the specific algorithmic limitations of the system weren’t bracketed off and treated as noise, but used to ask who is not being counted here, and how they might be?
I wouldn’t want to pretend to have any concrete or even half-baked answers here, but I think we need to take seriously the invitation to ask different questions like these. They will help to bring out an intelligence that is more than mimicry and is invested, instead, in how we hope to live our lives. Thus we might ask: what is it each of us might learn from using a system that responds, intelligently, to the relations between ourselves and place? What, derived from a panoply of data sources and billions of human-machine interactions, might each of us develop a sense of? How might each of us truly accomplish something, together, through an emerging back and forth of engagements and interactions?
In interacting with an intelligence of this sort, we may not need to know its inner workings, just as we don’t need to know the inner workings of someone else (or for that matter of another species) to talk to them. What we do need are the conditions to actively produce something in common, to bit by bit, as Vinciane Despret says, “result in shared perspectives, intelligences and intentions, resemblances, inversions and exchanges of properties.”
This article originally appeared in The Long + Short and is reprinted under a Creative Commons license. Read the original article here.
Alex Taylor spoke on this topic at the Nesta event, Human, meet computer.
Image by Steven S. Drachman; model, Amelissa Oblige