Book review: The Emotion Machine

I’m hoping you know what an ‘app’ is: it’s the marketing term for what we used to call a ‘computer program.’ You might have apps on your phone (if you have a phone). I think it’s the most common name today for the thing inside a computer that actually does something – like show pictures or make a noise.

A guy named Marvin Minsky spent decades inventing smart software and thinking about non-human intelligence. He wrote an influential book in 1986 describing ‘mind’ as a huge number of little, independent apps, each doing simple things with no knowledge of the zillions of other apps in a mind. All the little apps are organized in a sort of hierarchy – a ‘society of mind’ – where they interact and respond to sensations (and messages from other apps) in a way that makes us think we are somebody. His more recent book, The Emotion Machine, is a free-for-all speculation about how elaborations on his earlier ideas just might describe what makes us conscious.

It’s hard for me to imagine myself as a bunch of little apps all running at once (or to imagine myself as a single program, or a soul, or anything at all, really). It’s probably even tougher if you don’t know a lot about software (which I do). But let me try:

First, imagine that other creatures, less complicated than you are, might not be ‘aware’ of what they’re doing. When a cat sees a mouse, a bunch of impulses and instincts surge through its head – and it does something. Let’s say all these impulses are directly attached to the mechanics of what it does, so we don’t really have to think the cat ‘decides’ what to do. It just gets impulses: ‘fun!’ ‘food!’ ‘just ate’ ‘dog coming around the corner.’

Once the impulse contest resolves itself in the cat’s head, it either does or does not chase the mouse. I don’t have to believe the cat is ‘thinking’ about what it’s doing, because I can imagine its observations are directly wired to its actions: ‘see mouse, if hungry, chase; see dog, run.’ The cat simply exists – a pure manifestation of its impulses (if you have a cat, you may doubt this. Pretend I’m describing a dog).
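In software terms, the direct wiring described here is something like a contest between competing rules, where the strongest applicable impulse wins. Here’s a toy sketch of that idea – the impulse names and priorities are my own invention for illustration, not anything from the book:

```python
# Toy model of a creature whose observations are wired directly to
# actions: no deliberation, just a contest between competing impulses.
# All rules and priorities here are illustrative, not from Minsky.

RULES = [
    # (condition, action, priority) -- the highest-priority firing rule wins
    (lambda s: s["sees_dog"],                   "run",   3),
    (lambda s: s["sees_mouse"] and s["hungry"], "chase", 2),
    (lambda s: s["sees_mouse"],                 "play",  1),
]

def react(stimuli):
    """Resolve the 'impulse contest': every matching rule fires,
    and the one with the highest priority drives the muscles."""
    fired = [(prio, action) for cond, action, prio in RULES if cond(stimuli)]
    if not fired:
        return "idle"
    return max(fired)[1]

print(react({"sees_mouse": True, "hungry": True, "sees_dog": False}))  # chase
print(react({"sees_mouse": True, "hungry": True, "sees_dog": True}))   # run
```

Note there is no ‘deciding’ anywhere in this – the dog simply outranks the mouse, the way the review imagines the cat’s wiring working.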

Certain cat brain apps receive the messaged image of the mouse. Responsive cat brain apps put muscles into motion. The whole event passes through the cat and disappears. In Minsky’s view, a great difference for humans is that much more of the event passes through the brain into memory – and does not disappear. It’s available to the ‘mind’ as if it were happening all over again. And, we can imagine, our response apps can react not just to the physical appearance of a mouse, but to a memory that produces exactly the same responses the original appearance did.

Minsky thinks this matters a lot. Humans have apps that receive perceptions and compare them to remembered responses. The whole process of deciding to chase or not chase becomes much simpler. We review other memories of seeing a mouse – it’s too far away, it sees us already. We can think it through before getting too excited.
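The difference memory makes can be sketched in code, too: instead of a reflex, the deciding app reviews remembered episodes of similar events and their outcomes before committing. The episode format and the situations below are my own illustration, not Minsky’s:

```python
# Illustrative only: a decision app that consults memories of similar
# events before acting, rather than reacting reflexively.

MEMORIES = [
    # (situation, action taken, how it turned out)
    ({"mouse_distance": "near", "mouse_alert": False}, "chase", "caught"),
    ({"mouse_distance": "far",  "mouse_alert": False}, "chase", "escaped"),
    ({"mouse_distance": "near", "mouse_alert": True},  "chase", "escaped"),
]

def decide(situation):
    """Chase only if memory says a similar situation ended well;
    with no matching memory, gather more information first."""
    for past, action, outcome in MEMORIES:
        if past == situation:
            return action if outcome == "caught" else "wait"
    return "investigate"

print(decide({"mouse_distance": "near", "mouse_alert": False}))  # chase
print(decide({"mouse_distance": "far",  "mouse_alert": False}))  # wait
```

The point of the sketch is the review’s: the remembered mouse that was too far away, or already watching, lets us think it through before getting too excited.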

Minsky thinks being us is basically about responding to things – lots of things – like seeing a mouse. We’re alive because we evolved to be here, and what we do is confront events. We have special little apps for just about every kind of event you can think of (well, exactly every kind of event you can think of, actually). We use them to organize memories of responses and compare all our incoming messages to these to help select the best ways to proceed.

Lower level apps process signals from, say, the eyeballs. Higher level apps recognize the mouse. We have seen a number of things that look like a mouse, and recognize this mouse as real by deferring to a higher level ‘recognition’ app. The recognition app is constantly augmented by history, and responds to messages according to what has been seen before. If the recognition app receives messages describing something unidentifiable, it sends its own messages to still higher level apps that review and compare memories of other things, selecting some action to best assess the significance of the something new.
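This hierarchy – low-level apps passing messages up, recognition apps handling what they know and escalating what they don’t – is easy to caricature in a few lines of code. A minimal sketch, assuming a simple chain of apps (the class, names, and messages are mine, not Minsky’s):

```python
# A toy 'society' of recognition apps: each app handles messages it
# recognizes and escalates anything unfamiliar to a higher-level app.
# Entirely illustrative; Minsky's agents are far richer than this.

class App:
    def __init__(self, known, parent=None):
        self.known = known      # patterns this app recognizes from history
        self.parent = parent    # higher-level app to escalate to

    def handle(self, message):
        if message in self.known:
            return f"recognized: {message}"
        if self.parent:
            return self.parent.handle(message)  # pass it up the hierarchy
        # Top of the hierarchy and still unidentified: time to review
        # and compare memories of other things.
        return f"novel: {message} (compare memories, assess significance)"

top       = App(known={"animal"})                  # highest level: broad categories
recognize = App(known={"mouse", "dog"}, parent=top)
low_level = App(known=set(), parent=recognize)     # raw signals go straight up

print(low_level.handle("mouse"))  # recognized: mouse
print(low_level.handle("ghost"))  # novel: ghost ...
```

The familiar mouse never bothers the higher levels at all; only the unidentifiable something climbs the whole chain – which is roughly the next paragraph’s point about when ‘thinking’ happens.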

In doing these comparisons, our apps accomplish a great deal without ‘thinking’ at all. Anything that resides in memory and can be managed by familiar patterning requires no further attention. Our brains are essentially lots of different recognition apps, waiting for and responding to messages. But lots of events don’t exactly match anything we remember. Responding to the streams of unfamiliar messages requires a great many comparisons and ‘best’ selections. The collective sensation of all this activity becomes, in humans, ‘thinking.’

The relatively few apps at the very highest level manage very complex patterns indeed. They draw on representations of entire people held in memory, in myriad contexts, and they respond to events by selecting behaviors appropriate to the memories associated with the experience at hand. They review available knowledge and they ‘decide.’

Now, ‘deciding’ is about acting based on what we know. Since Minsky is a computer programmer, he believes we’re rational, and if a cat makes the decision to jump on a mouse, it’s because that is the most likely way to successfully eat it. Our apps are rational apps – they do things because there are reasons. The problem is, sometimes – even often – we can’t work through all our memories and process all our comparisons quickly enough to know what our reasons were. We may not even remember the right thing to do. We have to act anyway, under exigency. That is our existence. If we survive, experience gets added to our memory.

As events arise and our apps continue to select amongst options, we develop a growing body of knowledge about experiences whose causes seem unknown. We don’t have time or information enough to do something we understand, but we select action anyway. As these memories accumulate and become organized, we search for a label to apply to the recurring, ‘unknown’ cause – the absence of rational impetus – from which so many actions nonetheless seem to flow. Our apps attempt to treat this unknown as a cause in itself. We call it ‘I.’ We call it ‘me.’

Minsky’s a scientist. He believes everything does, really, have a determinable cause. That is, if you were going to roll dice, and you really knew, exactly, all the initial conditions about the dice, and all the forces in all the directions that would be applied as the dice were rolled – then you really would know how the dice would land. In the same way, if you really had sufficient time and information, you could always make the ‘right’ decision. But you don’t. You get tired of thinking, or frustrated, or desperate, and you act. So the act gets stored with all those whose causes are unknown.

Responding to these difficult events is costly. Our minds are complicated. They confront a complicated world. Some problems are, indeed, intractable to straightforward, logical thinking. Minsky thinks confronting such events triggers so many simultaneously chattering apps that these ‘cascades’ themselves have been organized into ways of thinking – emotions. He views emotions as highly evolved choice mechanisms for dealing with the most difficult situations. Almost necessarily, the sources of such actions are assigned to the unknown. It’s just too difficult to explore and manage all the facts behind such decisions.

So in Minsky’s view, ‘free will’ is simply not knowing how you made up your mind. He is, of course, marketing a few decades’ worth of personal research and theorizing, and he wouldn’t mind sensationalizing it a little. He likes to repeat that the traditional scientific approach of searching for simple, unifying theories just can’t work with human consciousness, because it’s made up of too many moving parts. But then, he announces his own simple theory: that it’s a hierarchy of moving parts.

The wonderful thing about scientific ‘consciousness’ theories, I find, is they construct something comprehensible to explain being me. Religion is incomprehensible (I am a Christian). Mysticism is, of course, mystical. There are suggestions a lifetime given to certain sorts of contemplation will reveal the truth. Naturally, it’s impossible not to be suspicious of those. So science is wonderfully concrete: take these building blocks and arrange them in these shapes, and you will always produce a mind.

Still. Though. Yet. Lots of intense little processes flipping lots and lots of little switches. We can display representations of these with flashing dots and whirling circles, and we can say, ‘that’s what thinking looks like.’ But there’s that leap: awareness doesn’t reach backwards into an impression of its own mechanics. It hasn’t proven to itself (that is, to me) it is what it claims to be.
