My personal savior

July 6th, 2014

I enjoyed a breakfast of decaffeinated Fair Trade coffee and organic dark chocolate while reading about hippocampal atrophy among born-again Christians and atheists in the southeast United States. Hippocampal atrophy is thought to have something to do with memory loss. The idea is that since most people in the southeast US are ‘born-once’ Protestants who make it tough on those who don’t think like they do, born-again Christians and atheists in the region experience long-term stress, which shrinks their hippocampi.

Halfway through my cup I was reviewing some text on how meditation is associated with big, healthy hippocampi. I’ve always liked the idea of meditation, but I’ve been put off by the sorts of people who advocate it. It seems like a sensible practice, which I understand as applying one of myriad methods to temporarily remove conscious content from my mind’s attention. This morning I thought, ‘hey, that’s sort of like praying.’

My lifelong approach to praying has been straightforward. As an early teenager, growing up during a contemporary mantra-repetition fad, I decided to make Jesus’ instruction – the so-called ‘Lord’s Prayer’ – the mantra I would repeat. I liked it not only because I thought I was Christian, but because it’s way too long and complicated to be an acceptable mantra. Ever since, I have inwardly repeated that prayer many times daily, far more frequently than you would imagine.

Well into my twenties, it was quite difficult reciting the prayer without thinking of something else. I would get to ‘give us this day’ or ‘lead us not’ and my mind would wander. I would suddenly be thinking a different thought, and I would have to begin all over. I wanted to pray it through without other conscious phrases occurring, so I had to learn, simply, to ‘not think.’ I went through long periods unable to do this. Inevitably, some idea forced itself into the prayer that I couldn’t shake. Somehow I learned how to lose the idea without shaking it at all, how to avoid without resisting, how to recognize distractions before they arrived. After many years, I learned to think only the prayer.

I have occasionally resented my commitment to this. It interferes with some things. I pray a lot while reading, and I have to pause, clear my mind of whatever I’m attending to, and devotedly repeat my silly prayer. I remember a long interval many years ago when I couldn’t begin the prayer at all without cursing Jesus Christ and the whole ridiculous business. I just let myself do the cursing and went right on with the mantra. Eventually that interruption, too, passed.

And now, lo and behold! Meditation (for perhaps I’ve been meditating all this time) may be saving my hippocampus.

Book review: The Emotion Machine

June 25th, 2014

I’m hoping you know what an ‘app’ is: it’s the marketing term for what we used to call a ‘computer program.’ You might have apps on your phone (if you have a phone). I think it’s the most common name today for what inside a computer actually does something – like show pictures or make a noise.

A guy named Marvin Minsky spent decades inventing smart software and thinking about non-human intelligence. He wrote an influential book in 1988 describing ‘mind’ as a huge number of little, independent apps, each doing simple things with no knowledge of the zillions of other apps in a mind. All the little apps are organized in a sort of hierarchy – a ‘society of mind’ – where they interact and respond to sensations (and messages from other apps) in a way that makes us think we are somebody. His more recent book is a free-for-all speculation about how elaborations on his earlier ideas just might describe what makes us conscious.

It’s hard for me to imagine myself as a bunch of little apps all running at once (or to imagine myself as a single program, or a soul, or anything at all, really). It’s probably even tougher if you don’t know a lot about software (which I do). But let me try:

First, imagine that other creatures, less complicated than you are, might not be ‘aware’ of what they’re doing. When a cat sees a mouse, a bunch of impulses and instincts surge through its head – and it does something. Let’s say all these impulses are directly attached to the mechanics of what it does, so we don’t really have to think the cat ‘decides’ what to do. It just gets impulses: ‘fun!’ ‘food!’ ‘just ate’ ‘dog coming around the corner.’

Once the impulse contest resolves itself in the cat’s head, it either does or does not chase the mouse. I don’t have to believe the cat is ‘thinking’ about what it’s doing, because I can imagine its observations are directly wired to its actions: ‘see mouse, if hungry, chase; see dog, run.’ The cat simply exists – a pure manifestation of its impulses (if you have a cat, you may doubt this. Pretend I’m describing a dog).
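The ‘directly wired’ cat can be sketched in a few lines of code. This is my own toy illustration, not anything from the book – the rule names and their priority order are assumptions I made up – but it shows what ‘observations wired to actions’ means: a fixed table, checked in order, with no deliberation step anywhere.

```python
# Toy sketch of a creature whose observations are wired straight to actions.
# The rules and their priorities are illustrative assumptions, not Minsky's.

RULES = [
    # (condition, action) pairs, checked in priority order
    (lambda s: s["sees_dog"],                    "run"),
    (lambda s: s["sees_mouse"] and s["hungry"],  "chase"),
    (lambda s: True,                             "nap"),  # default impulse
]

def react(situation):
    """Return the first action whose condition fires -- pure wiring, no 'thinking'."""
    for condition, action in RULES:
        if condition(situation):
            return action

print(react({"sees_dog": False, "sees_mouse": True, "hungry": True}))  # chase
print(react({"sees_dog": True,  "sees_mouse": True, "hungry": True}))  # run
```

Notice there is no place in this table where a memory of a previous mouse could change the outcome – which is exactly the gap the human version fills.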

Certain cat brain apps receive the messaged image of the mouse. Responsive cat brain apps put muscles into motion. The whole event passes through the cat and disappears. In Minsky’s view, a great difference for humans is much more of the event passes through the brain into memory – and does not disappear. It’s available to the ‘mind’ as if it happens all over again. And, we can imagine, our response apps can react, not just to the physical appearance of a mouse, but to a memory which produces exactly the same responses the original appearance did.

Minsky thinks this matters a lot. Humans have apps that receive perceptions and compare them to remembered responses. The whole process of deciding to chase or not chase becomes much simpler. We review other memories of seeing a mouse – it’s too far away, it sees us already. We can think it through before getting too excited.

Minsky thinks being us is basically about responding to things – lots of things – like seeing a mouse. We’re alive because we evolved to be here, and what we do is confront events. We have special little apps for just about every kind of event you can think of (well, exactly every kind of event you can think of, actually). We use them to organize memories of responses and compare all our incoming messages to these to help select the best ways to proceed.

Lower level apps process signals from, say, the eyeballs. Higher level apps recognize the mouse. We have seen a number of things that look like a mouse, and recognize this mouse as real by deferring to a higher level ‘recognition’ app. The recognition app is constantly augmented by history, and responds to messages according to what has been seen before. If the recognition app receives messages describing something unidentifiable, it sends its own messages to still higher level apps that review and compare memories of other things, selecting some action to best assess the significance of the something new.

In doing these comparisons, our apps accomplish a great deal without ‘thinking’ at all. Anything that resides in memory and can be managed by familiar patterning requires no further attention. Our brains are essentially lots of different recognition apps, waiting for and responding to messages. But lots of events don’t exactly match anything we remember. Responding to the streams of unfamiliar messages requires a great many comparisons and ‘best’ selections. The collective sensation of all this activity becomes, in humans, ‘thinking.’

The relatively few apps at the very highest level manage very complex patterns indeed. They draw on representations of entire people held in memory, in myriad contexts, and they respond to events by selecting behaviors appropriate to the memories associated with the experience at hand. They review available knowledge and they ‘decide.’
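The hierarchy described above – familiar patterns handled from memory without ‘thinking,’ unfamiliar ones escalated upward and then remembered – can be sketched as a little message-passing program. The class names and the two levels here are my own illustrative assumptions, not the book’s architecture:

```python
# A minimal sketch of Minsky-style 'apps' passing messages up a hierarchy.
# The classes and levels are illustrative assumptions, not the book's design.

class DeliberationApp:
    """Highest-level app: reviews and compares, then 'decides'."""
    def assess(self, features):
        # Stand-in for the costly review of other memories.
        return "investigate:" + "+".join(sorted(features))

class RecognitionApp:
    """Mid-level app: matches incoming features against what it has seen before."""
    def __init__(self, higher):
        self.memory = {}      # feature pattern -> label, augmented by history
        self.higher = higher  # still-higher app for the unidentifiable

    def receive(self, features):
        if features in self.memory:
            return self.memory[features]      # familiar: no 'thinking' needed
        label = self.higher.assess(features)  # unfamiliar: escalate upward
        self.memory[features] = label         # history augments recognition
        return label

mind = RecognitionApp(higher=DeliberationApp())
first = mind.receive(frozenset({"small", "gray", "moving"}))   # escalated
second = mind.receive(frozenset({"small", "gray", "moving"}))  # from memory
print(first == second)  # True -- the second sighting requires no deliberation
```

The point of the sketch is the asymmetry: the first sighting triggers the expensive higher-level machinery, while every later one is absorbed by familiar patterning – which is roughly where, in Minsky’s account, the sensation of ‘thinking’ does and does not arise.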

Now, ‘deciding’ is about acting based on what we know. Since Minsky is a computer programmer, he believes we’re rational, and if a cat makes the decision to jump on a mouse, it’s because that is the most likely way to successfully eat it. Our apps are rational apps – they do things because there are reasons. The problem is, sometimes – even often – we can’t work through all our memories and process all our comparisons quickly enough to know what our reasons were. We may not even remember the right thing to do. We have to act anyway, under exigency. That is our existence. If we survive, experience gets added to our memory.

As events arise and our apps continue to select amongst options, we develop a growing body of knowledge about experiences whose causes seem unknown. We don’t have time or information enough to do something we understand, but we select action anyway. As these memories accumulate and become organized, we search for a label to apply to the recurring, ‘unknown’ cause – the absence of rational impetus – from which so many actions nonetheless seem to flow. Our apps attempt to treat this unknown as a cause in itself. We call it ‘I.’ We call it ‘me.’

Minsky’s a scientist. He believes everything does, really, have a determinable cause. That is, if you were going to roll dice, and you really knew, exactly, all the initial conditions about the dice, and all the forces in all the directions that would be applied as the dice were rolled – then you really would know how the dice would land. In the same way, if you really had sufficient time and information, you could always make the ‘right’ decision. But you don’t. You get tired of thinking, or frustrated, or desperate, and you act. So the act gets stored with all those whose causes are unknown.

Responding to these difficult events is costly. Our minds are complicated. They confront a complicated world. Some problems are, indeed, intractable to straightforward, logical thinking. Minsky thinks confronting such events triggers so many simultaneously chattering apps that these ‘cascades’ themselves have been organized into ways of thinking – emotions. He views emotions as highly evolved choice mechanisms for dealing with the most difficult situations. Almost necessarily, the sources of such actions are assigned to the unknown. It’s just too difficult to explore and manage all the facts behind such decisions.

So in Minsky’s view, ‘free will’ is simply not knowing how you made up your mind. He is, of course, marketing a few decades worth of personal research and theorizing, and he wouldn’t mind sensationalizing it a little. He likes to repeat that the traditional scientific approach of searching for simple, unifying theories just can’t work with human consciousness, because it’s made up of too many moving parts. But then, he announces his own simple theory: that it’s a hierarchy of moving parts.

The wonderful thing about scientific ‘consciousness’ theories, I find, is they construct something comprehensible to explain being me. Religion is incomprehensible (I am a Christian). Mysticism is, of course, mystical. There are suggestions a lifetime given to certain sorts of contemplation will reveal the truth. Naturally, it’s impossible not to be suspicious of those. So science is wonderfully concrete: take these building blocks and arrange them in these shapes, and you will always produce a mind.

Still. Though. Yet. Lots of intense little processes flipping lots and lots of little switches. We can display representations of these with flashing dots and whirling circles, and we can say, ‘that’s what thinking looks like.’ But there’s that leap: awareness doesn’t reach backwards into an impression of its own mechanics. It hasn’t proven to itself (that is, to me) it is what it claims to be.

 

There are such things as misperceptions.

June 17th, 2014

I’m at the ‘nursing’ facility every other day now, visiting my mother. It’s boring and she’s unhappy. She’s been in for a while and she wants to get out of her room. So we walk the rectangular corridor together. Well, she rides. I walk. I’m getting to know the people. The more active elderly move slowly about in wheelchairs, though some seem to come into the hall just to sleep.

It’s a very interesting society, like real life amplified. The people around you might, in fact, be around for only a few days, and certainly won’t be for long. They often die in the bed next to you. There are eighty or ninety residents, all old and suffering all sorts of things. There are many ‘staff,’ who are young and suffer only the frustrations of managing a bunch of old people. It’s not a highly paid job, but the staff are always friendly and positive to third parties. I am a third party.

Circumstances are peculiar. You and I expect, if we’re frustrated, aroused, or frightened, to be able to do something about it. The old folks can’t. They might have trouble simply expressing those emotions in believable ways. The staff are provided with charts and schedules, and literally compelled to administer their charges through minor gymnastics, mealtimes, and bathroom visits at any and all events. Inevitably, whether the residents are expressive or not – so what?

Whole vistas of human sociability are compressed between a few walls and regular deadlines. The luxury of working through issues is buried history. If it’s time for your routine administration, it’s time, and you’re in for it, squawking and flailing though you may be. Of course, neither residents nor staff enjoy unhappiness, so a set of rules has been devised. The residents are treated and spoken to like children. The staff are cheerfully helpful as can be under the circumstances. The residents hurl occasional ignored insults and nod off in the corners.

The people who live here are mostly quite old. A few, past one hundred. All the weeks I’ve been coming now, a skinny woman with straight, pure white hair has been in the same place, bent forward asleep in her wheelchair off to one side just beyond the front lobby’s double doors. My mother wants to get out of her room, so we travel the big rectangle together: turn left out of her doorway, left again up the corridor to where ‘Station A’ faces the lobby entrance. The skinny woman sleeps off to one side in her wheelchair.

Left again. Left again. Left again. We’re back where we started. We vary the trip by reversing direction and turning right, right. At the third right the skinny woman is still asleep. We do this a lot. My mother is friendly, and she’s getting to know some of the adventurous types who don’t spend the days in their rooms. She has some fun on these trips.

One day, making our third or fourth circuit, we found ‘Station A’ unstaffed. That is, there was no staff at Station A. I noticed this, since I had never seen it before (there’s quite a lot of staff, and always staff at Station A). Then I saw movement behind the wide service counter. There was a trash basket at the end of some filing cabinets and the skinny woman who always slept in her wheelchair had rolled herself to the basket and was rapidly digging through it. Even as I watched, she finished her sifting and scooted backwards into the hall.

Keep killing until the fighting stops.

June 13th, 2014

War isn’t light-hearted, but you can talk about it any way you want. I remember when the US poured a trillion dollars into Iraq to prevent what’s happening today. Today, no-one thinks the money was really for Iraq.

In another part of the world this year, the UN held its first conference on whether robots should be prevented from independently deciding whether to kill people. Really. This is now, folks.

It seems Japan is the leading producer of autonomous thinking machines with limbs. They send them in to their melted nuclear power plants to fix things where mere humans would disintegrate. They train their machines to think and dispatch them with objectives.

It’s easy to imagine how the US could apply this ‘technology’ to the Middle East. Is everybody ready?

Cheery. Oh!

June 6th, 2014

Joy isn’t fiction. It sure doesn’t feel like it’s in a container. I’m never faking it.

Lately it’s been commented about me: ‘he believes everything he reads.’ Imagine! But I do make a deliberate effort to suspend my prejudices. I participate as fully as possible in any mindset the book in front of me puts forward. When it’s complicated – as consciousness, intelligence, and cognition sort of are – when it’s complicated I’m guilty of pontificating from other authors’ points of view. I guess I too fully absorb. Maybe I sound too genuine.

The thing is, we are all identities unmoored. We don’t understand ourselves. We don’t know how we came to be. Most of the time, I’ll venture, we don’t care. I’ve taken an interest because I think there’s a plausible opening here for the masses to be enthralled by a new Explanation of Things. It’s possible we’ll be offered a general redefinition of the meaning of life – and I’m wondering what it might be.

Meantime, if you see me – I bet I’ll still make you laugh.

Being. Important?

May 26th, 2014

I’m enjoying a pair of fried eggs over melted Swiss cheese, on top of a slice of my home-made bread. I’m thinking about why the artificial intelligence crowd seems different to me. Different, that is, from all the other ‘crowds’ of idealists who search for ‘meaning’ in our lives.

I often read there’s a distinction between the ‘instinctive’ behavior of animals and the thoughtful approach we ourselves take. But I think the many efforts to identify meaning from within ourselves, with all the high language and ironic half-smiles, have similar results, empirically assessed, to not thinking at all. From the outside, given our consistent inabilities to change ourselves, we might as well be armadillos.

Some of us are fascinated with ‘consciousness’ – with ‘self-awareness’ – as if it’s special. Of course, since we’re the ones making it up, we can never tie down exactly what it is. The artificial intelligence people couldn’t give a hoot. They’ve defined ‘intelligence’ as the behavior of any rational agent. That is, if you place a phone call to a machine, and you can’t tell it’s a machine – it’s intelligent.

‘Consciousness’ is just another aspect of intelligence, which exists in its outward appearance – it gets measured. The AI people took ‘identity,’ stopped obsessing on it, and started building varieties of their own. In so doing, they’ve begun creating things that look a lot like us. The difference is: they’ve got a handle on what they’ve wrought.

Why they call it…

May 16th, 2014

I want to be gentle about this. I might lose you if I get excited. People are building astonishing technologies. A cellphone can be modified to work as a ‘sensor’ that detects and records body motions and vocal dynamics (deliberately, speech content is ignored – it’s irrelevant). These can be assembled into predictable ‘metrics,’ and used to match you with compatible individuals who share your gesticulations and intonation patterns. This works. Corporations assign teams this way. Cooperation improves markedly.

Today’s cellphone, as commonly used, transmits sufficient data to model individual behavioral patterns and report inconsistencies. Hospitals are exploiting this now, detecting when psychological ailments manifest symptoms – and suggesting treatments. An email advising ‘take a pill’ is cheaper than an outpatient drop-in.

We’re all being taught how indispensable our new walkie-talkies are; how you pull your device out of your bag and it knows from the time of day you are hungry and, because it knows you, where you probably want to eat. It recommends the place, and at a word, reserves a table.

You can’t begin to imagine.

No new ideas or inventions are required. The equipment is being installed. Like everyone else alive, you are surrounded by a global omniscient presence, paying personal and loving attention to you.

…Gawd

Due complement

May 5th, 2014

If we have an undoing, it will have to do with quantum physics. Everything does. These are the tiny tiny machinations underlying boxes and springs and tea leaves and eagles’ wings. If you control them, you can turn mountains into chocolate pudding. The high priests control them.

Like all mysteries shrouding real power, quantum physics now has a magical story and a popular legend. My son announced to me the other day, ‘things don’t exist until we perceive them.’

A glorious fallacy, indeed, urged on us in the marketplace for talkative telephones and cars that drive themselves. You’ve probably heard it yourself: somewhere in the convoluted science of the very small, you’ll encounter the bizarre difficulty of not being able to measure the speed of something while at once knowing where it is. This is because we measure things with light, and when you’re very small, the light ‘thing’ arriving to ‘see’ you actually pushes you out of the way. Tricky stuff. Confusing.

There’s a wonderful story about how people first described this stuff, when their eyes opened wide enough to start seeing it, about a century ago. A handful of brilliant physicists and mathematicians literally reformulated the foundations of the universe – and argued like crazy for twenty-five years while they did so.

One of the brightest of these was a man named Niels Bohr, who came up with his own way of describing how the seemingly impossible can really be true. He changed the definition of the word, ‘phenomenon.’ The problem is: things quite small can be described as waves, and all the mathematics work. They can be described as particles, and all the mathematics work. But those mathematics definitely don’t work with each other.

So Niels said, a ‘phenomenon’ isn’t just a thing you see – it includes the tools you use to see it with. Everybody knows that today, but somebody had to make it up. So both ways of looking are accurate – entirely correct. But incomplete. Niels coined the term ‘complementarity’ for this seeming paradox resolved. To understand very small things, you need to understand two mutually exclusive ways of thinking. Mutually exclusive, but not opposed. In fact, complementary.

Niels was a wise man, and people thought him wise. He acquired honors and awards and distinguishments by the cartload. He was deemed something of an oracle. He observed ‘complementarity’ might usefully apply to other realms of experience and thought. Someone asked, ‘what’s the complement of truth?’

Niels replied, ‘clarity.’

 

Sure, I still get the itch.

April 30th, 2014

For fifty years I’ve been astonished by the differences between what is and what it seems. Over and over again. I suppose people aren’t exactly wrong, really – we’re just preoccupied with superficialities and diversions.

So it is that, yes, we’ve got some high-profile conservative bigots mouthing the established Republican outlook with atypical clarity – and sure, the Democrats are serious when they argue over money.

All this is sort of refreshing, I guess. But the fact is, we’re still just talking about headlines, as if we’re in a democracy, when the only meaningful questions concern the unlikelihood of ever getting democracy back.

Proof

April 20th, 2014

If the third temple is built, and nothing changes, the devout will immediately produce a watertight explanation – and continue praying.

I say this because I’ve been thinking about ideas from an on-line psychology course I’ve been auditing. I didn’t study psychology in college, since it seemed obvious to me we exist only as composites of the groups which surround us. I’ve been enjoying the sensation of receiving a variety of pop-culture truisms as something more ‘meaningful’ – as ‘scientific findings.’

We’re all quite aware we’re not going to change our opinions. Not for nothing, really – and we don’t. The ‘why’ from cognitive science is that it’s difficult to incorporate contradictory information into a ‘successful’ world view. This is a brain function. It’s just really hard for us to do it. On the rare occasions we do, it seems, it’s because someone congenially offers the info as part of a more attractive viewpoint itself. We might accommodate our prior errors if we’re given a more productive forward story line.

So, the temple failure won’t change opinions because it isn’t part of a better story. So, too, people in the U.S. won’t modify their views of government – because they know we have a Constitution, they know we hold elections, and they know the law is the will of the people. If they observe the rich getting away with murder it doesn’t matter – that evidence is too difficult to incorporate into the ‘fact’ we’re all equal before the law.

My son participates in a belligerently atheistic pop culture which seems to be growing in the country. Being human, adherents look for the ‘true’ story of their experience, and often seem to counterpose ‘evolution’ and ‘science’ against the larger frameworks of ‘religion’ and ‘democracy.’

There’s now science (of a sort) which proves the U.S. is not a democracy at all. I haven’t paid much attention to this, but it may be transformative. In the new social psychology, ‘effective’ truths are derived via rational method and conclusion. So the ‘scientific’ declaration we are not a democracy (call it ‘oligarchy’ if you must, but a better label must be waiting) – appears final and meaningful. The declaration is part of a new description which is powerful and predictive. It will be personally useful to recognize you’ve been dis-empowered. It’s part of a more accurate (better) world view.

Over the last decades, the story unfolded in bits and pieces, developing its own cohesion. It’s no longer my primary interest: may the winds of history continue blowing far overhead; may I be left in peace until I am gone. But historically, societal transitions have rarely been comfortable.

Since I do celebrate, today, the return of an original social revolutionary, I’ll just quote him.

‘There shall not be left one stone upon another, that shall not be thrown down.’