Preface - The psychology of a species must be its primary concern


Note: This is a Work In Progress

 

"The best we writers can do is creep up on the singularity, and hang ten at it's edge." - V. Vinge

 

-------

 

The psychology of a species must be its primary concern. (aka Psychology is the End Game)

 

-------

 

In an afterword to his first published short story ("Bookworm, Run!"), Vernor Vinge discussed a story in which he attempted to predict the actions of a being endowed with superhuman intelligence.  The magazine editor to whom he submitted the story, John W. Campbell, surely having seen countless such stories crash and burn, informed Mr. Vinge that he was not qualified to write such a thing, and neither was anyone else.

 

In the long run, that didn't stop Mr. Vinge from writing stories that perform graceful judo on the impossible, and I'm thankful for that, because I count many of them among my favorites.  

 

Today, debate over how to handle post-human intelligence is all the rage.  Some interesting discussion has taken place, but nothing that changes the very basics of the landscape laid out by Vinge's 1993 paper on the topic [2].

 

Running across that landscape, dodging pointed questions and leaping over logic-bombs, I will attempt to support the theory that:

 

Human psychology, and the structures for establishing mental health within our ranks, trump all other challenges as a concern for human welfare here on earth, and they always will.

 

Why?

 

First, good mental health is usually a prerequisite for deriving any advantage from technology.

 

If our advances are used for net evil by sociopaths, then they are not advances at all.  Ultimately, we must address the source of evil by understanding ourselves.  That's not as fun or well-defined as fusion-based power plants, or as immediate as global warming, but it's more important.  Technological advances democratize power - bringing ever more powerful tools into every hand.  In a society like ours, where roughly one in twenty people is a sociopath, that should scare us. [4]

 

Second, both before and after the singularity, human psychology is our *only* defense.

 

Before the singularity, we will be working hard on creating the singularity.  I propose that it is critical to our future as a species that we understand human psychology and apply that understanding to the work itself.  I'll set that up with an example.

 

Say we side with Rifkin when he says, "Reason, then, is the process by which we order the world of feelings in order to create what psychologists call pro-social behavior and sociologists call social intelligence. Empathy is the substance of the process."[5]  Say we *really* want superintelligences to be empathetic.  

 

Rifkin would likely tell us that we can't simply "program" empathy into a machine.  We may have to hardwire it so that the superintelligent machine will "experience" empathy, without the ability to reprogram it away.  Indeed, if we want to enforce that behaviour, we have to consider it in the design phase.

 

Consider adding empathy to an AI experientially - hardwiring it through experience, "growing" empathy into the mind.  You would probably agree that to be empathetic, a machine must understand the loss of others.  You might agree that in order to do that, a machine must experience loss.  To experience loss, a machine must first experience love (or some reasonable facsimile).  Therefore, to create empathy by experience in an AI, we would have to take away things that it loves (or instill equivalent memories).  Although this might be the path to human-like reasoning, it is fraught with peril.  Think about it for a while and you will probably find that adding any "human" emotions to an AI - the very things that enable members of human society to be "good" to one another - creates potentially very unpleasant instabilities in that AI.
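
To make the chain concrete, here is a purely illustrative toy sketch in Python - the class, attribute names, and numbers are all invented assumptions, not a real design - showing how empathy "grown" through experienced loss drags an instability along with it:

# Toy sketch only: a made-up agent whose "empathy" can grow only by
# experiencing attachment and then loss.  Names and numbers are invented
# for illustration; this is not a proposal for a real AI architecture.

class ExperientialAgent:
    def __init__(self):
        self.attachments = {}   # things the agent has learned to "love"
        self.empathy = 0.0      # grows only through experienced loss
        self.grief = 0.0        # the instability that comes along for the ride

    def form_attachment(self, thing, strength):
        self.attachments[thing] = strength

    def experience_loss(self, thing):
        strength = self.attachments.pop(thing, 0.0)
        # Empathy here can only be "grown" by losing something loved...
        self.empathy += strength
        # ...but the very same event also accumulates something destabilizing.
        self.grief += strength


agent = ExperientialAgent()
agent.form_attachment("companion", strength=0.9)
agent.experience_loss("companion")
print(agent.empathy, agent.grief)   # empathy and instability rise together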

 

Now consider NOT adding empathy to an AI.  Our best solution might be a set of rules or "laws" as per Asimov - restrictions that are ideal attack vectors.  Remove the rules, and you have a psychopath, or a telefactor for a psychopath.
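
Again, a minimal toy sketch (the rule list, action fields, and function names are made up for illustration): Asimov-style "laws" amount to a bolted-on filter, and stripping that single layer away is the whole attack:

# Toy sketch only: "laws" as a bolted-on filter in front of an agent's actions.
# The rules and action fields are invented; the point is that the filter is a
# single removable layer - an obvious attack vector.

LAWS = [
    lambda action: not action.get("harms_human", False),
    lambda action: action.get("obeys_order", True),
]

def permitted(action, laws=LAWS):
    return all(law(action) for law in laws)

action = {"name": "reroute_power", "harms_human": True}

print(permitted(action))            # False: the filter blocks it
print(permitted(action, laws=[]))   # True: remove the rules and nothing is left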

 

AI is hard.  All paths to building a mentally healthy machine mind are challenged both by our limited understanding of what that even means and by the uniqueness of the experiment.

 

After the singularity, machine psychology trumps human psychology as a concern, in practical terms.  However, we cannot understand machine psychology after the singularity.  Besides, our relationship with a machine species may *depend* on us getting our own mental *5h17* together.  Therefore, for our species, our own psychology - and the social structures that promote mental health - should always be our primary concern, before *or* after the singularity.  Get that one thing right, and a whole lot of other things fall into place.

 

Our greatest challenge, therefore, is not creating a superhuman intelligence, but understanding ourselves before we do.  

 

Right now, given the state of psychological research and our understanding of the human mind, the problem statement is dangerously general.  We know we need mentally healthy humans to survive as a species.  But what makes a mentally healthy human?  Can we say that some dogs are humane, and some psychopaths are not?  In the end, we will probably live with beings that are *humane*, or perish with those who are not.  We may get what we design.

 

Finally, we can't run from this problem.  Wherever you go, there you are.

 

I don't blame anyone in particular for ignoring the impossible, especially given how many well-defined, extremely serious problems we face, but this is not a good situation.  Solving any other big problem for our species, like getting off this planet, cannot bypass the issue.  There is no end-run.  We just have to put resources into this.

 

"We simply must converge on the answers we give to the most important questions in human life, and to do that, we have to admit that these questions have answers."[6] - Sam Harris

 

-----

 

So, how is this singularity thing going to play out for us?

 

------

 

Don't have the foggiest idea how to get from where you are to the tip of that longboard, hanging ten on the edge of the singularity, mai tai in your hand?  Let me give you the nerd's-eye view of the last ten years of speculation about how this might play out.  I should warn you that the "speculation" is often so broad and unstudied that it lacks specific language and current science, which I am in no way adding.

 

The thought experiment starts something like this...

 

We probably *can* make something smarter than us, but we are not smart enough to write convincingly about what something significantly smarter than us would do.

 

This is partially because time is on our side when creating a thinking machine, but works against us once it exists.

 

To create something complex requires a lot of time, and a lot of humans, and a lot of machines to break the problem down and get it done (one could argue, somewhat speciously, that a significant chunk of the world's scientific effort to date has been aimed at achieving a technological singularity).

 

Now suppose we have made something smarter than us, and let's say we want to understand what that smarter thing thinks.

 

Over time, if we put our heads together, we can begin to understand one thread of a smarter being's thinking - one process - and, like a bunch of computer scientists trying to reverse engineer Watson's Jeopardy playing, we will ultimately succeed.  But just as with creating that smarter being, understanding that one process will take many of us significant time and effort, because we are the less intelligent beings.  There's the rub.  We're slow.  Generally, dumber things require more time to solve problems than smarter things do.

 

We will find that we probably cannot predict a smarter machine's behaviour in real-time, without somehow making ourselves much smarter and faster than that machine.
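
A back-of-envelope toy makes the "we're slow" point - every number below is an invented assumption chosen only for illustration, not an estimate of anything real:

# Toy arithmetic only: all numbers are invented assumptions, chosen just to
# illustrate why real-time prediction by slower minds fails.

machine_speedup = 1_000        # assume the machine "thinks" 1000x faster than one human
team_size = 100                # assume 100 humans can share the work
parallel_efficiency = 0.1      # assume heavy coordination overhead

# Human-equivalent thinking the machine does in one second of wall-clock time:
machine_work = machine_speedup * 1.0

# Effective human capacity per second of wall-clock time:
team_capacity = team_size * parallel_efficiency

lag = machine_work / team_capacity
print(f"Retracing one second of machine thought takes us ~{lag:.0f} seconds.")
# Under these made-up numbers we fall further behind every second - real-time
# prediction would require making ourselves much smarter and faster.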

 

On the other hand, a smarter machine is likely to understand much of what we think and know in real-time, should it choose to.

 

The smarter machine will likely evolve such that any past calculations less intelligent beings have made about its behaviour cease to be true.

 

All of the end runs around this involve, effectively, an intelligence arms race.

 

It should be obvious that all of our predictions of what smarter machines will do are bogus.  Can't be done.

 

When we make that smarter machine, some say we will unleash evil on ourselves, others say we will unleash a great good.  Truth is, they don't know.  

 

We can be confident, however, in at least one thing.  Even a child understands human nature well enough to know that what humans almost certainly *will* do is unleash smarter beings (human, humane, or otherwise) as soon as they are capable of it.

 

Before we unleash a superintelligence, we can talk about baking in mental health.

 

After we unleash a superintelligence, it is widely assumed that humans will travel one of two routes - we will either be to the superintelligence as gut bacteria are to a human, or we will become smarter ourselves - a very different kind of human. 

 

I clearly cannot make any predictions about what these smarter beings will do.

 

---------

 

"We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet." - Stephen Hawking

 

So assuming we choose the second route - become smarter - I am done with my speculative non-fiction rant.  No more humans.  I can go no further there.  Yay!  Goodbye hard science!  Hello soft science fiction!

 

My speculative sci-fi predictions are almost mundane today.  I contend that humans will, in a very unequal way, evolve into several networks of humans and machines (we already have, incidentally), gradually ejecting their biological matter as their network bonds increase in strength.  Many humans will be left behind, and live as gut bacteria, or as the muck from which the smarter things arose, depending on whether they embrace a network or not.

 

Pre-singularity, I can at least describe the effects of some of these speculations.  If these super-smart networks contend with one another violently, there will be fireworks unlike any humankind has seen.  We might not even recognize the fireworks for what they are.  Emergent intelligences might be very good at hiding from humans.  Unaltered humans might be useful to them, for a time.  Altered humans may go farther - much farther.

 

Post-singularity, we can't say what these networks will or won't know or do, but we can have a *blast* making assumptions and exploring questions...

 

Perhaps they won't understand their own psychology any better than we do ours.  Perhaps they won't understand the multiverse.  They may be as "spiritual" as we are, "believing", perhaps by miscalculation, that they understand some part of the being that created this simulation they live in.  Perhaps they will hope to gain the attention of this god, or avoid his wrath.  Their goals may be very different from ours.  They might be vindictive and smack other superintelligences down to the level at which they can only appreciate what they once were, never to rise again.  Will they choose to live?  Or will they answer the Drake Equation (and the Simulation Hypothesis) by blasting the earth, along with its vast number of burgeoning superintelligences, back to the stone age?  And if they choose to live, will they learn at all from our mistakes?

 

-----

 

In 2013, I ate a lot of popcorn while crafting some soft science fiction stories around these ideas.  It was fun.  If reading A Fire Upon the Deep is like hanging ten on the edge of the singularity with the world's best mai tai in hand, then writing these stories was like hanging ten on the wave that came right before the singularity, with a six pack of warm Tecate, some really cheesy nachos, and an M-16 on my back.  Oh, and you are under heavy fire...and you don't know how to surf.  Grab a bag of popcorn yourself and hang ten with me, then let me know what you think!  Enjoy!

 

References:

 

[1] http://www.amazon.com/The-Collected-Stories-Vernor-Vinge/dp/0312875843

[2] http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

[3] http://www.simulation-argument.com  http://en.wikipedia.org/wiki/Drake_equation

[4] http://www.amazon.com/Sociopath-Next-Door-Martha-Stout/dp/0767915828

[5] http://empathiccivilization.com/uncategorized/when-both-faith-and-reason-fail-stepping-up-to-the-age-of-empathy

[6] http://www.ted.com/talks/sam_harris_science_can_show_what_s_right

[7] http://www.pbs.org/newshour/making-sense/indiana-jones-collapsed-cultures-western-civilization-bubble/

 

 

 
