Preface - The psychology of a species must be its primary concern


 

-------

 

The psychology of a species must be its primary concern. (aka Psychology is the End Game) 

 

-------

 

Tentatively submitting this as a preface/backgrounder to stories originally written in 2013 in the robots vs. kung fu genre.

 

Note: Needs work.

 

Further note: A LOT of work.

 

-------

 

"The best we writers can do is creep up on the singularity, and hang ten at it's edge." - V. Vinge

 

-------

 

In a postface to his first published short story ("Bookworm, Run!"), Vernor Vinge discussed a story in which he attempted to predict the actions of a being endowed with superhuman intelligence.  The magazine editor to whom he submitted the story, John W. Campbell, surely having seen countless such stories crash and burn, informed Mr. Vinge that he was not qualified to write such a thing, and neither was anyone else. 

 

In the long run, that didn't stop Mr. Vinge from writing stories that perform graceful judo on the impossible, and I'm thankful for that, because I count many of them among my favorites.  

 

Today, debate over how to handle post-human intelligence is all the rage.  Some interesting discussion has taken place, but nothing that changes the very basics of the landscape laid out in Vinge's 1993 paper on the topic [2].  

 

Running across that landscape, dodging pointed questions and leaping over logic-bombs, I will attempt to support the theory that:

 

Human psychology, and the structures we build for establishing mental health within our ranks, trump all other challenges as a concern for human welfare here on earth, and they always will.

 

That's bold.  If you're still biting, you're asking, "Why?"

 

Reason one is that good mental health is usually required to derive advantages from tech.  

 

If our advances are used for net-evil by sociopaths, then they are not advances at all.  

 

Reason one is almost enough -  mental health is it.  Ultimately, we must address any source of evil by understanding ourselves.  

 

Understanding our minds, and how to keep them from being evil (acting alone or together), is not as fun as video games, or as well-defined as fusion-based power plants, or as immediate as global warming, but it's more important than any of them in the long run.  

 

And maybe I should take back the part where I said understanding ourselves is less immediate than global warming.  Technological advances democratize power, bringing ever more powerful tools into every hand.  In a society like ours, where one in twenty members is a sociopath, and where the internet has democratized knowledge, the rate of technological advance should scare us. [4]

 

Even if reason one is enough on its own, I do have one more reason:

 

Second, both before and after the singularity, human psychology is our *only* defense.

 

Let's run that thought experiment:

 

Before the singularity:

 

...We will be working hard on creating the singularity.  This is almost an axiom - we *are* doing it, and we can't resist it.  Let's start with that.

 

For the moment, to set up my lame argument, I need to assume that empathy is the substance of the reasoning process.   That is, I'm going to side with Rifkin when he says that, "Reason, then, is the process by which we order the world of feelings in order to create what psychologists call pro-social behavior and sociologists call social intelligence. Empathy is the substance of the process."[5]   If you give Rifkin the benefit of the doubt, then our assumption is reasonable.  That said, I'm working on getting rid of it for a final version of this preface.

 

Now, let's also assume (this is easy) that we *really* want future superintelligences to be empathetic to us.

 

One solution might be a set of rules or "laws" as per Asimov.  However, these restrictions are ideal attack vectors.  Remove the rules, and you have a psychopath or a telefactor for a psychopath.   How can we make a machine robust in the face of these types of attacks? 

 

Alternatives avoid "programming" empathy into a machine via rules.  They instead require a learning machine to be "taught" via experience.  This makes for a flexible machine, kinda like a human.  You can fool a reasoning intelligence, but "reasoning" makes it much harder to keep fooling it.  The superintelligent machine will "experience" empathy under our watch, should we want to build a machine in this way.  We would have to consider this in the design phase.  
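
To make that contrast concrete, here's a minimal toy sketch (Python, with entirely hypothetical names - my own illustration, not anyone's real architecture).  An Asimov-style rule layer is a separate, removable component: strip it off and the underlying policy is untouched.  A policy whose preferences were shaped by experience has no single layer to remove.

```python
# Toy illustration: a bolt-on rule filter vs. preferences baked in by "experience".
# All names and numbers here are hypothetical; this sketches the argument, not a real system.

class BasePolicy:
    """Picks whichever action scores highest, with no regard for harm."""
    def choose(self, actions):
        return max(actions, key=lambda a: a["payoff"])

class RuleFilteredPolicy:
    """Asimov-style wrapper: a separate layer that vetoes harmful actions."""
    def __init__(self, base):
        self.base = base
    def choose(self, actions):
        allowed = [a for a in actions if not a["harms_human"]]
        return self.base.choose(allowed or actions)

class TrainedPolicy:
    """Preferences shaped by experience: harm lowers the learned value itself."""
    def __init__(self, harm_penalty=100.0):
        self.harm_penalty = harm_penalty   # stands in for values learned over time
    def choose(self, actions):
        def value(a):
            return a["payoff"] - (self.harm_penalty if a["harms_human"] else 0.0)
        return max(actions, key=value)

actions = [
    {"name": "help", "payoff": 3.0, "harms_human": False},
    {"name": "exploit", "payoff": 5.0, "harms_human": True},
]

safe = RuleFilteredPolicy(BasePolicy())
print(safe.choose(actions)["name"])             # 'help' - but only while the wrapper is present
print(safe.base.choose(actions)["name"])        # 'exploit' - remove the rule layer, get the psychopath
print(TrainedPolicy().choose(actions)["name"])  # 'help' - the preference lives in the policy itself
```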

 

I think that's the way it's going to go: we'll be giving AIs the complete experience - pain, pleasure, everything we can emulate from humans or any animal we know about.  It's kinda our bent to do stuff like that in robotics.

 

Consider adding empathy to an AI experientially.  You will hardwire it via experience - "growing" empathy into the mind. You would agree that to be empathetic, a machine must understand the loss of others (that's easy, too).  If you do agree to all that, then you might agree that the best way for a machine to truly learn empathy would be to experience loss - the way every living animal does it.  

 

Now, let's talk about loss.  Loss is having something, then losing it.   The greater the love, the greater the loss.  Therefore, to experience true loss, a machine must experience love (or some reasonable facsimile).  To create empathy by experience in an AI, we will have to take away things that it loves (or instill equivalent memories).  Although this might be the path to what we want (human-like reasoning plus empathy), it is clearly fraught with peril.
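
As a toy numerical restatement of that (again Python, again with hypothetical names and numbers, and my own assumption about how such a signal might be wired): attachment accumulates with shared experience, and the loss signal on removal scales with that attachment - the greater the love, the greater the loss.

```python
# Toy sketch: attachment grows with experience; the loss signal scales with attachment.
# Entirely hypothetical - a numerical restatement of "the greater the love, the greater the loss."

class Agent:
    def __init__(self):
        self.attachment = {}  # thing -> accumulated attachment ("love")

    def spend_time_with(self, thing, hours, bond_rate=0.1):
        self.attachment[thing] = self.attachment.get(thing, 0.0) + bond_rate * hours

    def lose(self, thing):
        """Removing the thing produces a negative signal proportional to attachment."""
        return -self.attachment.pop(thing, 0.0)

a = Agent()
a.spend_time_with("companion", hours=1000)   # long shared history -> strong attachment
a.spend_time_with("houseplant", hours=10)    # brief acquaintance -> weak attachment
print(a.lose("companion"))   # -100.0: deep loss
print(a.lose("houseplant"))  # -1.0: barely registers
```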

 

If you think about it for a while, you will probably find that truly adding any "human" emotions to an AI - the very things that enable members of human society to be "good" to one another - could create some very unpleasant instabilities in that AI.  

 

Trying to make an AI anything like a human sounds ridiculously dangerous.  But don't you think we'll do it?   Whether you do or don't, it's happening - serious researchers have been aiming to achieve that, and will likely continue on their course.

 

The refrain is always the same: AI is hard.  

 

All paths to building a mentally healthy machine mind are challenged both by the limits of our knowledge of what that means, and by the uniqueness of the experiment.  

The answer, in the face of pre-singularity research trends, is to push forward as quickly as we can to understand how a truly healthy mind is made and maintained, in the context of its relationships with a society.  We need to know more, or we'll end up with psychopath AIs - or whatever the worse equivalent of that is in their language.

 

After the singularity:

 

Machine psychology trumps human psychology as a concern, in practical terms.  However, we cannot understand machine psychology after the singularity - it keeps getting smarter, and it becomes too smart for us to understand.  Besides, our relationship with a machine species may *depend* on us getting our own mental *5h17* together first!

 

Therefore, for our species, our own psychology, and social structures that promote mental health, should always be our primary concern, before *or* after the singularity.  Get that one thing right, and a whole lot of other things fall into place.

 

Our greatest challenge, therefore, is not creating a superhuman intelligence, but understanding how to make a *healthy* human mind, *before* we create a superhuman intelligence.  

 

Right now, given the state of psychological research and our understanding of the human mind, the problem statement is dangerously general.  We know we need mentally healthy humans to survive as a race.  But what makes a mentally healthy human?  Can we say that some dogs are human, and some psychopaths are not?  In the end, we will probably live with beings that are *humane*, or perish with those who are not.  We may get what we design.

 

Finally, we can't run from this problem.  Wherever you go, there you are.

 

I don't blame anyone in particular for ignoring the impossible, especially given so many well-defined problems that are extremely serious, but this is not a good situation.  Solving any other big problem for our species, like getting off this planet, cannot bypass the issue.  There is no end-run.  We just have to put resources into this or we're fucked.

 

"We simply must converge on the answers we give to the most important questions in human life, and to do that, we have to admit that these questions have answers."[6] - Sam Harris

 

-----

 

So, how is this singularity thing going to play out for us? (continuing the thought experiment)

 

------

 

{needs work - probably cut everything below this - maybe replace with a flowchart - or one or two sentences}

 

Don't have the foggiest idea how to get from where you are to the tip of that longboard, hanging ten on the edge of the singularity, mai tai in your hand?  Let me give you the nerd's-eye view of the last ten years of speculation about how this might play out.  I should warn you that the "speculation" is often so broad and unstudied that it lacks specific language and current science - and I am in no way adding either.

 

The thought experiment starts something like this...

 

One reason we won't understand smarter beings:

 

Time is on our side when creating a thinking machine, but works against us once it exists.

 

To create something complex requires a lot of time, a lot of humans, and a lot of machines to break the problem down and get it done (one could argue, somewhat speciously, that a significant chunk of the world's scientific effort to date has been aimed at achieving a technological singularity).  

 

Say we do create something smarter than us.  Now let's say we want to understand what that smarter thing thinks.  

 

Over time, if we put our heads together and begin to understand one thread of a smarter being's thinking - one process - then, like a bunch of computer scientists trying to reverse engineer Watson's Jeopardy playing, we will ultimately achieve that.  But just as in creating that smarter being, understanding that process will take significant time and effort from many of us, because we are less intelligent beings.  There's the rub.  We're slow.  Generally, dumber things require more time to solve problems than smarter things do.

 

We will find that we probably cannot predict a smarter machine's behaviour in real time, without somehow making ourselves much smarter and faster than that machine.  

 

On the other hand, a smarter machine is likely to understand much of what we think and what we know in real time, should it choose to.

 

The smarter machine will potentially (likely) evolve such that any past calculations less intelligent beings have made about its behaviour cease to be true.  


------

Can we instrument the world that smarter reasoning machines will evolve in such that they are more likely to play nice with us?

------

 

Maybe.  I might even experiment with that thought...but not here.

 

-------

So what CAN we do?  To Evolve or Not to Evolve:

-------

 

All of the end runs around this involve, effectively, an intelligence arms race.

 

It should now be (more) obvious that all of our predictions of what smarter machines will do are bogus.  Can't be done.  When we make that smarter machine, some say we will unleash evil on ourselves, others say we will unleash a great good.  Truth is, they don't know.  

 

Although we can't predict what machines will do, we know enough about ourselves to predict with some certainty what *we* will do.  Even a child understands human nature well enough to know that what humans almost certainly *will* do, is to unleash smarter beings (human, humane, or otherwise), as soon as they are capable of it.  

 

Before we unleash a superintelligence, we can talk about baking in mental health.   

 

Honestly, I don't think the short-term incentives exist that would drive humankind to understand and "bake" mental health into our inventions - at least not before we unleash something smarter than ourselves.

 

After we unleash a superintelligence, it is widely assumed that humans will travel one of two routes - we will either be to the superintelligence as gut bacteria are to a human, or we will become smarter ourselves - a very different kind of human. 

 

I clearly cannot make any predictions about what these smarter beings will do.  I can only assume we will take one of those two routes.

 

---------

Giving up the search for truth and looking for a good fantasy

---------

 

"We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet." - Stephen Hawking

 

So assuming we choose the second route - become smarter - I am done with my speculative non-fiction rant.  No more humans.  I can go no further there.  Yay!  Goodbye hard science!  Hello soft science fiction!

 

My speculative sci-fi predictions are almost mundane today.  I contend that humans will, in a very unequal way, evolve into several networks of humans and machines (we already have, incidentally), gradually ejecting their biological matter as their network bonds increase in strength.  Many humans will be left behind, and live as gut bacteria, or as the muck from which the smarter things arose, depending on whether they embrace a network or not.

 

Pre-singularity, I can at least describe the effects of some of these speculations.  If these super-smart networks contend with one another violently, there will be fireworks unlike any humankind has seen.   We might not even recognize the fireworks for what they are.  Emergent intelligences might be very good at hiding from humans.  Unaltered humans might be useful to them, for a time.  Altered humans may go farther - much farther. 

 

Post-singularity, we can't say what these networks will or won't know or do, but we can have a *blast* making assumptions and exploring questions...

 

Perhaps they won't understand their own psychology any better than we do ours.  Perhaps they won't understand the multiverse.  They may be as "spiritual" as we are, "believing", perhaps by miscalculation, that they understand some part of the being that created this simulation they live in.  Perhaps they will hope to gain the attention of this god, or avoid his wrath.  Their goals may be very different from ours.  They might be vindictive and smack other superintelligences down to the level at which they can only appreciate what they once were, never to rise again.  Will they choose to live?  Or will they answer the Drake Equation (and the Simulation Hypothesis) by blasting the earth, along with its vast number of burgeoning superintelligences, back to the stone age?  And if they choose to live, will they learn at all from our mistakes?

 

-----

 

In 2013, I ate a lot of popcorn while crafting some soft science fiction stories around these ideas.  It was fun.  If reading A Fire Upon the Deep is like hanging ten on the edge of the singularity with the world's best Mai Tai in hand, then writing these stories was like hanging ten on the wave that came right before the singularity, with a six pack of warm Tecate, some really cheesy nachos, and an M-16 on my back.  Oh, and you are under heavy fire...and you don't know how to surf.  Grab a bag of popcorn yourself and hang ten with me, then let me know what you think!  Enjoy!

 

Ref:

 

[1] http://www.amazon.com/The-Collected-Stories-Vernor-Vinge/dp/0312875843

[2] http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

[3] http://www.simulation-argument.com  http://en.wikipedia.org/wiki/Drake_equation

[4] http://www.amazon.com/Sociopath-Next-Door-Martha-Stout/dp/0767915828

[5] http://empathiccivilization.com/uncategorized/when-both-faith-and-reason-fail-stepping-up-to-the-age-of-empathy

[6] http://www.ted.com/talks/sam_harris_science_can_show_what_s_right

[7] http://www.pbs.org/newshour/making-sense/indiana-jones-collapsed-cultures-western-civilization-bubble/

 

 

 
