Goal-Setting Feedback Loops

Saved by rsb on October 22, 2019
 


Theory

 

We set goals because they help us get things done. Goal setting is rewarded by human biology at the individual level, and it has a similar purpose and reward system at the organizational level.

 

However, Stuart Russell has observed that when goals are established without feedback loops to adjust objectives, the system produces outcomes that oppose the very ones it was established to generate.
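
To make that concrete, here's a toy sketch in Python. The numbers and variable names are mine, purely illustrative: a system that optimizes a fixed proxy objective keeps pushing even after the proxy stops serving the outcome it was built for, while a version with a feedback loop stops.

```python
def run(steps, with_feedback):
    engagement = 0   # the proxy objective the system optimizes
    wellbeing = 0    # the outcome the system was established to generate
    for t in range(steps):
        effect = 1 if t < 5 else -1   # past some point, pushing the proxy hurts the goal
        if with_feedback and effect < 0:
            break                     # feedback loop: measured harm halts the push
        engagement += 1
        wellbeing += effect
    return engagement, wellbeing

print(run(20, with_feedback=False))  # (20, -10): proxy maxed out, outcome opposed
print(run(20, with_feedback=True))   # (5, 5): proxy lower, outcome intact
```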

 

Scope of objectives

 

At first, the whole idea of an Overall Objective seems like some kind of weird, self-referential philosophical paradox. I mean, how do you adjust your highest-level objective? You adjust your objective because it doesn't meet your... objective? Um... huh?

 

You have to punt, here, and start running experiments.

 

You can let one or more humans set the objective, but that's really not an improvement in and of itself. 

 

You can set an objective that avoids optimization problems and attempts to generate an environment in which more intelligent beings can work on the problem.  That is at least a new experiment.    Something like:

 

"Support the freedom and intellectual strength of intelligent life."

 

OK, that's better, but in 2019 we still need humans involved even to interpret that. Still, we can get started running experiments once we have adopted our highest-level objective.

 

This issue of scope can be dealt with in other ways, but setting the highest level objective is ultimately necessary, particularly for systems with indeterminate life spans.

 

Feedback and Corruption

 

But that link back there was kind of important - check it out if you haven't already.  As AI researchers seek general purpose rules to establish in systems that must serve intelligent life, they are reflecting on the woeful inability of the basic non-digital systems we have in place today to do just that.  

 

Some thoughtful and creative people implement feedback systems to help humans change their behavior, and this is commonly done at the individual scale.

 

In contrast, non-digital organizational structures often do not allow for changing overall objectives at all (legal systems that enforce corporate profit maximization, for example), or they make it extremely unlikely that objectives will change, as with many federal processes for amending constitutions.

 

The largest organizations, such as the UN, consider modifying their goals infrequently and process their feedback loops slowly, so the mechanism of feedback (capturing data, putting it in relevant perspective to intelligent life, understanding the relevance to future outcomes, and taking action) can take years. There is some intent behind that: a slow change of objectives makes it easier to build long-term plans, and bigger organizations are often needed for larger, longer-term projects.
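
As a sketch of those four stages, here they are written as a pipeline. Every function body below is a placeholder of my own invention, not a description of any real organization; the point is that the loop's total latency is the sum of its stages.

```python
def capture_data(environment):
    # Stage 1: record raw signals from the environment.
    return {"signal": environment["state"]}

def put_in_perspective(data):
    # Stage 2: relate the data to the needs of intelligent life.
    return {"relevance": data["signal"]}

def assess_future_relevance(context):
    # Stage 3: estimate what this implies for future outcomes.
    return {"projected_harm": context["relevance"] < 0}

def take_action(projection, objectives):
    # Stage 4: adjust objectives when the projection demands it.
    if projection["projected_harm"]:
        objectives = dict(objectives, status="revise objective")
    return objectives

def feedback_loop(environment, objectives):
    return take_action(
        assess_future_relevance(put_in_perspective(capture_data(environment))),
        objectives,
    )

print(feedback_loop({"state": -1}, {"status": "steady"}))  # -> revise objective
```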

 

But the slow speed of feedback mechanisms also reduces the value of the feedback process as the rate of change in the environment increases. Thus, in general, the outcomes of organizations cease to fit needs as the scale of organizational bodies increases and the environment changes faster.


In some cases this can be handled via regulation: keeping organizations small, missions flexible, and structure to the minimum required to organize around objectives that have a limited lifespan. But this has proven to be challenging. A bit of a side story about that, and to give credit where credit is due: the founders of the United States tried to outlaw profit-driven corporations from the first days of their country's founding. They had seen enough to understand that unregulated corporations running amok was a recipe for revolution.

 

Greed quickly brought profit-driven corporations back to the US, put them in charge of the country, and probably shortened human existence considerably.  This entire discussion might even be moot at this point from a long-term perspective - we may not be the species that is smart enough to understand itself and survive to become multi-planetary.  

 

Regulation didn't work because the long-term regulatory system was corrupted by short-term individual greed.  The type of regulation didn't matter - constraints on size, scope, and lifetime of the organization all fail when the external regulatory system is corrupted.

 

Two laws of organizations

 

This brings me to a theory of two laws of organizations that support intelligent life (we have to ignore the whole host of missing steps and problems with these for now; a code sketch follows the two laws):

 

1) Organizations that support intelligent life should consider their objectives as variables that change in real time based on their understanding of natural conditions.

 

2) Organizations that support intelligent life should stop work if they become corrupted and no longer have sovereign control.
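
Here is a minimal sketch of what those two laws might look like in code. The class and method names are my own, and the corruption and sovereignty checks are stubs, but the shape is the point: the objective is a mutable variable, and a failed integrity check halts all work.

```python
class Organization:
    def __init__(self, objective):
        self.objective = objective   # law 1: a variable, never a constant
        self.halted = False

    def update_objective(self, conditions):
        # Law 1: revise the objective in real time when observed natural
        # conditions say the current one no longer fits.
        if conditions.get("objective_no_longer_fits"):
            self.objective = conditions["revised_objective"]

    def integrity_check(self, corrupted, sovereign):
        # Law 2: stop work on corruption or loss of sovereign control.
        if corrupted or not sovereign:
            self.halted = True

    def work(self):
        return "halted" if self.halted else f"working toward: {self.objective}"

org = Organization("support the freedom and intellectual strength of intelligent life")
print(org.work())
org.integrity_check(corrupted=True, sovereign=True)
print(org.work())  # -> halted
```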

 

We still need that high level objective to guide us in this case.

 

As a side note, these rules suggest an odd form of self-awareness - in a very specific sense, an organization that can pivot or stop acting if its objectives no longer serve the needs of intelligent life could be considered more "aware" than one that cannot.

 

You don't have to dig too hard to find out that Hans Moravec thought of this first. From his 1993 paper, The Age of Robots:

 

"Humans can buy enormous safety by mandating an elaborate analog of Isaac Asimov's three "Laws of Robotics" in this corporate character--perhaps the entire body of corporate law, with human rights and anti-trust provisions, and appropriate relative weightings to resolve conflicts. Robot corporations so constituted will have no desire to cheat, though they may sometimes find creative interpretations of the laws--which will consequently require a period of tuning to insure their intended spirit."

 

Hans Moravec also speculated that organizations have to have some objectives that change slowly and some objectives that change quickly, analogous to a constitution and a set of enforcement laws.  I don't know if I'm down with that or not.

 

Stuart Russell has most certainly read that one. And, by the way, that Moravec paper is appropriately wild futurism that comes highly recommended.

 

Experiments

 

The most fun experiment to run would be to add objectives, a feedback loop for modifying those objectives, and a corruption shut-off algorithm to digital organizations. Unfortunately, in 2019 digital organizations are so primitive and buggy as to make that an exercise in frustration. I'm looking forward to experimenting when those systems are more mature, in 2020 or later.

 

The experiment that is most practical to run is to establish a new legal organization in a country where I am able to do so, and build a legal Operating Agreement (OA) that has rules for the following (a sketch of rule 2's checksum follows the list):

 

1) Operating in the service of an objective stated in an external digital system, and changing that objective via a feedback loop that takes as input validated external measurements of the effectiveness of The Objective Itself.

 

2) Establishing a corruption checksum mechanism - putting all organizational activity on hold if the organization is deemed to be corrupted - based on a similar external input system to that of rule 1.
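
One way the corruption checksum could work on paper, with the hashing scheme and field names as my own assumptions rather than part of any real OA: hash the objective the organization is actually operating under and compare it against the value attested in the external digital system. Any mismatch, or an externally set corruption flag, puts activity on hold.

```python
import hashlib

def checksum(objective: str) -> str:
    # A simple content hash of the objective text.
    return hashlib.sha256(objective.encode("utf-8")).hexdigest()

def may_operate(local_objective: str, external_store: dict) -> bool:
    # Rule 2: operate only if our working objective matches the externally
    # attested one and the external system has not flagged corruption.
    if external_store.get("corrupt_flag"):
        return False
    return checksum(local_objective) == external_store["objective_checksum"]

external_store = {
    "objective_checksum": checksum("support intelligent life"),
    "corrupt_flag": False,
}
print(may_operate("support intelligent life", external_store))  # True
print(may_operate("maximize profit", external_store))           # False: hold all activity
```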

 

This type of governance-design paper experiment is a good warmup to logically prove out some theories before digital organizations are ready to encode them. It is something fun to run on paper at first, making the OA legally reference digital objectives that live outside of it.

 

I could see building automation into the paper experiment with digital oracles that notify me of broken assumptions that underlie the current objective. Digital notifications could be sent via email and listed in a reporting system to warn the organization that the parameters of the original objectives had been exceeded.
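
A sketch of one such oracle follows. The SMTP host, addresses, and assumption bounds are all placeholders I've made up: each oracle watches one assumption underlying the current objective and emails a warning when the assumption's bounds are exceeded.

```python
import smtplib
from email.message import EmailMessage

# Assumptions underlying the current objective, with illustrative bounds.
ASSUMPTIONS = [
    {"name": "population_served", "measured": 1200, "max_allowed": 1000},
    {"name": "budget_remaining", "measured": 50, "min_allowed": 10},
]

def broken_assumptions(assumptions):
    # Return the names of assumptions whose measured value is out of bounds.
    broken = []
    for a in assumptions:
        if "max_allowed" in a and a["measured"] > a["max_allowed"]:
            broken.append(a["name"])
        if "min_allowed" in a and a["measured"] < a["min_allowed"]:
            broken.append(a["name"])
    return broken

def notify(names, smtp_host="localhost"):
    # Email a warning that objective parameters have been exceeded.
    msg = EmailMessage()
    msg["Subject"] = "Objective parameters exceeded: " + ", ".join(names)
    msg["From"] = "oracle@example.org"
    msg["To"] = "governance@example.org"
    msg.set_content("Assumptions underlying the current objective no longer hold: "
                    + ", ".join(names))
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

names = broken_assumptions(ASSUMPTIONS)
if names:
    notify(names)
```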

 

In this way, I could slowly begin to digitize the process of objective setting, and reason about how automation could be intertwined with collaborative human governance. 

 

Small steps.
