Goal-Setting Feedback Loops



Theory

 

Feedback to adjust objectives for relevance

 

We set goals because they help us get things done.  Goal setting is rewarded by human biology at the individual level, and it has a similar purpose and reward system at the organizational level.

 

However, Stuart Russell has observed that when goals are established without feedback loops to adjust the objectives, the system can produce outcomes opposed to the very ones it was established to generate.  He talks about feedback from external sources, but not from internal sources.

 

Sidebar - Our Top-Level Objective:

 

Skip this section if you don't want to think too hard.

 

At some point you will ask what the Top-Level objective is that drives all the lower ones.  The idea of providing feedback for a Top-Level Objective seems like some kind of weird self-referential paradox.  I mean, you adjust your objective because it doesn't meet your...objective? 

 

From the perspective of evolution, the spoils appear to go to the beings that become smarter, and we are allowed to hope that the smarter being will someday have a more insightful top-level objective.  The top-level objective of an evolving being will change only through internal feedback, via changes in a being's intelligence, knowledge of the universe, and other aspects of its composition.  So: Top-Level objective = continue to evolve into smarter beings.

 

If you don't want to devolve into paradox, I recommend you accept that the top-level objective of a human is dictated by evolutionary forces.  In this way, we can accept all kinds of difficult concepts, like absolute determinism and a view of the universe as a whole being - and just call this the view of nature, or perhaps the Systems-Layer View, where we don't need to think about self-awareness or consciousness.  

 

Thinking at the Application-Layer, where we have a self-aware human mind and are self-determined beings, any objective subservient to our top-level objective will be directed by two things - internal feedback from the top-level objective, and external feedback from the outside world.  This is how all lower-level objectives stay in alignment with needs.
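As a toy illustration of that two-channel feedback (my own sketch in Python, not anything from Russell - the function, names, and numbers are all hypothetical), a lower-level objective might be adjusted like this:

# Hypothetical sketch: a lower-level objective nudged by two feedback signals.
# Neither the weights nor the scenario come from the text above; they are invented.

def adjust_objective(target, internal_feedback, external_feedback, rate=0.1):
    """Nudge a numeric lower-level target in the direction the feedback suggests."""
    return target + rate * (internal_feedback + external_feedback)

target = 100.0                   # e.g. units produced per week
for week in range(5):
    internal = -2.0              # top-level objective says: slightly too ambitious
    external = -3.0              # outside-world measurements say: conditions support less
    target = adjust_objective(target, internal, external)
    print(f"week {week}: target adjusted to {target:.1f}")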

 

Application to Organizations

 

That link back there was kind of important - check it out if you haven't already.  As AI researchers seek general purpose rules to establish in systems that must serve intelligent life, they are reflecting on the woeful inability of the basic non-digital systems we have in place today to do just that.  

 

Some thoughtful and creative people implement feedback systems to help humans change their behavior, and this is commonly done at the individual scale.

 

In contrast, non-digital organizational structures often do not allow overall objectives to change (an example would be legal systems that enforce corporate profit maximization) - or they make it extremely unlikely that objectives will change, as with many federal processes for amending constitutions.

 

The largest of organizations, such as the UN, consider modifying their goals infrequently, and they process their feedback loops slowly - so slowly that the mechanism of feedback (capturing data, putting it in perspective relative to intelligent life, understanding its relevance to future outcomes, and taking action) can take years.  There is some intent behind that, as a slow change of objectives makes it easier to build long-term plans - and bigger organizations are often needed for larger, longer-term projects.   

 

But the slow speed of feedback mechanisms also reduces the value of the feedback process as the rate of change in the environment increases.  Thus, in general, the outcomes of organizations cease to fit needs as the scale of organizational bodies increases and the rate of change in the environment increases.  


In some cases this can be handled via regulation - keeping organizations small, missions flexible, and structure to the minimum required to organize around objectives that have a limited lifespan.  But this has proven to be challenging.  A bit of a side story about that - and to give credit where credit is due - the founders of the United States tried to outlaw profit-driven corporations from the first days of their country's founding.  They had seen enough to understand that unregulated corporations running amok was a recipe for revolution. 

 

Greed quickly brought profit-driven corporations back to the US, put them in charge of the country, and probably shortened human existence considerably.  This entire discussion might even be moot at this point from a long-term perspective - we may not be the species that is smart enough to understand itself and survive to become multi-planetary.  

 

Regulation didn't work because the long-term regulatory system was corrupted by short-term individual greed.  The type of regulation didn't matter - constraints on size, scope, and lifetime of the organization all fail when the external regulatory system is corrupted.

 

Two Laws of Organizations

 

This brings me to a theory of two laws of organizations that support intelligent life (we have to ignore the whole host of missing steps and problems with these for now):

 

1) Organizations that support intelligent life should consider their objectives as variables that change in real time based on their understanding of natural conditions.

 

2) Organizations that support intelligent life should stop work if they become corrupted and no longer have sovereign control.

 

An organization without ANY objective is not something that I am suggesting.  I am suggesting that both digital and non-digital organizations be treated more like artificial intelligence. 
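To make "treated more like artificial intelligence" a little more concrete, here is a minimal Python sketch of the two laws as I read them; the class, field names, and example values are my own invention, not part of any existing system.

# Hypothetical sketch of the two laws: the objective is a mutable variable,
# and all activity halts if the organization loses sovereign control.

class Organization:
    def __init__(self, objective):
        self.objective = objective          # Law 1: a variable, not a constant
        self.corrupted = False

    def update_objective(self, observed_conditions):
        # Law 1: re-derive the objective from the current understanding of conditions.
        self.objective = observed_conditions.get("needed_outcome", self.objective)

    def mark_corrupted(self):
        # Law 2: loss of sovereign control flips the halt switch.
        self.corrupted = True

    def act(self):
        if self.corrupted:
            return "halted: no longer under sovereign control"
        return f"working toward: {self.objective}"

org = Organization("maximize profit")
org.update_objective({"needed_outcome": "serve the needs of intelligent life"})
print(org.act())
org.mark_corrupted()
print(org.act())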

 

This suggestion is an odd form of self-awareness - in a very specific sense, an organization that can pivot or stop acting if its objectives no longer serve the needs of intelligent life could be considered more "aware" than one that cannot.

 

You don't have to dig too hard to find out that Hans Moravec thought of this first.  From his 1993 paper, The Age of Robots:

 

"Humans can buy enormous safety by mandating an elaborate analog of Isaac Asimov's three "Laws of Robotics" in this corporate character--perhaps the entire body of corporate law, with human rights and anti-trust provisions, and appropriate relative weightings to resolve conflicts. Robot corporations so constituted will have no desire to cheat, though they may sometimes find creative interpretations of the laws--which will consequently require a period of tuning to insure their intended spirit."

 

Hans Moravec also speculated that organizations have to have some objectives that change slowly and some objectives that change quickly, analogous to a constitution and a set of enforcement laws.  I don't know if I'm down with that or not.

 

Stuart Russell has most certainly read that one.  And, by the way, that Moravec paper is appropriately wild futurism that comes highly recommended.

 

Experiments to run

 

Decentralized Autonomous Organizations 

 

The most fun experiment to run would be to add objectives, a feedback loop for modifying those objectives, and a corruption shut-off algorithm to digital organizations.  Unfortunately, in 2019 digital organizations are so primitive and buggy as to make that an exercise in frustration.  Looking forward to experimenting when those systems are more mature, in 2020 or later.  

 

What would be *really* fun, but which we haven't even talked about: defining the objective of a DAO as bringing more intelligent beings than ourselves into existence.  In general, we bring beings of the same intelligence into existence, although the reward for intelligence is rapidly trending upward.  That is the most interesting experiment to run in simulation, but the most dangerous as well.  

 

Classical organizations

 

The experiment that is most practical to run is to establish a new legal organization in a country I am able to establish one in, and build a legal Operating Agreement (OA) that has rules for:

 

1) Operating in the service of an objective stated in an external, digital system, and changing that objective via a feedback loop that takes as its input validated external measurements of the effectiveness of The Objective Itself.  

 

2) Establishing a corruption checksum mechanism - putting all organizational activity on hold if the organization is deemed to be corrupted - based on an external input system similar to that of rule 1 (a minimal sketch of both rules follows below).
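Here is that sketch in Python, assuming the external digital system is nothing more than a small JSON record; the field names, scores, and thresholds are invented purely for illustration.

# Hypothetical sketch of the two OA rules: the binding objective lives in an
# external digital record, and corruption signals put all activity on hold.

import json

EXTERNAL_RECORD = json.loads("""
{
  "objective": "reduce supply-chain emissions 10% year over year",
  "effectiveness_score": 0.42,
  "effectiveness_floor": 0.5,
  "corruption_signals": 1,
  "corruption_quorum": 3
}
""")

def objective_status(record):
    # Rule 1: if validated external measurements say The Objective Itself is
    # underperforming, the feedback loop flags it for revision.
    if record["effectiveness_score"] < record["effectiveness_floor"]:
        return ("REVISE", record["objective"])
    return ("PURSUE", record["objective"])

def corruption_hold(record):
    # Rule 2: enough independent corruption signals triggers a full hold.
    return record["corruption_signals"] >= record["corruption_quorum"]

if corruption_hold(EXTERNAL_RECORD):
    print("All organizational activity on hold pending review.")
else:
    print(*objective_status(EXTERNAL_RECORD), sep=" -> ")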

 

These kinds of rules are what responsible business people are talking about (this is a neat interview with Benioff), at a much higher level, and they are setting charitable objectives whose value is often very hard to quantify.  I think this type of experiment in understanding the game-theoretic outcomes of changes in corporate and governmental rulesets is a good warmup to logically prove out some theories before digital organizations are ready to encode them.  It is something fun to run on paper at first, legally requiring the board to refer to digital objectives outside of the OA. 

 

This is the kind of optimal-balance-with-feedback-loop experiment that lawyers and legislators run all the time.  More can be learned from the vast history of law and governance than from running simulations of digital organizations.  Ideally, one would want to run this experiment with a legal scholar or university team, and various political theorists.

 

I could see building automation into the paper-experiment with digital oracles that notify me of broken assumptions that underlie the current objective.  Digital notifications could be sent via email and listed in a reporting system to warn the organization that the parameters of the original objectives had been exceeded. 
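As a sketch of what such an oracle could look like (again in Python, with the assumption names, bands, and the stubbed-out notification all invented for illustration - nothing here is an existing service):

# Hypothetical sketch of a paper-experiment oracle: each objective carries the
# assumptions it was set under, and a warning fires when fresh measurements fall
# outside those parameters. The email/reporting step is stubbed out as a print.

ASSUMPTIONS = {
    "annual_demand_growth_pct": (0.0, 5.0),   # objective assumed growth stays in this band
    "regulatory_regime": {"stable"},           # objective assumed no regime change
}

def broken_assumptions(measurements):
    """Return the names of assumptions the latest measurements have broken."""
    broken = []
    low, high = ASSUMPTIONS["annual_demand_growth_pct"]
    if not (low <= measurements["annual_demand_growth_pct"] <= high):
        broken.append("annual_demand_growth_pct")
    if measurements["regulatory_regime"] not in ASSUMPTIONS["regulatory_regime"]:
        broken.append("regulatory_regime")
    return broken

def notify(names):
    # In the real experiment this would send an email and append to a reporting system.
    for name in names:
        print(f"WARNING: assumption '{name}' underlying the current objective has been exceeded")

notify(broken_assumptions({"annual_demand_growth_pct": 9.2, "regulatory_regime": "stable"}))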

 

In this way, I could slowly begin to digitize the process of objective setting, and reason about how automation could be intertwined with collaborative human governance. 

 

Small steps.