Goal-Setting Feedback Loops



Theory

 

Feedback to adjust objectives for relevance

 

We set goals because they help us get things done.  Goal setting is rewarded by human biology on the individual level, and it has a similar purpose and reward system at the organizational level.

 

However, Stuart Russell has noted that when goals are established without feedback loops to adjust objectives as the environment changes, the outcomes end up opposing the ones the system was established to generate.  He talks about feedback from external sources, but not from internal sources.
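
To make that concrete, here is a minimal sketch of such a loop in code. Everything in it (the Objective shape, the drift threshold, the observe and revise callables) is hypothetical, just to show the mechanism of revising an objective when the conditions it was set under stop holding:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """A goal plus the environmental conditions it was set under."""
    description: str
    assumed_conditions: dict  # e.g. {"demand": 100, "cost": 5.0}

def drift(assumed: dict, observed: dict) -> float:
    """Fraction of the original assumptions that no longer hold."""
    broken = sum(1 for key, value in assumed.items() if observed.get(key) != value)
    return broken / max(len(assumed), 1)

def feedback_step(objective, observe, revise, threshold=0.25):
    """One pass of the loop: measure the environment (external feedback),
    and revise the objective itself if too many assumptions have broken."""
    observed = observe()
    if drift(objective.assumed_conditions, observed) > threshold:
        objective = revise(objective, observed)
    return objective
```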

 

Sidebar - Our Top-Level Objective:

 

Skip this unless you want to get into some philosophical stuff.

 

At some point you may ask what your Top-Level objective is - the one that drives all the lower ones.  The idea of providing feedback for a Top-Level Objective seems like some kind of weird self-referential paradox.  I mean, you adjust your top-level objective because, based on feedback, it doesn't meet your top-level objective?

 

The Systems Layer:

 

You just ran headlong into reality.  Life, as we know it, isn't driven by pithy statements of your objectives - it is driven by external input to a complex weighted network of neurons and probably some other stuff we don't understand, yet.

 

If you don't want to devolve into paradox or metaphysics, I recommend you accept that the top-level objective of a human is dictated by evolutionary forces.  You might have a top-level objective, but you didn't set it on your own.

 

In this way, we can accept all kinds of difficult concepts, like absolute determinism and a view of the universe as a whole being - and just call this the Systems-Layer View where we don't need to think about self-awareness or consciousness.  Our top-level objective is just whatever evolution set it to - and we can't control it.


From the perspective of evolution, the spoils ultimately appear to go to the beings that become smarter.  Creating smarter beings seems to be the top-level objective given to us by evolutionary competition.  It doesn't even matter if it's more complicated than that - what matters is that we accept that we didn't set it, and we don't control it.  The top-level objective of an evolving being will change only through internal feedback via changes in a being's intelligence, knowledge of the universe, and the rest of its composition.

 

For now: Top-Level objective = continue to evolve into smarter beings. 

 

The Application Layer:

 

From the perspective of a self-aware human mind, we are self-determined beings.  Any objective subservient to our top-level objective is directed by two things - internal feedback from that top-level objective, and external feedback from the outside world.  This is how all lower-level objectives stay in alignment with our ultimate needs.

 

In a conscious mind, the Application-layer self can ignore the fact that ALL feedback is really external - that is, our top-level objective is not our own.

 

Application to Organizations

 

That link back there (Stuart Russell on control) was kind of important - check it out if you haven't already.  As AI researchers seek general-purpose rules to establish in systems that must serve intelligent life, they are reflecting on the woeful inability of the basic non-digital systems we have in place today to do just that.

 

Some thoughtful and creative people implement feedback systems to help humans change their behavior, and this is commonly done on the individual scale.

 

In contrast, non-digital organizational structures often do not allow for changing overall objectives (an example would be legal systems that enforce corporate profit maximization)  - or they make it extremely unlikely that objectives will change - as with many federal systems for adjusting constitutions.

 

The largest of organizations, such as the UN, consider modifying their goals infrequently, and process their feedback loops slowly - so that the mechanism of feedback (capturing data, putting it in relevant perspective to intelligent life, understanding the relevance to future outcomes, and taking action) can take years.  That is largely intentional, as a slow change of objectives makes it easier to build long-term plans - and bigger organizations are often needed for larger, longer-term projects.   

 

But the slow speed of feedback mechanisms also reduces the value of the feedback process as the rate of change in the environment increases.  Thus, in general, the outcomes of organizations cease to fit needs as the scale of organizational bodies increases and the rate of change in the environment increases.


In some cases this can be handled via regulation - keeping organizations small, missions flexible, and structure to the minimum required to organize around objectives that have a limited lifespan.  But this has proven to be challenging.  

 

A bit of a side story about that - and to give credit where credit is due - the founders of the United States tried to outlaw profit-driven corporations from the first days of their country's founding.  They had seen enough to understand that unregulated corporations of unlimited size running amok were a recipe for revolution.  Greed quickly brought profit-driven corporations back to the US, put them in charge of the country, and probably shortened human existence considerably.

 

Regulation didn't work because the long-term regulatory system was corrupted by short-term individual greed.  The type of regulation didn't matter - constraints on size, scope, and lifetime of the organization all fall prey to corruption of higher-level regulatory systems.

 

Two Laws of Organizations

 

This brings me to a theory of two laws of organizations that support intelligent life (we have to ignore the whole host of missing steps and problems with these for now):

 

1) Organizations that support intelligent life should consider their objectives as variables that change in real time based on their understanding of natural conditions.

 

2) Organizations that support intelligent life should stop work if they become corrupted and no longer have sovereign control.

 

An organization without ANY objective is not something that I am suggesting.  I am suggesting that both digital and non-digital organizations be treated more like artificial intelligence. 
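
As a rough illustration of what "treated more like artificial intelligence" could mean, here is a toy sketch of the two laws in code. The class and signals are made up for illustration, and "corruption" is just a boolean supplied from outside:

```python
class Organization:
    """Toy model of the two laws: a mutable objective (law 1) and a
    hard stop when corruption is detected (law 2)."""

    def __init__(self, objective):
        self.objective = objective
        self.halted = False

    def update_objective(self, new_objective, conditions_changed):
        # Law 1: the objective is a variable, revised as natural conditions change.
        if conditions_changed:
            self.objective = new_objective

    def report_corruption(self, corrupted):
        # Law 2: stop work if sovereign control is lost.
        if corrupted:
            self.halted = True

    def act(self):
        if self.halted:
            raise RuntimeError("organization halted: corruption detected")
        return f"working toward: {self.objective}"
```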

 

This suggestion is an odd form of self-awareness - in a very specific sense, an organization that can pivot or stop acting if its objectives no longer serve the needs of intelligent life could be considered more "aware" than one that cannot.

 

You don't have to dig too hard to find out that Hans Moravec thought of this first.  From his 1993 paper, The Age of Robotics:

 

"Humans can buy enormous safety by mandating an elaborate analog of Isaac Asimov's three "Laws of Robotics" in this corporate character--perhaps the entire body of corporate law, with human rights and anti-trust provisions, and appropriate relative weightings to resolve conflicts. Robot corporations so constituted will have no desire to cheat, though they may sometimes find creative interpretations of the laws--which will consequently require a period of tuning to insure their intended spirit."

 

Hans Moravec also speculated that organizations have to have some objectives that change slowly and some objectives that change quickly, analogous to a constitution and a set of enforcement laws.  I don't know if I'm down with that or not.

 

Stuart Russell has most certainly read that one.  And, by the way, that Moravec paper is appropriately wild futurism that comes highly recommended.

 

Experiments to run

 

Decentralized Autonomous Organizations 

 

In DAOs, which are easier for me to stomach if I call them digital organizations, at least some of the decision-making and governance logic is encoded in software.  Usually digital organizations are hybrids of simple deterministic computer-based vote tallying machines and digital contracts that reward certain human behaviors.

 

The most fun experiment to run would be to add three things to digital organizations: objectives, a feedback loop for modifying those objectives, and a corruption shut-off algorithm.  Unfortunately, in 2019 digital organizations are so primitive and buggy as to make that an exercise in frustration.  I am looking forward to experimenting when those systems are more mature in 2020 or later.
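
For when those systems do mature, here is a toy sketch of what that experiment could look like, layered on a deterministic vote-tallying core. The proposal format, quorum value, and corruption signals are all hypothetical simplifications, not any real DAO framework:

```python
class DigitalOrganization:
    """Toy DAO: a deterministic vote-tallying core plus the three additions
    above - an objective, a feedback loop to modify it, and a corruption
    shut-off."""

    def __init__(self, objective, members, quorum=0.5):
        self.objective = objective
        self.members = members
        self.quorum = quorum
        self.shut_off = False

    def tally(self, votes):
        """Deterministic vote tallying: passes if yes-votes exceed the quorum."""
        yes = sum(1 for member in self.members if votes.get(member))
        return yes / max(len(self.members), 1) > self.quorum

    def propose_new_objective(self, new_objective, votes):
        """Feedback loop: members vote to replace the objective."""
        if not self.shut_off and self.tally(votes):
            self.objective = new_objective

    def corruption_check(self, independent_signals):
        """Shut-off algorithm: halt if a majority of external signals say 'corrupt'."""
        if sum(independent_signals) > len(independent_signals) / 2:
            self.shut_off = True
```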

 

What would be *really* fun, but something we haven't even talked about: defining the objective of a DAO as bringing more intelligent beings than ourselves into existence.  That is the most interesting experiment to run in simulation, but possibly the most dangerous as well.  In general, we bring beings of the same intelligence into existence.  However, the reward for intelligence is rapidly trending upward, so I expect this experiment is going on right now.

 

Classical organizations

 

The experiment that is most practical to run is to establish a new legal organization in a country where I am able to establish one, and build a legal Operating Agreement (OA) that has rules for:

 

1) Operating in the service of an objective stated in an external, digital system, and changing that objective via a feedback loop that takes, as input validation, external measurements of the effectiveness of The Objective Itself.  

 

2) Establishing a corruption checksum mechanism - putting all organizational activity on hold if the organization is deemed to be corrupted - based on a similar external input system to that of rule 1.
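
One way rule 2's checksum idea might be sketched: hash the externally stored objective record, and put everything on hold if the record has been tampered with or if external reviewers flag corruption. The record format, reviewer signal, and majority threshold below are all placeholders:

```python
import hashlib
import json

def checksum(record):
    """Deterministic hash of the externally stored objective record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def must_halt(record, published_checksum, reviewer_flags):
    """Rule 2: return True (all activity on hold) if the objective record has
    been tampered with, or a majority of external reviewers flag corruption."""
    tampered = checksum(record) != published_checksum
    flagged = sum(reviewer_flags) > len(reviewer_flags) / 2
    return tampered or flagged

# The board records the checksum when the objective is adopted,
# then re-verifies it before every major decision.
objective_record = {"objective": "reduce emissions", "adopted": "2020-01-01"}
published = checksum(objective_record)
print(must_halt(objective_record, published, reviewer_flags=[False, False, True]))
```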

 

These kinds of rules are what responsible business people are talking about, at a much higher level (this is a neat interview with Benioff), and they are setting charitable objectives whose value is often very hard to quantify.  I think this type of experiment in understanding the game-theoretic outcomes of changes in corporate and governmental rulesets is a good warmup to logically prove out some theories before digital organizations are ready to encode them.  It is something fun to run on paper at first, legally requiring the board to refer to digital objectives outside of the OA.

 

This is the kind of "find the optimal balance with a feedback loop" experiment that lawyers and legislators do all the time.  More can be learned from the vast history of law and governance than from running simulations of digital organizations.  Ideally, to run this experiment, one would want to work with a legal scholar or university team, and various political theorists.

 

I could see building automation into the paper experiment with digital oracles that notify me of broken assumptions that underlie the current objective.  Digital notifications could be sent via email and listed in a reporting system to warn the organization that the parameters of the original objectives had been exceeded.
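
A sketch of one such oracle, assuming the objective's assumptions are stored as simple numeric ranges and that notification is just an email. The parameter names, ranges, addresses, and the local mail relay are all placeholders:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical assumption register: parameter -> (low, high) range the
# current objective was adopted under.
ASSUMPTIONS = {"annual_budget": (90_000, 110_000), "team_size": (3, 10)}

def broken_assumptions(observed):
    """Return the parameters whose observed value falls outside the range
    assumed when the objective was adopted."""
    broken = []
    for name, (low, high) in ASSUMPTIONS.items():
        value = observed.get(name)
        if value is not None and not low <= value <= high:
            broken.append(f"{name}={value} outside [{low}, {high}]")
    return broken

def notify(broken, to_addr="board@example.org"):
    """Email a warning that the parameters of the original objective were exceeded."""
    msg = EmailMessage()
    msg["Subject"] = "Objective assumptions broken"
    msg["From"] = "oracle@example.org"
    msg["To"] = to_addr
    msg.set_content("\n".join(broken))
    with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
        server.send_message(msg)

broken = broken_assumptions({"annual_budget": 140_000, "team_size": 5})
if broken:
    notify(broken)
```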

 

In this way, I could slowly begin to digitize the process of objective setting, and reason about how automation could be intertwined with collaborative human governance. 

 

Small steps.