Goals and Meta-Goals

Ever have the experience that you seriously think you're trying to achieve one thing, but then in hindsight, years later, you look back and feel like your past self was actually trying to achieve something else entirely?

I.e., you weren't really after what you thought you were after?

I've had that experience often -- for example, during part of the time I was involved with my first startup company, Webmind Inc. I thought I was chasing two goals: 1) making a lot of money, and 2) getting advanced AI built. In hindsight, though, the actions I was taking often weren't balancing these two goals very well. Much of the time I was straightforwardly pursuing AI research -- and fooling myself that what I was doing served the money goal more than it really did. Of course this was easy to do at that time, because it was the late-1990s dot-com boom, and no one really knew the secret to making money in that period anyway.

My explicit goals with that company were making money and getting advanced AI built, with roughly equal weighting. That's what I thought I was doing, at the time.

My implicit goals, the goals it looks in hindsight like my actions were pursuing, were weighted a bit differently: maybe 70% AI and 30% money. That's what I now think I was doing.

In regard to that company, at that point in time, I had a poorly-aligned goal system.

These concepts are fundamentally important.

As an external observer, one can look at a system and identify the goals it seems to be pursuing: i.e., in mathematical terms, the functions it seems to be trying to maximize. These are the system's implicit goals.

A system can pursue implicit goals, in this sense, even if it lacks any concept of what a "goal" is.

Some systems, on the other hand, also have explicit goals, meaning that they model themselves as pursuing certain goals.

A well-aligned goal system is a set of explicit goals that fairly accurately reflect the implicit goals of the intelligent system containing the goal system.

Achieving a well-aligned goal system generally requires long practice and deep self-understanding.

Naturally, this is much easier if one's (explicit and implicit) goals are simple ones!
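To make the idea of well-alignedness a bit more concrete, here is a toy sketch in Python -- entirely my own illustration, with made-up goal names, action log, and alignment measure. The idea: an external observer estimates a system's implicit goal weights from how it actually spends its actions, then compares these to the system's declared explicit weights.

```python
# Toy sketch: measuring how well-aligned a goal system is, given
# explicit goal weights and a log of actions tagged by the goal
# each one actually served. All names and numbers are illustrative.

def implicit_weights(action_log):
    """Estimate implicit goal weights as the fraction of actions
    devoted to each goal (an external observer's reconstruction)."""
    counts = {}
    for goal in action_log:
        counts[goal] = counts.get(goal, 0) + 1
    total = sum(counts.values())
    return {goal: n / total for goal, n in counts.items()}

def alignment(explicit, implicit):
    """1.0 means the explicit goals perfectly reflect the implicit
    ones; 0.0 means no overlap. Uses total variation distance."""
    goals = set(explicit) | set(implicit)
    tv = sum(abs(explicit.get(g, 0) - implicit.get(g, 0)) for g in goals) / 2
    return 1 - tv

# The Webmind story from above, roughly:
explicit = {"money": 0.5, "ai": 0.5}      # what I thought I was doing
log = ["ai"] * 7 + ["money"] * 3          # what my actions actually served
print(alignment(explicit, implicit_weights(log)))  # 0.8
```

On this crude scale the Webmind numbers come out around 0.8 -- which sounds decent, but a twenty-point gap in weighting can distort a lot of decisions.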

Meta-Goals

Well-alignedness is an example of a meta-goal: a property of a goal system that is not tied to the specific content of the goals, but rather to the general nature of the goal system.

Another meta-goal is consistency.

Consider a goal system as a set of top-level goals, together with other subgoals derived from these. Then a goal system is fundamentally inconsistent to the extent that achieving any one of the top-level goals decreases the level of achievement of the other top-level goals. Fundamental consistency is the opposite of this.

There is also a notion of subgoal inconsistency: the extent to which, on average, achieving one of the subgoals derived from a particular top-level goal decreases the level of achievement of the subgoals derived from other top-level goals.

Even if a system is fundamentally consistent, if it's not subgoal consistent, it may have a very hard time achieving its goals.
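Here is a minimal sketch of these two measures, assuming one could somehow quantify the impact of achieving one goal on the achievement of another (the goal names and impact numbers below are invented for illustration):

```python
# Toy sketch of the two inconsistency measures. impact[a][b] is the
# (assumed quantifiable) effect of achieving goal a on goal b:
# positive values help it, negative values hurt it.

from itertools import permutations

def inconsistency(goals, impact):
    """Average damage that achieving one goal in the set does to the
    others; 0 means the set is (fundamentally or subgoal-) consistent."""
    pairs = list(permutations(goals, 2))
    return sum(max(0.0, -impact[a][b]) for a, b in pairs) / len(pairs)

# Two top-level goals that barely conflict...
impact = {
    "make_money": {"make_money": 1.0, "build_agi": -0.1},
    "build_agi":  {"make_money": -0.1, "build_agi": 1.0},
}
print(inconsistency(["make_money", "build_agi"], impact))  # 0.1

# ...but whose derived subgoals clash badly -- subgoal inconsistency:
sub_impact = {
    "ship_product": {"ship_product": 1.0, "do_research": -0.8},
    "do_research":  {"ship_product": -0.8, "do_research": 1.0},
}
print(inconsistency(["ship_product", "do_research"], sub_impact))  # 0.8
```

The second pair shows exactly the situation just described: the top-level goals are nearly consistent, but the subgoals derived from them clash badly.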

A related problem is subgoal alienation -- sometimes a subgoal is derived as a way of achieving some other goal or subgoal, but then the subgoal persists even after the goal it was serving has been abandoned. Subgoal alienation leads to the accidental creation of new top-level goals, which leads to inconsistency and poor goal system alignment.

I've known a number of people who originally took high-paying, unrewarding jobs in order to save up money to pursue some other goal. But after a while the other goal became less and less important to them, and the high-paying job became an end in itself. Sometimes this is subgoal alienation; sometimes it's a matter of a poorly-aligned goal system (maybe the high pay was really their main goal all along, and they were just fooling themselves about the other goal).

Sexuality is a massive case of subgoal alienation, on the cultural and biological level. It emerged as a subgoal of reproduction, but -- especially since the advent of birth control -- it has liberated itself from these roots and now serves as a top-level goal itself for most adult humans.
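In data-structure terms, one might picture subgoal alienation in a goal graph where each subgoal remembers which goal it was derived to serve. A hypothetical sketch, using the high-paying-job example from above:

```python
# Toy goal graph illustrating subgoal alienation. A subgoal records
# the goal it was derived to serve; if that parent is abandoned while
# the subgoal stays active, the subgoal has quietly become a de facto
# top-level goal.

class GoalSystem:
    def __init__(self):
        self.active = set()
        self.parent = {}   # subgoal -> goal it was derived from

    def adopt(self, goal, parent=None):
        self.active.add(goal)
        if parent is not None:
            self.parent[goal] = parent

    def abandon(self, goal):
        self.active.discard(goal)

    def alienated(self):
        """Subgoals still pursued although their parent goal is gone."""
        return {g for g in self.active
                if g in self.parent and self.parent[g] not in self.active}

gs = GoalSystem()
gs.adopt("travel_the_world")
gs.adopt("high_paying_job", parent="travel_the_world")
gs.abandon("travel_the_world")   # the original dream quietly fades
print(gs.alienated())            # {'high_paying_job'} -- an end in itself
```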

Are Humans Goal-Oriented?

Humans, by nature, are not that thoroughly goal-oriented. We have certain in-built biological goals but we've done well at subverting these. Some of us adopt our own invented or culturally-acquired goals and put a lot of effort into pursuing them. But a lot of our behavior is just plain spontaneous and not terribly goal-oriented.

Of course, pretty much any behavior could be modeled as goal-oriented; i.e., any sequence of actions can be viewed as an attempt to maximize some quantity. But the question is whether this is a simple and natural way to model the behavior in question -- does it pass the Occam's Razor test?

Will Advanced AIs Be Goal-Oriented?

One can imagine artificial minds with a vastly more thorough goal-orientation than humans, and much greater attention to meta-goals. Most likely such minds will exist one day -- alongside, potentially, artificial minds that are much less goal-oriented, much more loosely organized, than humans.

It also seems likely that, even in a mind more thoroughly devoted to goal-achievement than the human mind, a certain percentage of resources will wind up being allocated to spontaneous activity that is not explicitly goal-oriented in any of its details ... because this spontaneous activity may generate creative ideas that will be helpful for achievement of various goals.

Evolution of Top-Level Goals

I've spoken above about goal systems as possessing "top-level goals," from which other goals are derived as subgoals.

Are these top-level goals then invariant as a mind evolves and learns?

This is not how humans work, and it need not be how AIs work.

Indeed, if one has a mind that gradually increases its processing power, one might expect that as it grows more intelligent it will discover new potential top-level goals that its earlier versions would not have been able to understand.

How can top-level goals get revised?

Not in a goal-oriented way, because if goal G is put in charge of revising goal G1, this really just means that goal G1 is not top-level -- G is top-level.

Top-level goals can get revised via spontaneous, non-goal-oriented activity -- and the occurrence of this phenomenon in intelligent systems seems to be a fundamental aspect of the growth of the cosmos.
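One crude way to picture this -- again just a sketch, with an invented drift mechanism: the step that revises the top-level goals deliberately consults no goal at all, so no hidden goal G is "in charge" of the revision.

```python
# Toy sketch of spontaneous top-level goal drift. The drift step sits
# outside any optimization loop and doesn't evaluate the goals, so
# the revision itself is not goal-oriented.

import random

top_level = {"make_money": 0.3, "build_agi": 0.7}
novel_goals = ["understand_minds", "make_art", "explore_the_cosmos"]

def spontaneous_drift(goals, rate=0.1):
    """Occasionally adopt a newly discovered goal, and randomly
    perturb the weights of the rest -- not in service of anything."""
    goals = dict(goals)
    if random.random() < rate:
        goals[random.choice(novel_goals)] = random.uniform(0.1, 0.5)
    for g in goals:
        goals[g] = max(0.01, goals[g] + random.gauss(0, 0.05))
    total = sum(goals.values())
    return {g: w / total for g, w in goals.items()}

random.seed(0)
for year in range(10):   # goals grow, goals drift...
    top_level = spontaneous_drift(top_level)
print(top_level)
```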

Goals grow, goals drift -- and thus the universe evolves, via both pursuit and spontaneous development of goals.
