
Thursday 28 February 2008

Facing the Quasi-Autonomous Robot Monsters under the bed

What If My Toaster Burns My Bagels Because It Hates Me?

Given the subjects I tend to focus on in my writing, I’m asked a lot of questions regarding issues of autonomy, will, cognition, perception, robots, and personhood. These questions tend to be filled with fuzzy, difficult-to-define terms, and what’s more, they’re commonly asked by people with a clear agenda (whether it be making a case for the existence of “souls” and supernatural superbeings, or asserting that nothing matters because no choice is actually real because individuals aren’t really real). And, like the proverbial monsters under the bed, they sometimes keep me up at night trying to hash through all the various contingencies and semantic gymnastics that even beginning to address them would require.

But at a certain point, thinking about the monsters (of the quasi-autonomous robot sort or otherwise) that might be under the bed (and how to avoid them) starts to become more exhausting and annoying than just switching on the light and either proving that there aren’t any monsters at all, or getting to know them a little better if there are. So while this writing will by no means address these questions with “airtight” answers, it should at least give a sense of what goes on in my head when I approach them.

Decisions and Autonomy

Most dictionaries seem to define “autonomy” in terms like independence, freedom, self-direction, and self-governance. I don’t have any argument with the dictionary in that regard; however, in this discussion, the “autonomy” I have most in mind is the kind that describes a discrete and independently-operating locus of consciousness, awareness, and thought.

In this sense, humans and cats and even mice can be said to be “autonomous”. Every human, cat, and mouse has some level of wholly private experience that no other entity can directly access. That is the usual sense in which I think about autonomy. There are, of course, other complicating layers and definitions on top of that one—some involving decision-making ability and legal sorts of independence from external coercion and control—but the basic unit of “autonomy” for me is the individual mind.

As far as what it means for an entity to be capable of making decisions for itself, that’s another question entirely. It’s also a question that depends on what you’d consider a “decision” to be, and whether you automatically require explanations of agency in addressing that concept. I would definitely allow that in some respects, entities we can assume to be wholly lacking in minds are, in fact, capable of “making decisions”.

Say you write a function in C that will output one string if a variable has a value of less than five and a different string if a variable has a value of five or more. Many of us would, at least colloquially, say that the function decides which string to output based on what its input value is. And if this function existed in the context of a program where any of multiple functions might be called in response to higher-level inputs, we might say that the program decides which functions it is going to call.
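To make that concrete, here’s a minimal sketch of the kind of function I have in mind; the function and variable names are just placeholders chosen for illustration, not anything from a real codebase:

```c
#include <stdio.h>

/* A toy illustration: the function "decides" which string to print
   based solely on the value it is handed. */
void describe_value(int x)
{
    if (x < 5) {
        printf("X is less than five\n");
    } else {
        printf("X is greater than or equal to five\n");
    }
}

int main(void)
{
    describe_value(3);   /* prints: X is less than five */
    describe_value(7);   /* prints: X is greater than or equal to five */
    return 0;
}
```

In colloquial terms we’d say describe_value “decides” what to print, even though every branch it could ever take was fixed the moment the code was written.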

But if we dig a little deeper, it’s clear that the colloquial conventions in which program behavior is commonly discussed do not reflect the (arguably) “ultimate” sources of the program’s behavior. The software engineer writing the program in fact decides in advance what the program’s outputs are going to be. She decides what inputs will prompt which functions to be called, and she also decides what criteria will determine the outputs of each function.

But say we dig even deeper than that! Say the software engineer is writing her program according to a set of requirements handed to her by her boss. Her boss may not even know the C language himself; all he knows is what he wants the program to do. So he provides “inputs” to the software engineer in the form of requirements, which are probably written in “natural language” as opposed to code or pseudocode. The engineer then takes the requirements and processes them into a C program.

But let’s not stop there! Say the boss got the requirements from the customer over the phone. Furthermore, say that the customer speaks only Chinese, whereas the boss speaks both Chinese and English. The boss, in this case, has to translate the customer’s requirements into English so that the software engineer (who doesn’t know a word of Chinese) can understand them. This entails the boss having to make a lot of decisions regarding what the customer likely meant by certain turns of phrase, and it also entails the boss having to think about what points to emphasize most strongly so that the engineer gets a sense of the customer’s priorities.

Now, lest anyone think I’m veering into the realm of cybernetic totalism, let’s pause a moment. While one could indeed condense the software engineer herself into a code-producing box into which you put requirements and out of which you get a software package, and while one could reduce the boss into a Chinese-English requirements translation machine, surely it is apparent that there are discontinuities in this instructional chain. That is, as you move further away from any individual program output (let’s say, a string printed on the screen that reads either “X is less than five” or “X is greater than or equal to five”), things become less and less easy to determine, and far more subject to uncontrollable variables.

When writing a C program, the kinds of inputs the computer gets are constrained to a relatively narrow set—the programmer generally uses a keyboard interface to type the code in, and very specific rules of syntax must be followed in order for the program to do anything at all when it is compiled (much less the precise thing the programmer wants it to do). What’s more, as far as any of us knows, present-day desktop PCs don’t have anything like the “internal life” that we humans do, or like chimps or cats or mice do, even. Neither the computer nor the individual program being written is “autonomous” in the basic sense of having a private vista of self-aware reflection embedded in a larger reality—notions of the computer and program “making decisions” are found only in a kind of linguistic folklore rather than in literal points of fact.

Certainly one might suggest that “random quantum events” or short circuits or power surges might result in the program behaving differently than the programmer specified, but essentially, the program’s outputs are severely and rigorously constrained by the programmer’s textual inputs.

The programmer, on the other hand, is autonomous in the sense defined at the beginning of this writing. She has a mind that nobody else can experience the way she experiences it. She can have thoughts that aren’t detectable by any other people or by any measuring instruments. And she can, in the most general folk sense of the word “choice” (the one that ignores the vast and convoluted and seemingly never-ending Free Will Debate), choose:

(a) whether or not to write the program at all

(b) whether or not to come to work in the morning

(c) whether to keep this job or seek another

(d) whether to make one function perform a particular task (or split the task across two functions, one of which calls the other; see the sketch after this list)

...or any number of other options that will affect the nature of the program, up to and including whether or not it comes into being at all.
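As a purely illustrative sketch of option (d), here are two equivalent ways the same trivial task might be structured; again, all of the names are hypothetical:

```c
#include <stdio.h>

/* Option 1: one function does the whole job. */
void report_single(int x)
{
    if (x < 5) {
        printf("X is less than five\n");
    } else {
        printf("X is greater than or equal to five\n");
    }
}

/* Option 2: the task is split across two functions,
   one of which calls the other. */
const char *describe(int x)
{
    return (x < 5) ? "X is less than five"
                   : "X is greater than or equal to five";
}

void report_split(int x)
{
    printf("%s\n", describe(x));
}

int main(void)
{
    report_single(3);   /* both calls print the same line... */
    report_split(3);    /* ...the observable behavior is identical. */
    return 0;
}
```

Either structure produces identical output; the choice between them belongs to the programmer, and nothing in the program itself ever “weighs” it.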

Similarly, the engineer’s boss can make a whole slew of decisions (from the vantage point of his autonomous perspective) that will also affect the fate of the program, albeit not as directly and obviously as the decisions made by the engineer will. He can, like the programmer, make decisions that result in the program not being written at all—e.g., he might decide not to give her the requirements because he wants her to focus on a different task for the rest of the afternoon.

He might decide to second-guess something the customer said on the basis of a perception that he knows slightly more about programming than the customer does (whether or not this is a wise move is beside the point for now). Etc. And by extension, the customer can choose to describe the requirements in one way rather than another based on how important he or she believes this particular project to be—e.g., s/he might be more thorough and concerned about making sure the engineer’s boss really understands the requirements, or s/he might just rattle off the requirements vaguely and carelessly due to feeling that the project is inadequately funded to begin with.

So far, I’ve described the program’s pseudo-decision-making process—e.g., the fact that the program branches at certain points, but not due to any kind of internally conscious self-reflection on the program’s or the PC’s part. I’ve also described the “volitional-feeling” choices made by the engineer, her boss, and the customer.

But there are other factors that can indirectly affect the program as well that come from the human agents in the instructional chain without necessarily “feeling like” choices.

For instance, if the engineer is tired or hungry, she might not consciously decide to make the program sloppier and less modular, but it might come out that way anyway because she’s not performing at her best. Similarly, if the engineer is well-rested and cheerfully sipping away at her Mountain Dew (provided generously by the company), the program might come out in a much slicker and more efficient form—again, without any conscious feeling on the engineer’s part that she’s choosing for the program to come out that way as a result of sentient and deliberate decision-making.

And if the boss is distracted by other projects when he’s taking the customer call, he might inadvertently write down the requirements sloppily. He might make typographical errors. He might hear the customer say a Chinese phrase he doesn’t recognize, at which point he’ll look it up in his Chinese-English dictionary, and in doing so discover that there’s another phrase he got wrong earlier in the conversation.

In any case, the program is going to be affected by things the boss does and various inputs he might receive and consider on a non-volitional-feeling level. The same goes for the customer—their instructions might seem to say one thing rather than another based on whether or not the customer has a scratchy throat, or based on background noise in the customer’s or boss’s office. And so on, and so forth.

Will, Free And Otherwise

Someone once asserted in response to something I wrote: “It seems to me that if everything is contingent upon determining material processes, then everything is determined and true decisions don’t exist.”

Here we encounter the can of worms that is the Free Will Debate. What is a “true decision” seems quite subjective, and I certainly cannot hope to put forth a definition of “true decision” that everyone will necessarily relate to or agree with. The best I can do is describe what things seem most like “true decisions” based on my own interpretation of what it means to make a decision as a conscious, autonomous entity.

In my example above involving the programmer, I’d be plenty satisfied to classify the “volitional-feeling” choices made by the engineer, the boss, and the customer as about as close to “true decisions” as humans can assert the existence of. Certainly, one can try to claim that everything about those people’s lives up to the moment they made their decisions about this program actually “determined” its final state, and that there was nothing truly “volitional” about those decisions, but one cannot deny that in everyday life, things we do on purpose feel qualitatively different from things we don’t do on purpose.

Usually, that is. People can, after all, be coerced (by other people, by physiological inputs registered subconsciously, etc.), and in some cases people might “feel like” they are acting volitionally even when they’re mainly responding to deep, low-level impulses like fear and reward. But at the same time, people are also capable of emerging from coercion and being able to look back and identify when they were actually being coerced (or compelled) and when they weren’t.

In light of all that, even if you’re a “hard determinist” in the “we’re all just objects going through the unconsciously-programmed motions that could have been extrapolated at the moment of the Big Bang if only someone had had a big enough computer, and nobody really makes any kind of meaningful choices at all because of this” sense (I’m not one of those, by the way), I don’t see why you’d want to ignore the many and various levels of “feelings of volition” and emergence from/descent into coercion that humans and presumably other entities seem to experience.

Clearly, there’s something interesting going on in the brain across all these experiences. And there are plenty of philosophical and ethical implications here: personally, I think that an “ideal” ethical state with regard to personal autonomy is the one in which coercion is minimized, and in which the individual has access to whatever information she might need to make maximally informed decisions.

Tools and Toys, Bodies and Minds

Tools are a particular class of objects not normally considered autonomous individually, but which are used by agency-possessing individuals in the fulfillment of particular goals. Tools can certainly be anthropomorphized (I know several people who have named their cars, and most people who use computers regularly can’t seem to help but project humanlike emotional maps onto their machines, particularly when said machines seem “cranky”).

Still, thinking of tools as a particular class of objects that can serve as extensions of self (or extensions of will, perhaps) is very useful, particularly when viewing the “person” as embedded in and part of the environment, as opposed to somehow distant from it.

My notion of personhood, or at least one formulation of it, can be stated thusly: I am a small piece of the universe observing itself.

If I had to sculpt a geometric model of reality (a daunting task if there ever was one!), one possible model might resemble a big rubber sheet pulled to tiny points in some areas, stretched thin in others, pushed to a smooth roundness in still others, etc.

Basically, while parts of the sheet would certainly have their own identities and local characteristics, and while each part would consequently be an entity in its own right, all parts and the interconnections between them would still comprise a larger entity.

Sticking with that model for now, let’s say a person is initially represented by a point on the sheet pulled sharply upward. As this person grows, develops, learns, and interacts with the other local surface irregularities, relationships will be established with those irregularities. Depending on the type and nature of each irregularity, the relationship between it and the person will effectively change the shape of the person in some way. Some irregularities might make the person-representing point poke out further from the plane of the sheet, whereas others might smooth it out and draw it closer. Yet all the while, the person maintains a sense of continuity, and certain aspects of his trajectory through time will always show the influence of his initial conditions.

And just as the sheet itself provides fertile ground for a tremendous diversity of individual forms, each person-point is simultaneously capable of evolving in any of a fantastic array of directions and of maintaining a distinct sense of continuous personhood.

Additionally, every person, generally speaking, sees “ownership” and control of his or her body as a precious and deeply-held right. Given the manner in which tools are employed as extensions of will, they are also in many respects extensions of the body—and most people would be hard-pressed to truly define where “they” end and where their tools begin. It’s rather strange to think about it in this way, but honestly, I would feel as if I’d undergone some sort of amputation if my computer’s hard drive were suddenly and irrevocably wiped!

But if tools are a special class of object, do they differ from “machines” in general? If they can be considered parts of beings, and subject to the decision-making processes of those beings, what does this in turn suggest about the nature of object-boundaries and agency?

Invoking the “sheet model” again, perhaps tools would represent those irregularities that can be effectively “absorbed” by the person-points to the point of becoming part of them. Similarly, tools can also be discarded and/or removed when the person no longer finds them useful, or when they begin to pose some problem. The “body” over time cannot be said to be a static clod of matter—rather, the body is a dynamic process that winds its way through spacetime, memory and sensation incrementally bridging the piecewise generations of cellular turnover. In some respects, cells and eyeglasses and hair and prosthetic limbs and tattoos and iPods and lungs are all of the same ilk: things that individually are not persons, but that can be aspects of persons that in turn define those persons—at least on a moment to moment basis.

Did I Say Overlords? I Meant Protectors...

My earliest concept of what a “robot” was came, unsurprisingly, from science fiction. I basically saw robots as “metal people”, and that’s often how they were presented on-screen. It didn’t even occur to me as a child to question whether or not “robots” had consciousness or agency (but then again, I also tended to see pretty much everything as “potentially alive”, so that isn’t too surprising). I also had some robot-themed toys growing up; one of them was an educational machine called Alphie II, and I had a number of robotic Star Wars action figures. My brother also had a really neat little gizmo labeled “Robot Factory” that consisted of one large robot with a built-in mechanism that sent several tiny robots on an endless roller-coaster ride along a track that snaked around its body. So basically, I can’t remember ever not being around what I’d term the “robot phenotype”.

But I didn’t learn about “real robots” until I was quite a bit older, and honestly, I was rather surprised at how “primitive” they seemed, as well as at how they were used. I think the first “real robot” I saw was on a TV show about automobile manufacturing (or something along those lines), and it just looked like a multi-jointed yellow mechanical arm-thing that moved according to the motives of whoever had programmed it to build cars.

So basically, every robot I’ve ever made the acquaintance of in real life has been either an industrial robot, a toy, or an experimental “kit” bot equipped with a few sensors and/or actuators. And even the more impressive robots I’ve heard of (such as the DARPA Grand Challenge cars) haven’t been autonomous in the sense that humans, many animals, and fictional robots (like R2D2) are—at best, they can do one thing quite well, but they aren’t capable of deciding they’d rather do something else, and it seems to me unlikely that they’ve experienced existential despair over this fact.

Clearly, robots are commonplace today—just not autonomous robots. And yet, there seems to be a kind of background assumption that not only would autonomous robots be desirable in some contexts, but that they would somehow represent a significantly more “advanced” kind of robot. But would humans actually want to build truly autonomous machines?

Humans tend strongly to use technology prosthetically—that is, as the collective pool of knowledge about How Stuff Works (and How To Make Stuff Do Other Stuff) grows over time and is communicated more effectively to more and more people, the trend has been toward applications that allow people to assert their ideas, desires, and will over a greater distance, or with greater strength, or with greater precision, than was feasible before the adoption of the application. The trend has not (at least from what I’ve observed) been toward trying to—forgive the terminology—“ensoul” machines, except perhaps in the context of university lab projects, none of which have exactly panned out in that direction so far.

The world is already pretty well populated by autonomous agents (animals), and half the time it seems like humans are more concerned with trying to decrease the autonomy of these agents than with increasing it. Hence, the idea of large groups of humans deciding to create autonomous robots and “release them into the wild” for the sake of allowing new life to flourish seems a mite farfetched.

Plus, there’s the ethical problem with creating an autonomous entity in a lab—as far as I’m concerned, once you’ve established that an entity is autonomous, you have no right to keep it confined (in a lab or otherwise), nor is it acceptable to subject it to non-consensual or coerced experimentation.

This fact alone makes it seem unlikely to me that truly autonomous robots are going to be a major human goal anytime in the foreseeable future—right now, robots outside the movies are pretty much thought of as being “tools” (extensions of human will), and people don’t want their tools to talk back or say “No!”.

Progress, Rights, and Personhood

Part of what is meant by some uses of the word “progress” is a kind of ongoing emancipatory process that involves seeking to recognize more and varied forms of personhood, to develop and provide tools that assist with individual flourishing, and to ensure that new technological developments (or proposed developments) benefit more than a few privileged folks.

So while I certainly enjoy talking and thinking about robots, and while I would be overjoyed to someday wander through bright jungles populated by colorful mechanical fauna who have been set free to flourish as beings in their own right (rather than as means to some “end”), I think it’s important to stay grounded in the present when considering what actions would likely lead to the greatest progress in the sense described above.

“Real” autonomous robots would, after all, be non-tools—and non-tools (people, other autonomous entities, etc.) cannot be used, absorbed, and/or discarded by others in the sense that tools can. One reason I find myself intrigued by “roboethics” discussions these days is actually tied into the very real civil rights struggles faced by already-existing persons. And again with the disclaimer that this is a science fiction scenario, I can’t help but wonder whether humans are at the point of being able to recognize very atypical persons (such as sentient robots would be) as non-tools. My guess is “not quite”, and I see a potential (if not exactly imminent) danger of people creating entities that are autonomous and sentient, but that are not acknowledged as such. It’s not as if there isn’t a precedent for this.

Some of the worst abuses in history have been perpetrated as a result of people trying to use, absorb, and ignore or deny the personhood and autonomy of other people. Ethnic minorities, women, children, disabled persons, and individuals of any configuration in positions of disadvantage for whatever reason have all had to deal with being treated like tools (in the sense of being considered non-autonomous, and only worth what they can “produce”, whether it be slave labor, sons to carry on the family lineage, or in the case of disabled persons, “proof” of full personhood in the first place).

And this isn’t something we’re exactly past as a species yet. Regardless of the general sense I still have that all things in reality have a kind of “character” to them, I’m well aware that some things are tools, and that people are not tools, though tools can be extensions of people. Robots, perhaps, are interesting because they stand in a strange area where they have the potential to be considered either non-autonomous things or people (or both, context permitting!), depending on what direction the research goes in.

And given this, I think that anyone who finds himself or herself obsessing over “robot rights” would do very well to learn a bit more about general civil rights. Not only is a much greater consciousness of civil rights gravely needed in the present, but it is going to be vital to broaden the common concept of what a full person is if anyone really wants to see the kind of wide-ranging prosthetically-enabled vibrant diversity that may at least become physically feasible within the lifetimes of many alive today.
