What We Can Learn from the Artificial Racist

by Steven Schwarz

Traditionally, scientists are taught to vary only one variable at a time wherever possible, using that method to tease apart complicated tangles, to find out when variables aren’t independent, and to notice when expected correlations fail.

The trouble is that with problems of sufficient complexity, that approach may be impossible, or may produce the wrong result.

There’s an old saying in computer science: “Variables won’t, constants aren’t.” Taken literally, it isn’t true; but it speaks to a deeper observed phenomenon: once a system becomes complex enough, things stop behaving in their expected manner.

Two stories from recent simulations that cast light upon this:

In 2016, Microsoft attempted to teach a chatbot, Tay, how to interact with users on Twitter; she was supposed to be “Microsoft’s A.I. fam from the Internet that’s got zero chill.” The goal was both natural-language generation and processing, using then-cutting-edge machine-learning technology to enable her, eventually, to talk about a very wide range of subjects through an ongoing learning process.

Within 16 hours, Tay was producing racist, sexist, and conspiracy-minded screeds, and Microsoft had to shut her down.

What happened?

Well, two things. First, the Internet troll haven 8chan had heard about her, and encouraged its readers to flood her with racist inputs; she learned from them. But beyond that, there remains the question of what she would have become had she not been targeted: her learning process made her a massively sensitive mirror of whatever she was shown.
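
To see how little machinery that takes, consider a minimal sketch (entirely hypothetical; nothing like Tay’s actual architecture, which Microsoft never published in detail) of an online learner that trusts every input equally:

```python
import random
from collections import Counter

class MirrorBot:
    """Toy chatbot that replies by sampling phrases weighted by how often
    it has seen them. There is no filtering or moderation, so a coordinated
    flood of hostile input directly reweights everything it says."""

    def __init__(self):
        self.phrase_counts = Counter()

    def learn(self, message):
        # Every input is trusted equally; the bot mirrors its diet.
        self.phrase_counts[message] += 1

    def reply(self):
        phrases, weights = zip(*self.phrase_counts.items())
        return random.choices(phrases, weights=weights)[0]

bot = MirrorBot()
for msg in ["hello there"] * 10:         # ordinary traffic
    bot.learn(msg)
for msg in ["hostile slogan"] * 1000:    # a coordinated flood
    bot.learn(msg)

# Roughly 99% of replies now echo the flood, though no code changed.
print(Counter(bot.reply() for _ in range(1000)))
```

Note that the bot’s code never changes; only its input distribution does. That is exactly what makes the question of what Tay would have become without the flood so hard to answer.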

The other story is this: When MASSIVE, the software designed to create huge battle scenes for the *Lord of the Rings* films, was first fired up, many soldiers in the first run appeared to take one look at the battle and head for the hills. Had the designers managed to generate emergent cowardice?

That was the first report, since they knew they hadn’t put any such behavior in deliberately. What they found, upon further observation, was that individual agents had been programmed to find open space; if that open space happened to be behind them, an agent would head there, and other agents had “follow your friends” as a priority, and so they did, in apparent retreat.
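
A minimal sketch of that interaction, with invented rules and numbers (MASSIVE’s real agents are vastly richer), shows how little it takes to produce an apparent rout:

```python
def step(agents):
    """One synchronous update of a toy battlefield on a line, where
    position 0 is the enemy and larger x is farther to the rear."""
    new = {}
    for name, (x, rule) in agents.items():
        if rule == "seeker":
            # The ground ahead is packed with combatants; the only open
            # space is behind, so the open-space rule points to the rear.
            new[name] = (x + 1.0, rule)
        else:
            # "Follow your friends": drift toward the others' average spot.
            others = [p for n, (p, _) in agents.items() if n != name]
            mean = sum(others) / len(others)
            new[name] = (x + 0.5 * (mean - x), rule)
    return new

army = {"a": (0.0, "seeker"), "b": (0.0, "follower"), "c": (0.0, "follower")}
for t in range(6):
    army = step(army)
    print(t, {name: round(p, 2) for name, (p, _) in army.items()})
# The seeker drifts rearward and the followers trail after it; what a
# first viewing reads as mass cowardice falls out of two innocuous rules.
```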

What can we learn from this to apply to the question of computational complexity, and through it, to the subjects we want to use that complexity to study?

Classically speaking, a model was a good model insofar as it predicted the behavior of the system it was modeling, and a bad model if it failed to. To anyone who’s wandered into the ruder corners of the Internet, therefore, Tay was a *good* model of the radicalization process, albeit one sped up by her lack of preconceptions and her ability to hold thousands of conversations at once. Similarly, to people looking for signs that charging off to war is stupid, the first impression of the MASSIVE effect looked like the sign of a good model.

Does MASSIVE teach us about the psychology of soldiers in warfare? No; it teaches us how our own pattern-recognition systems create interpretations of events, and, if we are content to leave them unexamined, false interpretations of events. What we saw in MASSIVE was not emergent cowardice, but emergent strategy toward the initial goal of defeating the enemy.

In Tay’s case, however, we got an unexpected result, and it turned out to be a useful pointer towards actual analysis of radicalization and the nature of unsecured inputs (and hackability, a question the original Microsoft developers had not considered) in our current digital world. 

Neither of these experiments (insofar as MASSIVE was an experiment, rather than an attempt at visually accurate and impressive filmmaking) would have been possible using conventional scientific analysis; there were simply too many variables involved. The MASSIVE issue was solvable only because, unlike in the real world, the developers had complete access to the source code.

The issue in computational simulation of complex events, therefore, is not unlike the problem facing a great deal of modern psychology: figuring out whether your answers are real answers, or artifacts of your experiment. Throughout much of the field, researchers are redoing older experiments traditionally done on college students (largely white and disproportionately male, in addition to the obvious educational and class biases) and getting different results, casting into doubt some of the bedrock assumptions of psychology. This is the problem of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) subjects being treated as the default, as discussed in the work of Joseph Henrich and his colleagues.

Researchers working in this field, and the computer programmers working with them, therefore need to watch out both for encoded assumptions (there are soldiers who run away from battle, for example, and to eliminate them, even from a movie special effect, is to distort reality) and for the possibility of revelatory errors. The speed and suddenness of Tay’s radicalization was unexpected, and under a traditional model of scientific research it would have been tempting to toss the run out and repeat it with better input controls. Had one done that, we would have lost useful information about how rapidly machine learning can be led off the rails, and in which directions. “Garbage in, garbage out,” while true, is not complete; one can get garbage out for a wide variety of reasons, and sometimes, especially when you don’t entirely understand what you’re modeling, “garbage” may not in fact be what you’re getting.
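
One concrete habit that follows from this (a hedged sketch of a hypothetical harness, not any named framework’s API) is to quarantine and record anomalous runs instead of silently discarding and rerunning them:

```python
import datetime
import json

def run_trial(simulate, inputs, expected_range, log_path="anomalies.jsonl"):
    """Run one simulation trial. Results outside expected_range are not
    thrown away; the full context is appended to a log for later analysis,
    because an anomaly may be the most informative thing a run produces."""
    result = simulate(inputs)
    low, high = expected_range
    if not (low <= result <= high):
        record = {
            "when": datetime.datetime.now().isoformat(),
            "inputs": inputs,
            "result": result,
            "expected_range": expected_range,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
    return result  # an out-of-range result is data, not debris

# Example: a trial whose result falls outside (0, 5) gets logged, not lost.
run_trial(lambda xs: sum(xs), [10, 20, 30], (0, 5))
```

Whether a flagged run reflects a bug, a bad input, or a genuine discovery then becomes a question the record lets you ask afterward, rather than one the pipeline silently answers for you.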

When the subject being studied is complex and interwoven, the experiments themselves will be complex and interwoven, in the hope of making them viable models of the phenomenon. As a result, experimental design and monitoring become even more important. In an era when Bill Vaughan’s aphorism “To err is human; to really foul things up requires a computer” is ever-present, figuring out why one gets “garbage out” is not just a matter of debugging, but of experiment analysis.

In short, as our tools grow more intricate and complicated themselves, and as we study more and more complicated subjects in situ, we must be correspondingly more careful with them.
