Amy Ingram is an intelligent agent who schedules meetings over email using Artificial Intelligence. This setting alone allows us to disregard most of the typical AI doomsday scenarios. That said, like any other AI system, we still face the possibility of a system that runs amok.
I was talking to one of our senior AI trainers the other day about how best to train Amy on “Reminders” – the process of gently nudging a guest in a meeting to respond to an email request about when and where to meet. Her current logic is extremely sophisticated, but it is also initiated by the human executive Amy works for. We wanted to make sure we did not replicate the tale of the Sorcerer’s Apprentice. As Tom Dietterich put it in a recent post from the AAAI:
Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians?
Very much in the same vein, if I tell Amy to “set up a meeting with Matt as soon as possible,” would she push a reminder every 10 minutes until Matt responds? No – and that would obviously not be the outcome you had in mind when asking her to do the job. This is not a surprise challenge: any intelligent system that deals with humans has had to guard against this failure mode.
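To make the idea concrete, here is a minimal sketch of what a bounded reminder policy might look like: exponential backoff with a hard cap, so that “as soon as possible” never degenerates into spam. All names and parameters below are illustrative assumptions, not Amy’s actual implementation.

```python
from datetime import timedelta

# Hypothetical reminder policy: wait longer before each successive nudge,
# and stop entirely after a fixed number of attempts. The specific values
# (24 hours, doubling, 3 reminders) are assumptions for illustration.
def reminder_schedule(first_delay_hours=24, backoff_factor=2, max_reminders=3):
    """Return the waiting period before each successive reminder."""
    delays = []
    delay = timedelta(hours=first_delay_hours)
    for _ in range(max_reminders):
        delays.append(delay)
        delay = delay * backoff_factor  # back off: each wait is longer
    return delays

# With the defaults, Amy would nudge after 1 day, then 2, then 4 — and stop.
print(reminder_schedule())
```

The key design choice is the hard cap: no matter how the request is phrased, the system bounds its own persistence rather than optimizing the literal instruction.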
It is of the utmost importance that Amy tries to decipher what the executive intended, instead of just blindly carrying out any instruction she is given. As Tom puts it:
An AI system should not only act on a set of rules that it is instructed to obey — it must also analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions” — and always be open for feedback.