Shining light on the invisible

In October I joined the team as the first UX/UI designer for our ‘visible’ touchpoints, working alongside another designer who focuses on the ‘invisible’ interactions of our product. I became part of a talented team of data scientists and engineers in pursuit of building an intelligent assistant that magically schedules meetings for you.

In my first weeks of getting to know my new team, our CEO Dennis Mortensen invited me to partake in one of his famous walking meetings to get acquainted. On this walk, Dennis and I discussed the design philosophy which would set the tone for the work I’d be doing at x.ai. Luckily, I was fueled by two cups of coffee, as the walk proceeded at a rapid clip and the information discussed was dense. It takes energy to keep up with Dennis.

As we started our loop around the Financial District in Manhattan, he outlined the new paradigm he believes we’re entering: the age of “invisible software.” Although the industry hasn’t agreed upon any vocabulary for products and services that lack a recognizable interface, Dennis explained that “invisible” was the best working title he had found.

With all of the big players, and an ever-growing number of startups like ours, already invested in the AI space, users are getting a feel for what it’s like to interact with products that have none of the inputs, buttons, flashing alerts, or other interface elements we’ve become accustomed to. Think Amazon Echo or Google Home. Despite this growth, Dennis expressed that he felt the industry wasn’t quite ready to take off the training wheels, as evidenced by the fact that every AI assistant still has a ‘visible’ screen-based companion interface.

As a UX/UI designer I began to better understand my mandate, which is to design experiences that balance ‘visible’ and ‘invisible’ elements while the technology catches up. And, you could even say, to help prepare our users for a new paradigm of ‘invisible’ experiences in the future.

Between dodging garbage trucks and hordes of tourists, Dennis elaborated on one of x.ai’s primary goals—for customers and guests to treat our AI assistants Amy and Andrew as if they were humans. Visible interfaces rely on the user to tap at screens to perform an action. If you are mimicking human interaction, in some ways, your product has an advantage; humans generally know how to interact with one another through unwritten social conventions. x.ai has set out to emulate that paradigm within the current constraints of the technology.

Think about it, though: without a visual interface, you have a whole new set of problems. How do we set expectations for the current state of AI interactions? How much do we need to train our users on how to work with an AI? At what point does it start to feel like just as much work as using a visible interface? I began to see what we were up against as we started the final leg of our loop, which by now had already exceeded a mile. The goal for our user experience would be to find an acceptable distribution of user and agent responsibility.

We rounded the corner to our office, nearing the end of our walk. I quickly recalled all of our discussion points in an effort to distill the conversation. I explained it like this: we’re striking a balance between asking our customers for permission and asking for forgiveness. Permission for them to provide the information we need to successfully schedule a meeting, and forgiveness when we occasionally guess wrong. Dennis agreed that it sounded something like that.



Want to hire Amy + Andrew? Start your free trial HERE