In many realms, the higher the price tag, the more advanced the feature set. The rule of thumb: more $ = better stuff.

This near-direct relationship between the money you spend and the quality of the features is, in fact, how companies across sectors justify higher prices, and it’s why you’re willing to pay up.

If I want a more powerful car (400+ hp), I can spend more money to get one.
If I want a bigger office, I can spend more money to get one.
If I want a faster internet connection, I can spend more money to get one.
If I want a more competent (senior) employee, I can spend more money to get one.

This feeling of “better” coming along with “higher price” is something we all believe or at least buy into.

But for some verticals and some solutions, this tight correlation between price and features has changed over time or never existed to begin with.

If I want a more environmentally friendly car, typically, I’d spend more money to get one. This is changing, and it is likely (probably inevitable) that those two curves will diverge. In the future, the most environmentally friendly car you’ll be able to purchase will be the most cost effective one, and probably also the least expensive up front.

When we look at the world of intelligent agents, we can see a similar divergence. Take privacy. Most people consider it a key feature and expect that the most expensive product will do the best job of protecting their data.

But this is not necessarily the case. If you take Amy (our AI personal assistant who schedules meetings for you) as the least expensive option and put her on a continuum with the most expensive option (a human personal assistant), then you see a surprising reversal.

Using Amy costs about $39/month. A human personal assistant costs anywhere from $800 per month, for someone who works virtually, to $7,500 per month, for a full-time employee. Blended solutions start at $200 per month.

But I would argue that the human is actually the far greater privacy risk. So in this case, spending more is getting you less of the desired feature (privacy).

Here’s how the continuum looks for scheduling assistants:

[Figure: cost–privacy continuum]

Take three basic mistakes that any human, even the most skilled one, makes every so often.

One mistake we’ve all made, to our immediate chagrin, is to REPLY-ALL instead of REPLY to sender only, thereby exposing potentially sensitive information to anyone on the thread.

Another common human error is the AUTO-COMPLETE. As you’re typing a name, your email client AUTO-COMPLETES the name with the wrong recipient, and before you realize it, you send an email to the wrong person, exposing the entire conversation.

There are also mistakes made in simple human-to-human interactions. A (less seasoned) human assistant might accidentally expose information about you when he needs to let a guest know you’re unavailable (e.g., “No, Dennis can’t meet tomorrow as he is spending the day at Google.”). A shared personal assistant working for six executives multiplies the exposure and the privacy/security risk.

Humans make simple mistakes that expose information to the wrong people; social engineering is notoriously effective; and humans gossip.

You can design against all of the above when building a fully autonomous AI agent. For example, Amy (and her brother Andrew) will never share any information about our customers except the available slots they are offering. They are immune to social engineering, and they don’t gossip.

I find this super interesting (and reassuring), as we’re likely to move toward a future with more, not fewer, intelligent agents.

This post originally appeared on LinkedIn Pulse, here.

Hire Amy & Andrew and start your free trial HERE.