I was reading about a new brain-inspired chip on the BBC, which suggests potentially cost-effective new ways of executing heavy data and compute tasks. The article describes the chip in terms of “neurons” and “synapses”, which is interesting because it lets us use the human brain as a reference point for what we might want to achieve.
Let’s play with the idea of x.ai using an 8-byte (64-bit) floating-point number for each of those connections as we build an AI network that holds every one of them in memory. The human brain has roughly 86 billion neurons, each with on the order of 7,000 synaptic connections. Call it the fantasy of making Amy self-aware.
RAM: 86,000,000,000 neurons × 7,000 synapses × 8 bytes ≈ 4.8 petabytes
A memory-optimized machine on Amazon AWS, with 244 gigabytes of memory and a little compute power to manage it, will cost you $2.80 per hour.
Money: (4.8 petabytes / 244 gigabytes) instances × 24 h × 30 d × $2.80/h
= $39,659,016 per month (or ~$476 million per year)
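For the curious, the whole back-of-envelope estimate fits in a few lines of Python. The neuron and synapse counts, instance size, and hourly price are the assumed values from above; note the monthly total comes out slightly higher here because the script uses the exact byte count rather than the rounded 4.8 PB figure.

```python
# Back-of-envelope check of the figures above (all inputs are assumptions from the post).
NEURONS = 86_000_000_000        # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 7_000     # ~7,000 connections per neuron
BYTES_PER_SYNAPSE = 8           # one 8-byte (64-bit) float per connection

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"RAM needed: {total_bytes / 1e15:.1f} PB")   # ≈ 4.8 PB

INSTANCE_RAM_BYTES = 244e9      # 244 GB memory-optimized AWS instance
PRICE_PER_HOUR = 2.80           # $/hour (price at time of writing)

instances = total_bytes / INSTANCE_RAM_BYTES
monthly = instances * 24 * 30 * PRICE_PER_HOUR
print(f"Instances needed: {instances:,.0f}")
print(f"Monthly cost: ${monthly:,.0f}")
print(f"Yearly cost:  ${monthly * 12 / 1e6:,.0f} million")
```

Running it shows just how fast the cost compounds: tens of thousands of the largest memory-optimized instances, around $40M a month, roughly half a billion dollars a year.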
Going back to the article: Dr Modha suggested that the chip is “endlessly scalable” and that this isn’t a 10–15% improvement. “You’re talking about orders and orders of magnitude,” he said. One would hope so, because we only raised ~$2M in our seed round 😉