Are Smart Assistants Really Going to Be Human-Like in the Future?

When we talk about AI these days, we think about smartphones and how their manufacturers are all making plans to make their devices smarter: LG's AI initiative ThinQ, Amazon's Alexa, or Apple's Siri, to name a few. For now, the devices hosting these so-called virtual assistants are glorified speakers that double as cybersecurity hazards (something that has given the tinfoil-hat crowd plenty of extra tinfoil to fold). They can take orders and follow them well, or at least they should be able to (ThinQ's demonstration at CES was something of a failure).

There is no doubt that having a smart computer assist us with our tasks would be greatly beneficial. But the question is: are they really going to be as smart as humans in the future? This is what AI scientists promised us, after all.

How AIs learn

To answer that, we should first identify how AIs learn:

Bottom Up (Deep Learning)

The first method is deep learning. Here, the AI is trained to recognize patterns by learning abstract representations of its data through layers of nonlinear processing units. Suppose someone inputs an "A." The AI learns the features of the input and forms an abstract representation of the "A," built from its pixels, for example. By gathering many such representations of "A," the AI accumulates knowledge, continuously collecting and organizing the data. If a user then inputs another image of an "A" and asks the AI to identify it, the AI compares the new input against its collected data to see whether they match. It also stores the abstract representation of the second input for future reference.
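A minimal sketch of this bottom-up idea in Python. The bitmaps, detector weights, and labels below are invented for illustration, and the "layer" is a single set of hand-picked feature detectors; in real deep learning the weights are learned from data, not written by hand:

```python
import math

def relu(x):
    """The nonlinearity applied after each weighted sum."""
    return max(0.0, x)

def extract_features(pixels, detectors):
    """One layer of nonlinear processing: each detector is a weight
    vector whose weighted sum over the pixels passes through ReLU,
    turning raw pixels into an abstract representation."""
    return [relu(sum(w * p for w, p in zip(row, pixels)))
            for row in detectors]

def classify(pixels, memory, detectors):
    """Compare a new input's features against every stored
    representation and return the label of the closest one."""
    feats = extract_features(pixels, detectors)
    return min(memory, key=lambda item: math.dist(item[0], feats))[1]

# Hypothetical 3x3 bitmaps, flattened row by row.
A_EXAMPLE = [0, 1, 0,
             1, 1, 1,
             1, 0, 1]
H_EXAMPLE = [1, 0, 1,
             1, 1, 1,
             1, 0, 1]

# Hand-picked weights standing in for learned parameters:
# crossbar, apex, split legs, left edge.
DETECTORS = [
    [0, 0, 0,   1, 1, 1,   0, 0, 0],
    [-1, 1, -1, 0, 0, 0,   0, 0, 0],
    [0, 0, 0,   0, 0, 0,   1, -1, 1],
    [1, 0, 0,   1, 0, 0,   1, 0, 0],
]

# "Training": store abstract representations, not raw pixels.
memory = [(extract_features(A_EXAMPLE, DETECTORS), "A"),
          (extract_features(H_EXAMPLE, DETECTORS), "H")]

# A slightly different "A" (one extra pixel) still matches.
new_input = [0, 1, 0,
             1, 1, 1,
             1, 1, 1]
print(classify(new_input, memory, DETECTORS))  # → A
```

The new, imperfect "A" lands closer to the stored "A" representation than to the "H" one, which is the comparison step the paragraph describes.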

Apple’s A11 Bionic chip, which Apple designed for AI (Image Source: Apple)

Top Down (Bayesian Method)

The second method gives the AI the features and attributes of an object to recognize it by. The AI holds only that definition, the abstract idea of what the object is, and compares the input against it. Let's input an "A" again. This time, the computer analyzes the input and asks, "is this figure two strokes meeting at an acute angle, connected by a crossbar?" From that definition, it can recognize what an "A" looks like. The Bayesian method also allows the AI to form an idea of how an "A" might look. It doesn't matter how strange the "A" looks: as long as it matches the description stored in its mind, it's an "A."
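A minimal sketch of this top-down idea. The attribute names and example letters are invented for illustration, and a full Bayesian model would weight each attribute probabilistically rather than demanding exact matches, but the definition-first logic is the same:

```python
# The "A" is defined once by its structural description; any
# candidate matching that description is accepted as an "A",
# no matter how it is drawn.
A_DEFINITION = {
    "strokes_meet_at_acute_angle": True,
    "has_crossbar": True,
    "is_closed_loop": False,
}

def matches_definition(candidate, definition):
    """A candidate counts as a match if it has every attribute
    the stored definition requires."""
    return all(candidate.get(attr) == value
               for attr, value in definition.items())

# Two very different renderings with the same structure.
serif_a  = {"strokes_meet_at_acute_angle": True, "has_crossbar": True,
            "is_closed_loop": False, "style": "serif"}
wobbly_a = {"strokes_meet_at_acute_angle": True, "has_crossbar": True,
            "is_closed_loop": False, "style": "hand-drawn"}
letter_o = {"strokes_meet_at_acute_angle": False, "has_crossbar": False,
            "is_closed_loop": True}

print(matches_definition(serif_a, A_DEFINITION))   # True
print(matches_definition(wobbly_a, A_DEFINITION))  # True
print(matches_definition(letter_o, A_DEFINITION))  # False
```

Both the tidy serif "A" and the wobbly hand-drawn one pass, because recognition here depends on the definition, not on stored examples.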

What AIs Need

Mind you, this is just an example of how they work; the same logic applies to anything: words, sentences, and images. Based on the processes programmed into it, the AI sorts all the pieces of information it can gather from the data it is fed. As the AI gathers more data to work with, it grows smarter. This is "learning," and it is similar to how humans learn.

The Qualcomm Snapdragon 845, used in a lot of smart devices (Image Source: PC Mag)

So if this is how computers learn, nearly the way we do, why aren't they as smart as us yet? Three things. First, AIs today are still limited by the physical components that let them think. The human brain can store an estimated 2.5 petabytes (2.5 million GB) of data. A smartphone might have 4 GB of RAM and 250+ GB of storage, and even a high-end home computer might run 64 GB of RAM with 2 TB (2,000 GB) of storage in a RAID. And because we sometimes dump data (forget), our brains are never used at full capacity. Second, there is the sense of self and a conscience. How will AIs ever learn the concept of who they are, the way the ancient Greeks began to thousands of years ago? Finally, there are emotions, said to be vital to human intelligence. Without them, computers cannot feel, and if they cannot feel, they cannot understand.
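Putting the storage figures above side by side makes the gap concrete (the brain number is, as noted, only an estimate):

```python
# Rough scale comparison using the figures quoted in the text.
BRAIN_ESTIMATE_GB = 2.5e6   # ~2.5 petabytes, an estimate
PHONE_STORAGE_GB  = 250     # a well-equipped smartphone
DESKTOP_RAID_GB   = 2000    # 2 TB in a home-computer RAID

print(BRAIN_ESTIMATE_GB / PHONE_STORAGE_GB)  # 10000.0
print(BRAIN_ESTIMATE_GB / DESKTOP_RAID_GB)   # 1250.0
```

By these (very rough) numbers, the brain holds about ten thousand times what the phone does, and over a thousand times what the desktop does.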


So, to answer the original question, "are smart assistants going to be as smart as humans?" Yes, BUT only if we manage to fulfill all of those requirements. If we don't, then clearly the answer is "no."