Most people still subscribe to the notion that machines only follow cold, hard logic, while humans can go beyond that: be creative and have emotions. The reality, of course, is that the laws of physics apply to artificial intelligence just as they do to natural intelligence.

In the American drama series “Lie to Me”, the eccentric genius Dr. Cal Lightman solves crime cases by analyzing suspects’ voices, body language and facial microexpressions. While the character is fictional, he is based on the American psychologist Dr. Paul Ekman, who travelled the world analyzing human expressions and classified over ten thousand of them. Unsurprisingly, he is one of the best lie detectors in the world. Something you might want to remember, because in the near future, we will all be surrounded by invisible “Dr. Ekmans”.

Toward a transparent society

As science fiction author David Brin once observed, cameras get smaller, cheaper, more numerous and more mobile every year. Today, you can install a surveillance camera in almost any public space, as far as the law allows. Chances are that you have already been recorded on hours of surveillance footage and caught in the background of countless pictures taken by random strangers. At the same time, facial recognition, emotion reading and vocal analysis technologies already exist, albeit at various stages of development. Most people remain unaware of this, simply because these technologies are not yet widespread and are rarely advertised aggressively to the public. For example, when calling customer service, you have probably heard something like “this call may be recorded or monitored for quality and training purposes”. What you may not realize is that this can mean your voice is being analyzed in real time to assist the operator. And if two start-up founders in their twenties can link faces to social media accounts in real time, I’m pretty confident that others can do this as well. As with any nascent technology, however, legal and ethical concerns may limit such applications for some time yet.

It seems inevitable that digital and physical identity will merge into one at some point, and for many purposes this will be a good thing. For example, Australia has recently started to look at replacing passport control with facial recognition technology. If biometric markers such as your face can provide authentication, paying could become completely seamless. You would never have to worry about forgetting your keys or your wallet again, since you would conveniently always have yourself with you.

Eventually, affective computing, as it is known, could be integrated into smart assistants that provide real-time interaction and analysis, increasing your awareness of what your conversational partner thinks and feels, or whether they appear disingenuous. This could enhance many situations and bridge gaps between different modes of communication and expression. However, there might also be serious downsides. We can usually remain superficially polite toward people we feel antipathy for. In the future, however, microexpressions could betray our inner thoughts and feelings to others and pose a barrier to further collaboration, especially if we don’t learn to be more tolerant of temporary negative feelings toward us. Furthermore, if digital assistants told you how best to communicate with another person in real time, and vice versa, the whole point of human interaction would be called into question.

Hyperindividualization

The recent political upset in the United States produced a lot of discussion about the role of social media and the filter bubble. However, if we look at the developments in artificial creativity and affective computing, a personalized stream of news articles is just the beginning. In the future, almost everything will be tailored to you, from personalized medicine and personalized foods to “Minority Report”-style adverts. Even if you bought the same book as someone else, you might literally read different things. Artificial intelligence could learn your preferences and even incorporate your real-time emotional reactions to personalize not just books, but also music, education, games or movies. Whereas today we may live in bubbles of people who broadly agree with our political ideas, in the future we may increasingly live in a bubble of our own.

Rather than you continuously having to adjust to the needs of other people, human-aware artificial intelligence could be a personalized discussion partner that adjusts its tone, voice, appearance and content to your needs. That would be great in customer service or financial advice, but automated empathy will hardly stop there. Will we still be willing to deal with the effort and frustration of putting up with real people in our lives? Or will personalized AI also become our friend and lover, as depicted in the movie “Her”?

Does paradise contain people at all?

At some point, people might ask why they should share the same reality with others at all. It’s still a remote prospect, but far from crazy, to assume that we will eventually all be able to live in virtual realities, whether through external visual aids, direct brain-computer interfaces or even, as some imagine, by uploading our minds onto a new substrate. In the real world, wealth is limited and we can’t all be famous rock stars with a Ferrari and a villa on a private beach. In a virtual environment, all of this becomes possible.

So, is escapism into solipsistic simulations our future? It’s beyond the scope of this article to discuss such fundamental questions, but sooner or later we as a society will have to think about the social implications of these new technologies.