In the last couple of weeks I’ve grown increasingly critical of the humanization of AI. Last week I briefly mentioned an observation that from the inception of Artificial Intelligence (and particularly during its early period), the idea of what AI is has been closely tied to how we conceive of human intelligence. In other words, to be intelligent was roughly depicted as having a mind, which is also how humans rise above all other animals to become the “special species.”
The need to project human-like agency onto artificial intelligence is perhaps a natural and intuitive impulse, much like our tendency to identify human faces in anthropomorphic objects.
This is perhaps part of the pattern recognition performed by our very own neural networks – not just in image recognition but also in language processing and other forms of perception that we deem distinctively human.
Amazon Echo’s voice recognition and feedback create an illusion of humanization, but in fact it’s a complex system of algorithms that uses statistical models to achieve the tasks assigned to it. Although it seems obvious that anything close to “consciousness” is unlikely to exist among current AIs, the danger is that the more an AI functions like an autonomous agent, the more it leads humans to relate to it and project emotions onto it.
The original purpose of an interface was to enable easy access for the general public, but as the network of technology has grown more complex and intertwined, the interface has increasingly become a massive facade of simplicity (simplifying people’s lives, in the case of digital assistants) that disguises problems of a similar scale to the network itself. As disclosed in Anatomy of an AI System by Kate Crawford and Vladan Joler, there are massive costs of labor, resources, production, and even transportation in the physical infrastructure required to create this “black box” effect that intentionally mystifies what the system actually is. And the motives of these major Internet companies are generally profit-driven.
At the end of the day, the merchants profit from the massive information imbalance between the general public and academia and technicians. The sci-fi vision of a future where we’re surrounded by highly automated tools is yet another commodity that profits from ignorance. Ironically, in my personal experience the tech-savvy people I know tend to be more skeptical and critical of smart assistants of any sort, shrewdly protecting their personal data and security.
Perhaps the most obvious solution would be to raise awareness among the general public via pop-culture influences: films, podcasts, YouTube channels, etc. But our information intake has already been so shaped by the internet and the algorithms that dictate what to feed us that the intention to spread information and reach a broader audience seems to lead to yet another black box, where insulated echo chambers coexist.