Talking about computers as if they were alive (anthropomorphizing them) carries a real danger: the public may come to believe that computers are aware of what they are doing, and once that belief takes hold, people will agree to let computers be used in life-critical functions.
Describing machines as “aware,” “sentient,” or “responsive” is thoroughly misleading when applied to computer programs, but using “understand” for what a computer does when it statistically pattern-matches facial expressions is perhaps the worst offense. It rests on a fundamental error that conflates processed data with understanding. People with masses of knowledge don’t necessarily understand it, while people with understanding don’t necessarily need specific knowledge to accomplish something. Machines have neither.
For example, those who work with machine-learning systems are mostly frank about their inability to explain how these systems arrive at their conclusions, even though they have masses of data about what the systems are doing. And I would much rather hire a seasoned professional to fix my car, even one unfamiliar with my particular model: their understanding, built up over years and decades of practice, enables them to do something machine-learning systems cannot do, which is improvise.
“Creativity” is another word being abused when talking about software running on hardware, both of which are fixed the moment compiled code is loaded onto a physical machine. There can be no creativity so long as that word describes what humans do when, understanding a need, they create a solution to it.
It is clear that knowledge is no longer power; tech is. Like the ancient human wisdom it supplanted, knowledge, which has meaning for a sentient human but not for a program, has gone the way of the dodo, replaced by data: masses, oceans, and galaxies of data locked away in pointers, probabilities, and reciprocal links that can be used to predict what will happen, even though the tech cannot explain what it is predicting about, being fully unaware of anything at all.
Rather than watering words down into meaninglessness, we should develop and use a more honest vocabulary for describing machine-learning and artificial-intelligence systems. Perhaps the day will come when these new words are no longer needed, but until then we do need them, so that we can make intelligent decisions about the use of these technologies, especially in life-critical systems.