StillJustJames
Apr 12, 2022


Technology is Science made manifest. But note how devoid of subtlety that sentence is. Technology is not monolithically bad, because it is a class of human activity, not an entity; the same is true of Science. So Science doesn't 'say' anything -- scientists do (and much of what they say is just 'sciencecum': not substantive, let alone reproducible).

The problem is that, as a class, scientists disavow moral responsibility for how the knowledge they uncover is used. The same goes for technologists -- they acknowledge no moral responsibility either. It is here, in my opinion, that efforts must be made to reintroduce the necessity of moral responsibility into these classes of human activity, as has been done in many others. The foundation on which that responsibility must rest is a recognition of human fallibility, greed, and hubris.

The AI/ML failures you present are clearly product failures and should fall under product liability law at a minimum. Better yet, the faulty design and implementation should fall under involuntary manslaughter law, or worse -- for the people directly responsible, not the corporation. Otherwise, I don't see anything improving, and the dark future you foreshadow will be our children's future.

