It’s possible that deep learning can do this, but I imagine it will always have to be particular to the individual because (as I will suggest when I get to the “Mind” section of the Proem) our perceptions cannot be operating the way we believe they do.
Our perceptions are mental, i.e., we hear our inner conversation because it arises in the “mind” (a dangerous concept, by the way), but it arises in response to conditions created by what is happening in our body, such as “speech” manifesting as brain activity.
But that brain activity isn’t really perceived, because that particular conceptual structure never gets us anywhere, which is why we’ve never been able to scientifically determine what “consciousness” is.
Rather, the brain activity is “known,” immediately, not mediated by some kind of reception or recording of the activity the way our technology will need to do it, but by… yeah, there isn’t any good way to say it directly. Instead, I use the “piezoelectric effect” as an analogy for what “knowing” means. You can look it up, or wait for the Mind section of the Proem to be published. The fundamental tenet in this understanding, though, is that “awareness” is just a synonym for “knowing” and “naturing,” and thus for “perceiving” as well. Note that the abstraction “awareness” is equated to a bunch of verbs here, and also to “time” in the “Time and Eternity” section of the Proem. These are all different ways to point to the same event.
So it’s the same with that inner conversation, because thinking, i.e., the arising of thoughts, is the knowing/naturing/awareness of that “inner conversation” as brain activity. It’s not that meaning is encoded in the electrical/chemical “signaling,” as we nowadays believe to be the case because that is how we program computers to do it; rather, meaning is expressed in the electrical/chemical activity of certain parts of the body (including the brain, of course). And since we each have differences in our bodies (and brains), deep learning will have to be fairly specific to individuals.
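To put that last point in machine-learning terms, here is a rough sketch of what I mean (my own toy illustration, not anything from the article, and every name, shape, and the fake data in it are just assumptions): one plausible arrangement is a backbone trained across many people plus a small per-person layer that has to be fitted to each individual’s own recordings, precisely because the mapping from activity to meaning differs from body to body.

```python
# A minimal sketch, assuming a shared "deep learning" backbone plus a small
# subject-specific layer fitted to each individual's recordings. All shapes,
# names, and the synthetic data are illustrative assumptions, not a real decoder.
import torch
import torch.nn as nn

N_CHANNELS = 64      # hypothetical number of recording channels (e.g., EEG electrodes)
N_FEATURES = 128     # size of the shared representation
N_CLASSES = 10       # hypothetical vocabulary of decoded "inner speech" tokens

# Backbone shared across everyone: learns generic structure in the signals.
shared_backbone = nn.Sequential(
    nn.Linear(N_CHANNELS, N_FEATURES),
    nn.ReLU(),
    nn.Linear(N_FEATURES, N_FEATURES),
    nn.ReLU(),
)

# One small head per individual: absorbs the differences between bodies/brains.
subject_heads = {
    subject_id: nn.Linear(N_FEATURES, N_CLASSES)
    for subject_id in ["subject_a", "subject_b"]
}

def decode(subject_id: str, brain_activity: torch.Tensor) -> torch.Tensor:
    """Map one person's (batch, channels) activity to class scores."""
    features = shared_backbone(brain_activity)
    return subject_heads[subject_id](features)

# Fit only the per-subject head on that person's own (synthetic) data.
subject_id = "subject_a"
x = torch.randn(32, N_CHANNELS)            # fake recordings, for illustration only
y = torch.randint(0, N_CLASSES, (32,))     # fake labels, for illustration only
optimizer = torch.optim.Adam(subject_heads[subject_id].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(decode(subject_id, x), y)
    loss.backward()
    optimizer.step()
```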
I looked at the article you linked to, and it seems like only the very grossest level of activity and location is being interpreted at this point. They have a long way to go to reach the scale of sensing they will need, IMHO.