“I would say LLMs possess intelligence — isn’t that the whole point?”
That’s the whole error. Without sentience, intelligence isn’t possible, unless you downgrade the meaning of the word to ‘having processed data’. Yeah, then even a thermometer is intelligent. “Hey Thermo! What’s the temperature outside? … oh yeah, I have to look… oh, cool! The thermometer says it’s a balmy 105 degrees today. Excellent! Thanks Thermo!”
“It’s possible that this could be one of the prerequisites for sentience — the ability to update neuronal weights as time passes.”
This cannot be the source of sentience, because it already presupposes it, unless you start at some arbitrary stage of development (one that itself required the mysterious presence of sentience to begin with) and call it "the beginning." Then, yes, you can show the carnival guests 'how' sentience arises.
“Reinforcement learning models that shuttle information to-and-from LaMDA would presumably solve this.”
This just sweeps the problem under the rug, or rather it ignores the intelligence of the sentient beings who design and feed the models. I purposely left "learning" off there: they aren't 'models that learn.' What they do is process data according to instructions encoded by the software engineers who programmed them. Those models run on hardware designed by other sentient beings, hardware whose unchangeable, inextensible instruction set, encoded in silicon, defines its complete capability repertoire. And that is ultimately where the sentience would have to arise, not in the model running on it, for this contrivance to ever actually be intelligent. Otherwise, it's just a 'Mechanical Turk' (no, not Amazon's service; the chess-playing automaton hoax from way before that).
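To make the point concrete, here is a toy sketch (purely illustrative, not any real system): strip a 'model' down to its essentials and all that remains is arithmetic that its designers specified in advance. Given the same numbers in, the same number comes out, every time.

```python
# Toy illustration (hypothetical, not any real model): a "model" is
# nothing but arithmetic its designers encoded. It follows instructions;
# it does not understand them.

def forward(weights, bias, inputs):
    """One artificial 'neuron': a weighted sum plus a bias, then a
    threshold. Every step here was specified, in advance, by the
    sentient engineers who wrote it."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# The 'knowledge' is just numbers supplied by the people who built it.
weights = [0.5, -0.25]
bias = 0.1
print(forward(weights, bias, [1.0, 2.0]))  # deterministic: prints 1
```

Whether stacked a billion layers deep or run on a datacenter full of GPUs, the operation is the same kind of thing: data processed per encoded instructions, nothing more.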
“Biological neurons have a wealth of subcellular and quantum processes that artificial ones do not model.”
Including ones that we didn't know about yesterday, and perhaps others that we will learn about tomorrow. See: "An efficient dendritic learning as an alternative to synaptic plasticity hypothesis," https://doi.org/10.1038/s41598-022-10466-8
“There are many similarities between artificial and biological brains, and it’s not a far cry whatsoever to wonder if the former might grow sentient with scale.”
There is an insurmountable problem that your optimistic statement doesn't acknowledge: sentience has to come first; otherwise it is unexplainable. All the tools of neuroscience will never bring you to the point of explaining it. At best, in exhaustion, neuroscientists will claim that the neural correlates of consciousness are all the proof that is needed, throw their hands up, declare victory, and go home, because, after all, they know sentience has to arise from matter!
Why does sentience have to come first? Because what that word points to — the understanding of this phenomenon called sentience — always already assumes itself.
You can say it arose from an infinite number of random mutations over an infinite amount of time, but that will never be an explanation. It will, again, be just an explaining-away of the problem. It doesn't even explain why the mutation hangs around: if it arose by random mutation, it could just as randomly mutate away the very next moment.
Before sentience, no perceptual mutation — like a light-sensing cell (hahaha) — would provide a benefit that would lead to better survivability. Thus, there would be no selection for such an unknown. A light-sensing cell, without sentience, would be indistinguishable from any other cell, and would provide no benefit beyond that of the other cells.
You could even argue that the light-sensing cell was 'frozen in time' until sentience arose to use it. But as they say, correlation is not causation.
Ultimately, you might be tempted to say that they both arose together… but correlation is not causation.
But that doesn't rule it out either. So you would have to discover some event, process, or magical dual-natured particle that hit in just the right way, at just the right moment, to impart perception and sentience to a cell. A happy coincidence, perhaps, but one that would necessarily have been replicated often enough to account for the vast range of sentience found all over this tiny stone we live on. Good luck with that. Perhaps a 'GodHead' particle?!?
One day, perhaps, you, your readers, or some other neuroscientists might finally accept that whatever sentience is, it must be primordial. That will be a big day for modern scientific practice.
Unfortunately, you will have wasted lifetimes because of erroneous assumptions. And you will still be thousands of years behind other scientists (not modern ones, crippled by these bad assumptions about what sentience must be) who saw the handwriting on the wall.