Only partway through the article but it was a small shock when the word 'rodent' turned up unexpectedly:
"...later, if the rodent reenters that place, the cell will fire"
Totally fair and normal, of course; I'd just been imagining human or generic neurons/dendrites up to that point. The test species wasn't mentioned earlier, as far as I can see!
mobeets 26 seconds ago [-]
Good point; this is probably the author assuming a little more context on the reader's part. The cases where you can record from neurons in humans are very rare (basically only in treatment-resistant epilepsy), and most of the work on the hippocampus uses rodents.
in-silico 3 hours ago [-]
Earlier in the article:
> In 2014, when Magee attached electrodes to rodents to record their neural activity,
largbae 10 hours ago [-]
It seems obvious that a humanoid robot system or other truly general-purpose AI will need a stack of model types that work in concert. An LLM could be analogous to the conscious part of our brains, while many smaller and possibly frequently updateable models might provide "muscle memory" and reflexes.
If that becomes the case, then similarly built humanoid robots might have differentiated capabilities depending on their experience, just like us.
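A minimal sketch of that "stack of models" idea, purely illustrative: a slow deliberative planner (standing in for an LLM) updates the goal only occasionally, while a cheap reflex policy reacts on every tick. All the names, rates, and the toy 1-D world here are invented for illustration.

```python
def slow_planner(position):
    # Stand-in for an expensive LLM call: pick a high-level goal.
    return "approach" if position < 5 else "retreat"

def fast_reflex(goal):
    # Cheap "muscle memory" policy: immediate motor command.
    return +1 if goal == "approach" else -1

def control_loop(ticks, replan_every=10):
    # Deliberation is rate-limited; reflexes run every tick.
    position, goal = 0, "approach"
    for t in range(ticks):
        if t % replan_every == 0:
            goal = slow_planner(position)   # slow, infrequent
        position += fast_reflex(goal)       # fast, every tick
    return position
```

Run for a while and the agent oscillates around the planner's setpoint, with the reflex layer doing all the high-frequency work; in the analogy, only the planner would need the big model, and the reflex policies could be small and frequently retrained.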
idiotsecant 13 minutes ago [-]
I think the LLM is more like the 'internal monologue'. I'm hardly qualified to claim this, since as far as I can tell I don't have one, but I understand it's constantly observing and providing 'first draft' thinking of roughly LLM quality.
harrall 6 hours ago [-]
An LLM is more like the unconscious part of my brain. It’s my gut. It shits out answers using an ungodly amount of parallel processing and it’s often right.
But it also hallucinates thoughts and beliefs too, and that’s where the conscious parts have to intervene.
But the conscious parts are expensive to run and I can’t multi-task that.
The conscious parts also degrade first when I don’t get enough sleep.
Zababa 1 hour ago [-]
>It seems obvious that a humanoid robot system or other truly general-purpose AI will need a stack of model types that work in concert.
I don't think that much of AI today is obvious, so I'm suspicious of anything that is "obvious" about the future.
balamatom 6 hours ago [-]
OK AI user.
Did it truly take someone else to externalize the mechanics of cognition into a machine for you, for you to become able to notice them and become interested in them?
And then to remain focused on the machine that you see, rather than the machine that you are.
Pitiful.