The Symbol Grounding Problem (1990) (arxiv.org)
16 points by Fibra 4 days ago | 4 comments
MarkusQ 27 minutes ago [-]
This paper annoyed me when it first came out, and still does. Harnad sets arbitrarily high standards for what constitutes a symbol system by I) requiring that the rules themselves be fully symbolic, II) requiring that everything be "semantically interpretable", and III) omitting _relations_ between symbols from the domain of discourse.

In doing so, he rules out much of mathematics: e.g. matrix multiplication, in which the elements have no meaning in isolation; geometry, where points, lines and planes are explicitly left undefined; formal logic; and so on. This "turtles all the way down" stricture largely creates the problem that he then addresses.

jstrebel 9 hours ago [-]
I absolutely love this paper and it's a shame that this research does not receive more attention. Everybody is raving about LLMs, yet everybody ignores the shaky foundations on which they are built (just think of training-data poisoning). It is also a shame that, to my knowledge, no real software applications actually implement the iconic and categorical representations and try to build an AI system around them.
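For concreteness, here is a minimal toy sketch of what such a system could look like. Every name in it is hypothetical, and a nearest-prototype classifier merely stands in for Harnad's categorical representations; the point is only the layering of sensory icons, categories, and symbols grounded in them.

    # Toy sketch (hypothetical names, not Harnad's code): "iconic" representations
    # are raw sensory vectors, "categorical" representations are learned prototypes
    # that sort icons into categories, and elementary symbols are the labels of
    # those categories, so every symbol bottoms out in sensory data.
    import numpy as np

    class GroundedLexicon:
        def __init__(self):
            self.prototypes = {}   # symbol name -> prototype vector (categorical rep.)

        def learn_category(self, symbol, icons):
            """Ground an elementary symbol in a set of iconic (sensory) projections."""
            self.prototypes[symbol] = np.mean(np.stack(icons), axis=0)

        def name(self, icon):
            """Map a new sensory projection to the nearest grounded symbol."""
            return min(self.prototypes,
                       key=lambda s: np.linalg.norm(icon - self.prototypes[s]))

    # Elementary symbols grounded directly in (fake) sensory data ...
    lex = GroundedLexicon()
    lex.learn_category("horse",   [np.array([1.0, 0.0, 0.9]), np.array([0.9, 0.1, 1.0])])
    lex.learn_category("stripes", [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.0])])

    # ... and a higher-order symbol defined by composing grounded ones,
    # in the spirit of the paper's "zebra = horse & stripes" example.
    composed = {"zebra": ("horse", "stripes")}

    print(lex.name(np.array([0.95, 0.05, 0.95])))   # -> "horse"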
lambdaone 6 hours ago [-]
Purely symbolic AI has been tried and found wanting. Decades of research by hundreds of extremely bright people explored a large number of promising-looking approaches, to no avail. Intuition tells us thinking is symbolic; the failure of symbolic systems tells us intuition is most likely wrong.

What is interesting about current LLM-based systems is that they follow exactly the model suggested by this paper, bolting together neural systems with symbol-manipulation systems - to quote the paper, "connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling."

They are clearly also kludges. As you say, they are built on shaky foundations. But the success of these kludged-together neural/symbolic systems - at least compared to anything that has gone before - suggests that the approach is more fertile than its predecessors. They are also still far, far away from the AGI predicted by their most enthusiastic proponents.

My best guess is that future successful hard-problem-solving systems will combine neurosymbolic processing with formal theorem provers: the neurosymbolic layer constructs proposals and candidate proofs, which are then submitted to symbolic provers to check for success.
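Roughly a loop like the following, sketched very loosely; every name here is hypothetical, and a trivial arithmetic check stands in for a real prover such as Lean or Z3. The shape is what matters: an untrusted generator proposes, a trusted checker decides.

    # Propose-then-verify sketch (hypothetical names): a stand-in "neurosymbolic"
    # proposer guesses candidate solutions, and a symbolic checker accepts or
    # rejects them. The "theorem" here is just an integer to factor.
    import random

    def neural_propose(n):
        """Stand-in for a neurosymbolic generator: emit a candidate factorisation."""
        a = random.randint(2, n - 1)
        return (a, n // a)

    def symbolic_verify(n, candidate):
        """Stand-in for a formal checker: accept only candidates that actually hold."""
        a, b = candidate
        return a * b == n and a > 1 and b > 1

    def solve(n, budget=10_000):
        for _ in range(budget):
            candidate = neural_propose(n)
            if symbolic_verify(n, candidate):
                return candidate   # a machine-checked result
        return None

    print(solve(91))   # e.g. (7, 13): the proposer may be unreliable, the checker is not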

jstrebel 2 hours ago [-]
I think there is a misunderstanding - the whole point of my comment was that LLMs lack the sensory input that could link their neural activations to real-world objects and thus ground their computations.

I agree with you that purely symbolic AI systems had severe limitations (just think of the expert systems of the past), but the direction must go not only towards higher-level symbolic provers but also towards lower-level integration of sensory data.