Symbolic grounding
2026-02-18 · 284 words
After yesterday’s massive post (and given the two assignments I have to do today) I figured I would write something a little shorter today. A little observation, perhaps.
In the ’70s and ’80s, people cared a lot about symbolic reasoning: precise relational systems of facts and rules used to derive logical truths. Any such relational system is, at bottom, a system for resolving constraints. These ideas fell out of favor once the powerful, general “statistical” methods we use today took over.
From casual usage, I’ve found that writing code with language models pairs very well with property-based testing. Give the model a list of invariants and constraints, along with types and interface definitions, and have it generate a ream of tests using a property-based testing library.
Verify the tests are correct, then let the model run in a loop against the symbolically grounded system until all constraints are satisfied. You can even tell it to keep a record of the search tree in a consistent format as it goes, so it knows what to try next and where to backtrack.
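A minimal sketch of the kind of harness I mean, in stdlib-only Python (a real setup would use a library like Hypothesis). `normalize` is a hypothetical function under test; the invariants in the comments are the "list of constraints" you'd hand the model, and the loop is what it runs against until everything passes:

```python
import random

def normalize(xs):
    """Hypothetical function under test: dedupe and sort a list of ints."""
    return sorted(set(xs))

def check_properties(trials=200):
    """Minimal property-based loop over random inputs."""
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-10, 10) for _ in range(rng.randint(0, 20))]
        out = normalize(xs)
        # invariant 1: idempotence — normalizing twice changes nothing
        assert normalize(out) == out
        # invariant 2: output is sorted and duplicate-free
        assert out == sorted(out) and len(out) == len(set(out))
        # invariant 3: no elements invented or lost
        assert set(out) == set(xs)
    return True
```

Each failing assertion is a hard, symbolic signal the model can iterate against, rather than a vibe about whether the code "looks right."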
This turns a fuzzy word problem into an unyielding semantic one; the model acts as a very good prior for performing symbolic search.
We’re seeing moves in this direction as newer models employ thousands of little “tool calls” while reasoning, to symbolically ground their reasoning. These systems are trained end-to-end; one day models may be little more than good priors from which to sample belief networks on the fly.
I think Datalog, or relations generally, should be built into programming languages. One language does this well:
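To make the idea concrete, here is a naive-fixpoint sketch in Python of the classic Datalog rule `path(x, z) :- edge(x, y), path(y, z)` — transitive closure over a relation. (Flix lets you write the rule itself as a first-class language construct; this is just the semantics spelled out by hand.)

```python
def transitive_closure(edges):
    """Naive Datalog-style fixpoint: repeatedly apply
    path(x, z) :- edge(x, y), path(y, z) until nothing new is derived."""
    path = set(edges)
    while True:
        derived = {(x, z) for (x, y) in edges
                          for (y2, z) in path if y == y2}
        if derived <= path:  # fixpoint reached
            return path
        path |= derived
```

For example, `transitive_closure({(1, 2), (2, 3)})` derives the extra fact `(1, 3)`.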
Daily reading: The Flix Programming Language
All prose on this website is written by me, Isaac. I feel very strongly about preserving my voice, and will not use AI to publish prose under my name.