Ingressing hypotheses
Been noodling on some thoughts after listening to Mike's chat with Iain McGilchrist.
How do we generate good hypotheses, skipping all the bad ones?
How do intuitive people/cells seem to, occasionally, go straight to a correct answer in the face of ~infinite choice?
It's important to avoid brute-forcing, i.e., generating lots of bad hypotheses, because, among other reasons:
- Not only are bad hypotheses incorrect, they're a liability. They might lead you astray.
- We're bottlenecked by the number of experiments we can run. By compute, lab resources, number of participants, etc.
That's why it was so important for AlphaGo to use its policy network to guide tree search toward a small subset of promising moves to simulate. It wouldn't have been computationally feasible to simulate all moves.
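To make the idea concrete, here's a toy sketch (not AlphaGo's actual algorithm; `policy`, `simulate`, and `select_move` are hypothetical names): a cheap prior ranks candidates, and the expensive simulation budget is spent only on the top k.

```python
def select_move(state, legal_moves, policy, simulate, k=5):
    """Simulate only the k moves the policy considers most promising,
    rather than every legal move -- the pruning that keeps search tractable."""
    # Rank moves by a cheap heuristic prior (the "policy").
    promising = sorted(legal_moves, key=lambda m: policy(state, m), reverse=True)[:k]
    # Spend the expensive simulation budget only on that subset.
    return max(promising, key=lambda m: simulate(state, m))
```

The same shape applies to hypothesis generation: a cheap prior over hypotheses decides which few are worth an expensive experiment.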
So. In a similar way to how nature exploits free lunches like the golden ratio, fractals, etc., I'm wondering if we could build a hypothesis generator that taps into these patterns?
How might we utilize this latent/platonic space of patterns?
A couple of quick ideas:
Maybe we build an agent whose paths for generating and iterating on hypotheses are fractal: when it finds an idea it's confident is useful/truthy, it spins off new branches to explore, trying to stay close to and build on known, useful patterns. Similar to beam search.
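A minimal sketch of that branch-and-prune loop, phrased as beam search (all names here are hypothetical; `expand` and `score` stand in for the agent's generator and its confidence estimate):

```python
import heapq

def beam_search(seed, expand, score, beam_width=3, depth=4):
    """Iteratively branch from promising hypotheses, pruning the rest."""
    beam = [seed]
    for _ in range(depth):
        # Branch: each surviving hypothesis spins off several variants.
        candidates = [child for h in beam for child in expand(h)]
        if not candidates:
            break
        # Prune: keep only the beam_width highest-scoring hypotheses.
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return max(beam, key=score)
```

The fractal flavor would come from `expand` itself being self-similar: each branch applies the same generate-and-prune pattern at a finer scale.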
Or maybe we train an encoder explicitly optimized to capture information about these platonic patterns?
Here's part of the Ingressing Minds abstract translated for our use case of using these patterns for hypothesis generation.
Are there patterns that nature and humans haven't discovered that we might find, doing something like this in computers?
Obviously the latent space of language models holds a lot of information about these patterns, but it isn't optimized for understanding and exploiting them.