These are some thoughts I’ve had while listening to a Lex Fridman interview with Edward Frenkel, a mathematician at UC Berkeley working on mathematical quantum physics.
In the information age, we like to see everything as computation. But what do we mean when we say that something is computation? We mean that a physical system with predictable interactions produces a meaningful result. If we somehow learned that the universe was computational in nature, the only thing that would add is that the universe’s state is somehow meaningful.
Calling something computation is a subjective claim, because it depends on whether some subject extracts meaning from the state of the universe. I think I’ve always seen the universe as computational in nature, and to me that implied that everything in it was predictable and understandable. But the universe might be incomprehensible to a human yet comprehensible to another subject. Or it could be incomprehensible to any subject.
Paradoxes and confusion arise from our lenses on reality. We find the wave-particle duality of light interesting because of the distinction between our human notions of wave and particle. Behavior is behavior, and light behaves like light. The duality is in our understanding of light, not the light itself.
There are questions about reality that can’t be answered, mostly around subjectivity. To Frenkel this makes the universe ineffable. Math claims to be objective, but hides its subjectivity in the axioms of the formal system. Gödel’s incompleteness theorem and the halting problem demonstrate that there are facts about the universe that we won’t be able to arrive at rationally. And then there’s all the subjectivity inherent in quantum physics.
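To make the halting-problem point a bit more concrete, here is a minimal sketch of the classic diagonalization argument in Python. The names `halts` and `paradox` are hypothetical, chosen for illustration; the argument is that no general `halts` decider can exist, because `paradox` is built to contradict whatever answer it gives.

```python
def halts(program, argument) -> bool:
    """Hypothetical decider: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("No such general decider can exist.")

def paradox(program):
    # Ask the decider about a program fed to itself, then do the opposite.
    if halts(program, program):
        while True:   # loop forever if the decider says "halts"
            pass
    return "done"     # halt immediately if the decider says "loops"

# Feeding `paradox` to itself produces the contradiction: if halts(paradox, paradox)
# returns True, then paradox(paradox) loops forever; if it returns False, then
# paradox(paradox) halts. Either way the decider is wrong, so it cannot exist.
```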
We’ve known about much of this for about a century, and yet the culture of science still hasn’t really dealt with it. We still talk as though everything is billiard balls and free will is an illusion, while at the same time acknowledging on paper the fundamental subjectivity of reality, consciousness, and so on.
In a conversation after a presentation about alignment, I briefly discussed the idea that our problem with trusting AI systems to do things is that we can’t be confident that they take in the full context that a human would when making a decision. I then wondered whether, in the future, there could be more powerful systems that take into account even more context than a human can. What Frenkel says about the subjective nature of reality makes me believe that this is impossible, and dangerous to attempt. Who is (wo)man to judge whether a system takes more into account than a human can? By claiming such, the human is simply exercising power, implementing their own idea of reality in the system they’ve created.