Hi HN, I'm Alex. Xi is a research language built around one idea: what if we remove the constraint that programs must be human-readable text?
The result is a binary graph format with exactly 10 primitive constructs (λ, @, Π, Σ, , μ, ι, #, !, var). These 10 are mathematically sufficient to express any computation, any data type, and any proof — comparable in spirit to the Calculus of Inductive Constructions, but encoded as a content-addressed directed graph rather than as text.
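To make the encoding concrete, here's a rough Python sketch of the idea. The tags, the payload field, and the JSON serialization are illustrative only, not the repo's actual binary format: a node is a tag plus an ordered list of child hashes, and its identity is the SHA-256 of that serialization.

```python
import hashlib
import json

def node(tag, children=(), payload=None):
    # A node's identity is the hash of its tag, payload, and child hashes,
    # so two structurally equal subgraphs always collapse to the same address.
    body = json.dumps([tag, payload, list(children)], separators=(",", ":"))
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

# λx. x with de Bruijn indices: a λ node whose body is variable 0
identity = node("lam", [node("var", payload=0)])

# (λx. x) (λx. x): both operands are literally the same hash, so a
# content-addressed store keeps exactly one copy of that subgraph.
self_app = node("app", [identity, identity])

print(identity == node("lam", [node("var", payload=0)]))  # True
```

Structural equality really is string equality on the hashes, which is what makes the deduplication mentioned under "Why not text?" below fall out for free.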
What's in the repo:
- Formal specification of the 10 primitives
- Reference interpreter + type checker in Python (~2k LOC)
- Pattern matching (ι-elimination) over inductive types (a toy sketch follows this list)
- Content-addressed module system (SHA-256)
- 144 tests, example programs (Fibonacci, factorial, Church numerals, an expression evaluator written in Xi)
- FPGA prototype sketch + Lean 4 formalization of the core type system
- Docker support: `docker run xi-lang demo`
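On the ι-elimination bullet above, here's a toy sketch of what eliminating an inductive value looks like. The constructor names and the eliminator's signature are mine, not the repo's: a value is a constructor tag plus arguments, and elimination is a fold over that structure.

```python
# Nat as an inductive type: zero and succ are just tagged nodes.
zero = ("zero", ())
def succ(n): return ("succ", (n,))

def nat_elim(n, on_zero, on_succ):
    # Dispatch on the constructor tag; recurse on the predecessor and
    # hand the caller both the predecessor and the recursive result.
    tag, args = n
    if tag == "zero":
        return on_zero
    (pred,) = args
    return on_succ(pred, nat_elim(pred, on_zero, on_succ))

# Addition defined through the eliminator: fold succ over the first argument.
def add(m, n):
    return nat_elim(m, n, lambda _pred, ih: succ(ih))

print(add(succ(succ(zero)), succ(zero)))  # 2 + 1 as three nested succ nodes
```

This is only the shape of the idea — in Xi the elimination is the ι primitive applied to graph nodes, not a Python function.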
Why not text? Three reasons:
1. AI generation — no syntax errors possible. The graph is always well-formed by construction.
2. Content addressing — every node is identified by its SHA-256 hash. Structural equality = byte equality, so global deduplication is free.
3. Hardware — a graph reduction processor can execute Xi directly. No parsing, no compilation, just pointer chasing on silicon.
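To give a feel for reason 3 — evaluation as nothing but local rewriting — here's a toy normal-order β-reducer over de Bruijn terms, with Python tuples standing in for graph nodes. It copies instead of sharing and ignores types entirely, so it's a sketch of the principle rather than the repo's interpreter.

```python
def shift(t, d, cutoff=0):
    # Adjust free variable indices by d (used when moving a term under a binder).
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    # Replace variable j in t with s.
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    # One leftmost-outermost β step, or None if t is already in normal form.
    tag = t[0]
    if tag == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":
            return shift(subst(f[1], 0, shift(a, 1)), -1)
        for i, sub in ((1, f), (2, a)):
            r = step(sub)
            if r is not None:
                return ("app", r, a) if i == 1 else ("app", f, r)
        return None
    if tag == "lam":
        r = step(t[1])
        return ("lam", r) if r is not None else None
    return None

# K = λx. λy. x applied to I = λz. z reduces to λy. λz. z
K = ("lam", ("lam", ("var", 1)))
I = ("lam", ("var", 0))
t = ("app", K, I)
while (r := step(t)) is not None:
    t = r
print(t)  # ('lam', ('lam', ('var', 0)))
```

A hardware reducer would do the same rewrite in place on shared nodes instead of rebuilding tuples, which is the "pointer chasing on silicon" part.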
The tradeoff is obvious: humans can't read Xi directly. You use visualization tools (included) like you'd use a circuit viewer for GDSII.
This is a research project, not a production language. I'd love feedback from the PLT community — especially on the choice of 10 primitives and whether the inductive type encoding is complete enough.