PIONEERING AI RESEARCH

Aleph²: The 2nd Order of Intelligence

Where mathematical elegance meets infinite possibility. Advancing the boundaries of efficient AI architectures.

ABOUT

Thinking in the Embedding Space

PoT (Pointer-over-Heads Transformer) is built around a simple idea: instead of producing output in one forward pass, the model thinks through its representations over several refinement steps.

At every step, the model looks at its current hidden states and asks: "Given what I know now, how should I use my attention heads to refine this understanding?"

This process is not about memorizing — it's about progressive self-correction. PoT doesn't just compute token embeddings — it thinks within them.

Iterative Refinement

Apply the transformer stack R times for multi-step reasoning. By the final iteration, embeddings encode a richer, more internally consistent view.
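
A minimal PyTorch sketch of the idea (illustrative only; the module layout, sizes, and the value of R below are assumptions, not the released PoT code):

```python
# Minimal sketch of iterative refinement, assuming a standard PyTorch encoder
# stack: the same stack is applied R times, so later passes can revise the
# hidden states produced by earlier passes.
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4, R=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.stack = nn.TransformerEncoder(layer, n_layers)
        self.R = R  # number of refinement passes

    def forward(self, x):
        # x: (batch, seq_len, d_model) token embeddings
        for _ in range(self.R):
            x = self.stack(x)  # each pass refines the previous representation
        return x

refined = IterativeRefiner()(torch.randn(2, 16, 256))
```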

Two-Timescale Controller

A fast component adapts every step. A slow component maintains broader contextual plans, forming hierarchical reasoning.
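
One way such a controller might be wired, shown as a hedged sketch (the GRU cells, the pooled summary input, and the slow_period schedule are assumptions for illustration, not the reference design):

```python
# Hedged sketch of a two-timescale controller: a fast GRU cell updates on
# every refinement step, while a slow GRU cell updates only every
# `slow_period` steps and carries the longer-horizon plan.
import torch
import torch.nn as nn

class TwoTimescaleController(nn.Module):
    def __init__(self, d_model=256, slow_period=4):
        super().__init__()
        self.fast = nn.GRUCell(d_model, d_model)
        self.slow = nn.GRUCell(d_model, d_model)
        self.slow_period = slow_period

    def forward(self, summary, fast_h, slow_h, step):
        # summary: (batch, d_model) pooled view of the current hidden states
        fast_h = self.fast(summary, fast_h)      # adapts at every step
        if step % self.slow_period == 0:         # slower, strategic updates
            slow_h = self.slow(fast_h, slow_h)
        return fast_h + slow_h, fast_h, slow_h   # combined control signal

ctrl = TwoTimescaleController()
fast_h = slow_h = torch.zeros(2, 256)
for step in range(8):
    signal, fast_h, slow_h = ctrl(torch.randn(2, 256), fast_h, slow_h, step)
```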

ARCHITECTURE

Pointer-Over-Heads Transformer

Dynamic-Routing Transformer with Iterative Refinement

Head-Wise Routing

Dynamically select or weight attention heads per token via differentiable softmax.
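
A hedged sketch of what per-token head routing can look like (the linear-scorer-plus-softmax below is an assumption for illustration, not necessarily the exact PoT mechanism):

```python
# Sketch of head-wise routing: a linear layer scores each attention head for
# every token, and a softmax turns the scores into differentiable mixing
# weights alpha (one distribution over heads per token).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeadRouter(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.score = nn.Linear(d_model, n_heads)  # one logit per head

    def forward(self, h):
        # h: (batch, seq_len, d_model) -> alpha: (batch, seq_len, n_heads)
        return F.softmax(self.score(h), dim=-1)

alpha = HeadRouter()(torch.randn(2, 16, 256))
print(alpha.sum(dim=-1))  # each token's head weights sum to 1
```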

Controller

Two-timescale recurrent modules for fast adaptation and strategic planning.

PoH Block Architecture

Controller → α weights → Weighted Multi-Head Attention → SwiGLU FFN.
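
A rough sketch of that block, following the flow above (controller, α weights, head-weighted attention, SwiGLU FFN); the tensor shapes and the exact point where α scales each head's output are assumptions for illustration:

```python
# Rough sketch of one PoH-style block: a controller produces per-token alpha
# weights, each attention head's output is scaled by its weight, and a SwiGLU
# feed-forward network follows.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoHBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_ff=1024):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)
        self.controller = nn.Linear(d_model, n_heads)   # alpha logits per token
        self.w_gate = nn.Linear(d_model, d_ff)          # SwiGLU gate branch
        self.w_up = nn.Linear(d_model, d_ff)
        self.w_down = nn.Linear(d_ff, d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        b, t, d = x.shape
        h = self.norm1(x)
        alpha = F.softmax(self.controller(h), dim=-1)   # (b, t, n_heads)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
                   for z in (q, k, v))
        heads = F.scaled_dot_product_attention(q, k, v)      # (b, heads, t, hd)
        heads = heads * alpha.transpose(1, 2).unsqueeze(-1)  # weight each head
        x = x + self.proj(heads.transpose(1, 2).reshape(b, t, d))
        h = self.norm2(x)
        return x + self.w_down(F.silu(self.w_gate(h)) * self.w_up(h))  # SwiGLU FFN

y = PoHBlock()(torch.randn(2, 16, 256))
```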

Inner Thinking Cycles

16 total reasoning steps with dynamic head routing at each cycle.

Multiple Controllers

Supports Transformer, Mamba, and Diffusion depth controllers.

Feature Injection

6 modes including broadcast, film, depth_token, and alpha_gated.
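
As an illustration, a hedged sketch of two of these modes; the names come from the list above, but the exact semantics shown here (additive broadcast, FiLM-style scale and shift) are assumptions:

```python
# Hedged sketch of two injection modes: "broadcast" adds the controller
# feature to every token, while "film" applies FiLM-style per-feature scale
# and shift. Semantics are assumed for illustration.
import torch
import torch.nn as nn

class FeatureInjector(nn.Module):
    def __init__(self, d_model=256, mode="film"):
        super().__init__()
        self.mode = mode
        self.to_scale = nn.Linear(d_model, d_model)
        self.to_shift = nn.Linear(d_model, d_model)

    def forward(self, h, ctrl):
        # h: (batch, seq_len, d_model) hidden states
        # ctrl: (batch, d_model) controller feature to inject
        if self.mode == "broadcast":
            return h + ctrl.unsqueeze(1)          # add to every token
        if self.mode == "film":
            scale = self.to_scale(ctrl).unsqueeze(1)
            shift = self.to_shift(ctrl).unsqueeze(1)
            return h * (1 + scale) + shift        # FiLM-style modulation
        raise ValueError(f"unsupported mode: {self.mode}")

out = FeatureInjector(mode="broadcast")(torch.randn(2, 16, 256), torch.randn(2, 256))
```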

LIVE DEMO

See PoT in Action

Experience inner-thinking cycles with our interactive Sudoku solver

9x9 Sudoku Solver

PoT Sudoku Solver

Watch the model think through multiple refinement cycles, adjusting its reasoning in real-time. All reasoning happens in the embedding space — no chain-of-thought tokens required.

Launch Demo

VISION

The Gap is Real

There is a missing piece in today's ChatGPT. The gap is real, and it can be quantified.

As I showed in the demo, a 20-million-parameter model running on a CPU beats any ChatGPT that exists today running on B200 GPUs at inference time.

That gap by itself constitutes a new field, Symbol, that can spawn derivatives across all markets: from Symbolic Robot tasks to Symbolic Routers.

We have discovered a new "Equivalence Class" of strange, alien-like symbolic methods that could impact the market.

20M Parameters on CPU
vs 1.7T ChatGPT Parameters
Symbol: New AI Field

FOUNDER

Eran Ben Artzy

Founder & Researcher

Pioneering the next generation of efficient AI architectures. Focused on bridging the gap between massive language models and lightweight symbolic reasoning systems that outperform them at a fraction of the cost.

CONTACT

Get in Touch

Ready to explore the second order of intelligence?