How RunMat Works
RunMat is a new, modern runtime for MATLAB code. It implements the full language grammar and core semantics — arrays and indexing, control flow, functions and multiple returns, cells/structs, and classdef OOP — with a fast, V8-inspired engine.
We keep the core slim and blazing fast, then grow the ecosystem with packages. RunMat is a clean, fast runtime for the MATLAB language with an open extension model.
Why legacy approaches are slow
Heavy startup: Large monolithic runtimes load thousands of functions, licensing systems, and global tables on boot. That overhead dominates short-running scripts.
Runtime inefficiencies: Traditional interpreters translate operations line-by-line on every execution. Hot loops are re-interpreted over and over; dispatch overhead swamps the math.
The result? You wait: at startup, in every loop, and whenever the runtime can't adapt to how your code actually behaves.
RunMat: slim core + adaptive compilation
Inspired by modern JavaScript engines, RunMat starts instantly in a high-performance interpreter and upgrades hot code to native machine code. The core is intentionally small and predictable, so the engine can be ruthlessly optimized.
Phase 1: Instant Execution
Your code runs immediately through our Ignition interpreter — no compilation wait time. Perfect for interactive work, tests, and scripts. While executing, we profile which functions are hot.
Phase 2: Machine Code
Functions that run frequently get upgraded to native machine code via the Turbine JIT: loop optimizations, type-specialized paths, and vectorization tailored to your data and call patterns.
Why this approach wins
The RunMat JIT sees exactly how your code behaves: matrix sizes, datatypes, calling patterns. The 20% of your code that runs 80% of the time gets the full optimization budget.
Language compatibility and semantics
(), [], command-form calls, and classdef are all accepted by the parser. The runtime implements full indexing semantics (end, colon ranges, logical masks); cells and structs; OOP dispatch with subsref/subsasgn; try/catch; and global/persistent variables.
HIR & AST: foundation for acceleration and tooling
RunMat lowers source to a high-level, typed IR (HIR) with flow-sensitive inference. A stable AST→HIR pipeline gives the engine precise structure and types, which unlocks acceleration and first-class editor tooling.
Seamless Accelerate
Tensor/Matrix ops are explicit in HIR, so the planner can route work to CPU or GPU, fuse common elementwise chains, and support reverse-mode autograd by default. This lets the planner dispatch and manage memory and operations across GPU/CPU/TPU/etc without needing any user code changes.
Great IntelliSense (LSP)
Typed nodes power hover types, signature help, go-to-definition across packages, property/field completion from the class registry, and precise diagnostics. The same HIR backs the interpreter and JIT.
Portable by design
Typed HIR lowers to Cranelift IR, which makes it trivial to target multiple architectures. That makes RunMat platform-agnostic and lightweight: small static binaries, predictable performance on Linux/macOS/Windows or embedded devices, and room for ahead-of-time or cached compilation where it helps startup.
Why this architecture is different
Octave's classic interpreter does not expose a typed IR, limiting deep optimization and IDE tooling. MATLAB has powerful internal IRs but limited external LSP integration. RunMat's stable HIR/ABI and Cranelift backend make accelerators and editors plug in cleanly across platforms.
Performance Characteristics
Benchmark results live in /benchmarks; reproduce them with ./benchmarks/run_benchmarks.sh. Note that we cannot benchmark RunMat against MathWorks MATLAB and its ecosystem, because installing it requires agreeing to their strict and dense license agreement, which we will not do.
Deep Dive: How the Magic Happens
The Ignition Interpreter: Speed from Day One
When you type A = [1, 2; 3, 4] and press Enter, here's what happens in microseconds:
1. Lexical Analysis: Your MATLAB code gets tokenized into meaningful chunks
2. Parsing: Tokens become an Abstract Syntax Tree (AST) representing the structure
3. HIR Translation: The AST becomes High-level Intermediate Representation for optimization
4. Bytecode Generation: HIR compiles to compact bytecode instructions
5. Immediate Execution: Bytecode runs instantly while profiling counters track "hot" functions
This entire pipeline completes in under 5ms — faster than most editors can update their syntax highlighting.
The Turbine JIT: When Performance Really Matters
Once a function gets called enough times (the threshold is configurable, with sensible defaults), Turbine kicks in:
- Type Specialization: Generate optimized code paths for your specific data types
- Loop Optimization: Unroll tight loops and vectorize operations using SIMD instructions
- Function Inlining: Eliminate call overhead for frequently used builtins like sin() or cos()
- Memory Layout Optimization: Arrange data structures for cache-friendly access patterns
- Dead Code Elimination: Remove branches that never execute in your specific use case
The result? Machine code optimized for your specific usage patterns, with specialized paths for your data types and calling conventions.
Memory: The Foundation of Speed
MATLAB's strength has always been matrices (and tensors), so we designed our memory system from the ground up for numerical computing:
Zero-Copy Arrays
Arrays use column-major layout (just like MATLAB) and pass directly to BLAS/LAPACK libraries without any data copying. Large matrix/tensor operations can work directly with the underlying memory without unnecessary data movement.
Copy-on-Write Semantics
When you write B = A, we don't copy the entire matrix or tensor. Instead, B shares A's memory until you modify one of them. This preserves MATLAB's value semantics while dramatically reducing memory usage.
Garbage Collection: Smart Memory Management
Scientific computing creates lots of temporary arrays — intermediate results in calculations, temporary matrices in algorithms, etc. Our garbage collector is designed with this pattern in mind:
Generational Collection
Young objects (like temporary calculation results) get collected quickly and frequently. Old objects (like your main data matrices) are scanned rarely via remembered sets, minimizing overhead.
Short, predictable pauses
Minor collections are stop-the-world but fast. Write barriers track old→young edges so minor GCs only scan what changed, not the entire heap.
Instant Startup: 2000x Faster Boot Times
Remember how MATLAB takes 10+ seconds to start? That's a symptom of a larger problem. RunMat cuts startup time with modern techniques: snapshotting, a lightweight runtime design, and fast compilation.
Lightweight Runtime Architecture
Built from the ground up in Rust with minimal startup overhead. No massive Java runtimes, no complex licensing checks, no bloated legacy code. Just a lean, fast runtime that gets out of your way and lets you focus on your work.
Instant Compilation Pipeline
Our compilation pipeline is designed for speed: fast lexing, efficient parsing, and immediate bytecode execution. Combined with pre-warmed snapshots of the standard library, we eliminate the cold-start penalty that plagues traditional environments.
Modern Plotting: Built for the GPU Era
Traditional MATLAB plotting is CPU-bound and struggles with large datasets. We built a clean, GPU-accelerated plotting system that's fast, beautiful, and seamlessly integrated with the rest of the runtime:
GPU-Accelerated Rendering
Built on WebGPU (wgpu) with custom WGSL shaders. Designed to handle large datasets efficiently by leveraging GPU acceleration for rendering, with smooth interaction for scatter plots, line charts, and other visualizations.
Interactive by Default
Zoom, pan, rotate — all built-in and responsive. Level-of-detail rendering means performance stays smooth even when you're exploring huge datasets. Because the best insights often come from interactive exploration.
Packages: extend the runtime without bloating the core
RunMat keeps the core minimal. New functions, types, and accelerators ship as packages — native (Rust) for maximum performance, or source (MATLAB) for portability.
Lightning-Fast REPL
No more waiting around. The interactive shell starts instantly, remembers your variables and functions within your session, and provides fast syntax error detection with clear, helpful error messages.
First-class Jupyter integration
Built for the notebook era. Install RunMat as a Jupyter kernel with one command, and enjoy the same performance in your favorite notebook environment. Rich display support for plots, matrices, and data structures.
Modern cloud & container deployment
Single static binary with zero dependencies makes deployment trivial. Run MATLAB code in Docker containers, Kubernetes clusters, or cloud functions. No complex runtime environments, no licensing servers — just copy the binary and go.
What's Next: The Future is Bright
We're just getting started. Here's what's coming to make scientific computing even better:
Cross-Platform Deployment: Run Anywhere
Compile your MATLAB code to run on any platform — from embedded microcontrollers and edge devices to web browsers via WebAssembly, mobile devices, and cloud infrastructure. Write once in MATLAB, deploy everywhere from IoT sensors to high-performance clusters.
GPU Compute: Massively Parallel Everything
Why limit matrix operations to your CPU? Direct CUDA and ROCm integration will automatically offload heavy computations to your GPU, turning your graphics card into a scientific supercomputer.
Enterprise Freedom: Break the License Prison
No per-seat licensing, no network license servers, no vendor audits. Deploy RunMat across unlimited machines, scale teams without budget explosions, and never face astronomical renewal costs again. True computational freedom for organizations of any size.
Ready to Dive Deeper?
Explore RunMat's source code or try it yourself to see this architecture in action.