Products
Oenerga's product portfolio is built around a single principle: infrastructure advantage comes from reducing the cost of state movement, memory pressure, and communication at system scale. LUCID delivers that advantage now. AURORA-M extends it into a full replacement architecture.
LUCID and AURORA-M are not variations of the same chip. They are two expressions of the same architectural direction. One is a deployment wedge designed to win where GPU-centric systems become memory-bound and communication-heavy. The other is a full platform architecture for the era after processor-centric AI infrastructure.
LUCID is Oenerga's near-term deployment platform for long-context inference, KV-heavy serving, and transformer workloads where memory movement and scale-up overhead increasingly dominate total cost. It combines dense digital tensor execution with compute-in-memory attention and KV handling, then extends efficiently through an optical scale-up fabric designed for infrastructure-grade deployment.
Why LUCID exists
Conventional GPU clusters are powerful, but their economics deteriorate in the regimes buyers increasingly care about: persistent context, multi-session serving, large active state, and communication-heavy scaling. LUCID exists to create measurable advantage in exactly those regimes.
What problem it solves
LUCID reduces the tax imposed by moving historical state. Instead of treating KV-cache transfers and inter-chip communication as unavoidable overhead, it treats them as explicit design targets. That changes the rack-level outcome.
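To see why KV-cache state dominates at long context, a back-of-envelope sizing helps. The sketch below uses the standard transformer KV-cache formula; the model dimensions are hypothetical placeholders, not Oenerga or LUCID specifications.

```python
# Back-of-envelope KV-cache sizing for a transformer decoder.
# Two tensors (K and V) are cached per layer, per token.
# All model parameters below are hypothetical, for illustration only.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Total bytes of KV-cache held for a batch of active sessions."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# Example: a 70B-class dense model with grouped-query attention,
# eight concurrent 128k-token sessions, fp16/bf16 storage.
size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=128_000, batch=8, dtype_bytes=2)
print(f"{size / 1e9:.1f} GB")  # → 335.5 GB of state that must live somewhere
```

Hundreds of gigabytes of per-session state is exactly the regime where moving it, rather than computing near it, becomes the dominant cost.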
How LUCID works
LUCID's four components (dense tensor execution, CIM attention and KV handling, the optical scale-up fabric, and chiplet integration) are each designed for the others. The integration across all four is what creates the performance outcome.
Why LUCID matters commercially
LUCID is the deployment-oriented product in Oenerga's portfolio. It is designed to create practical economic advantage without asking customers to wait for a complete infrastructure reset. It gives buyers a path to pilot, validate, and deploy memory-native architecture in the workloads where conventional systems already show stress.
Ideal workloads
Ideal buyers
Pilot path
LUCID is intended as the first operational step into Oenerga's architecture. It enables customer-specific evaluation under real serving conditions, with a benchmark methodology focused on wall power, throughput per rack, and memory-heavy workload relevance.
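The benchmark methodology above centers on wall power, throughput per rack, and energy per token. A minimal sketch of how those rack-level metrics relate is below; all figures are hypothetical placeholders, not measured LUCID results.

```python
# Illustrative rack-level benchmark arithmetic for pilot evaluation:
# energy per token follows directly from wall power and throughput.
# All numbers are hypothetical, not vendor measurements.

def energy_per_token_joules(wall_power_watts, tokens_per_second):
    """Joules consumed per generated token at steady state."""
    return wall_power_watts / tokens_per_second

rack_power_w = 40_000        # hypothetical rack wall power draw
rack_tokens_per_s = 200_000  # hypothetical aggregate decode throughput
ept = energy_per_token_joules(rack_power_w, rack_tokens_per_s)
print(f"{ept:.2f} J/token")  # → 0.20 J/token under these assumptions
```

Because both inputs are directly measurable at the rack, this metric supports the auditable, like-for-like comparison against incumbent systems that the pilot path calls for.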
AURORA-M is Oenerga's state-native architecture for the post-GPU era. It is designed around a simple but consequential shift: in modern AI infrastructure, moving state has become expensive enough that the architecture itself must be redesigned around memory locality, communication efficiency, and workload persistence.
Why full replacement is necessary
Incremental improvements to processor-centric systems can extend useful life, but they do not remove the underlying tax of memory movement and communication overhead in state-heavy workloads. AURORA-M was built to move beyond that model.
State-native execution
AURORA-M treats active model state as a primary architectural object, not a secondary consequence of processor execution. It is designed so that the system computes where important state lives, reduces unnecessary transfers, and scales through a communication model chosen for infrastructure reality rather than legacy habit.
Core architectural blocks
The blocks are co-designed. Their combined behavior is the product's economic advantage.
Why AURORA-M changes economics
AURORA-M is not meant to be a slightly different accelerator card. It is designed to alter the cost structure of AI infrastructure by attacking the memory and communication terms that increasingly dominate deployment economics. That creates value at rack level, cluster level, and ultimately at the infrastructure strategy level.
Who it is built for
Why AURORA-M is strategic
AURORA-M represents a platform position, not a component feature. For buyers and partners thinking in years rather than quarters, it offers a path toward AI infrastructure that is structurally better aligned with long-context serving, persistent state, and scale-up realities.
Engage on AURORA-M
AURORA-M is designed for organizations that are planning infrastructure at a multi-year horizon and want to understand the economics, architecture, and transition path of a genuinely post-GPU AI infrastructure platform.
Two products. Same architectural direction. Different deployment stages and strategic purposes.
| Dimension | LUCID — Memory-Native Supernode | AURORA-M — Full GPU-Replacement Platform |
|---|---|---|
| Architecture model | Memory-native compute wedge. Dense tensor execution combined with CIM attention and KV handling, optical scale-up, chiplet integration. | Full state-native platform. Active model state is the primary architectural object. All four blocks co-designed for memory locality and communication efficiency. |
| State handling | CIM-augmented. CIM chiplets handle attention and KV state; the dense tensor plane handles arithmetic. State movement reduced for transformer hotspots. | State-native. Entire execution model built around state locality. Active state treated as a first-class architectural object, not a consequence of processor execution. |
| Communication design | Optical scale-up fabric improves scale-up efficiency where electrical communication becomes expensive in power and complexity. | Optical memory fabric — communication and memory interaction co-designed for rack-scale and multi-node efficiency without exponential overhead. |
| Long-context efficiency | Primary strength. CIM-KV handling eliminates the transfer overhead that bounds GPU cluster performance at long context. | Structural advantage. Memory locality across the entire execution model makes long-context performance a structural property of the architecture. |
| Rack economics | Measurable improvement in energy per token and throughput per rack for memory-bound workloads. Designed for auditable comparison against incumbent systems. | Designed to alter the cost structure of AI infrastructure — addressing memory movement and communication terms that increasingly dominate deployment economics. |
| Deployment style | Near-term deployment. Designed for a pilot, validate, and deploy workflow. Infrastructure-grade integration without a complete architectural reset. | Strategic transition. Platform for organizations planning multi-year infrastructure roadmaps. Full replacement architecture for the post-GPU era. |
| Strategic relevance | Deployment wedge into memory-native architecture. Entry point for Oenerga's broader platform. Demonstrates economic advantage in production workloads. | Platform position. Relevant to infrastructure strategy, roadmap ownership, and strategic technology acquisition — not only near-term procurement. |
| Defensibility | Integration of CIM execution, optical scale-up, chiplet architecture, and workload mapping software. Each layer valuable; integration creates compounding moat. | Architecture + packaging + execution model + communication design + system software acting together. Five compounding layers of technical defensibility. |
Oenerga can support pilot evaluation, architecture review, and confidential executive briefing for qualified organizations.