
Three IPs.
One Ecosystem.
From NPU IP to EDA design automation and custom SRAM services — Articron covers the complete SRAM-CIM based AI chip design stack.
NPU IP Ecosystem
Memory-Native
Intelligence
Where computation lives inside the memory — eliminating the data movement bottleneck that limits every conventional AI chip.
ART
SRAM-CIM based AI Processor IP — the most power and area efficient NPU architecture for edge AI integration.
Ultra Low Power
Leading power efficiency among edge AI accelerators
Programmability
Supports 10+ state-of-the-art DNN models
User Interface
An SDK automatically generates the IP configuration best suited to your application
Features
Specifications
Target Applications
Process Nodes
Why Processing-in-Memory?
Beyond the Von Neumann Bottleneck
Von Neumann
Separate memory & processing units
Decreased area efficiency
Constant data movement overhead
Limited power & performance
Processing-in-Memory
Integrated memory & processing units
Increased area efficiency
Minimal data movement
Improved power & performance
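The comparison above can be made concrete with a toy energy model. The per-operation energies below are illustrative assumptions (rough orders of magnitude, not Articron measurements): moving an operand across an off-chip memory bus costs far more energy than the arithmetic itself, so keeping computation inside the SRAM array shrinks the dominant term.

```python
# Toy energy model contrasting a von Neumann accelerator with
# processing-in-memory (PIM). All per-operation energies are
# illustrative assumptions, not measured Articron figures.

E_MAC = 1.0          # pJ per multiply-accumulate (arbitrary baseline)
E_DRAM_MOVE = 200.0  # pJ to move one operand from off-chip memory
E_SRAM_LOCAL = 2.0   # pJ for a local in-array SRAM access

def von_neumann_energy(num_macs: int, operands_per_mac: int = 2) -> float:
    """Every operand crosses the memory bus before it is computed on."""
    return num_macs * (E_MAC + operands_per_mac * E_DRAM_MOVE)

def pim_energy(num_macs: int, operands_per_mac: int = 2) -> float:
    """Operands stay inside the SRAM array; only local accesses remain."""
    return num_macs * (E_MAC + operands_per_mac * E_SRAM_LOCAL)

if __name__ == "__main__":
    n = 1_000_000  # MACs in one small DNN layer
    vn, pim = von_neumann_energy(n), pim_energy(n)
    print(f"von Neumann: {vn / 1e6:.0f} uJ, PIM: {pim / 1e6:.0f} uJ, "
          f"ratio: {vn / pim:.1f}x")
```

Even under this simplified model, the data-movement term dominates the von Neumann total, which is the bottleneck the architecture removes.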
SRAM IP Ecosystem
Design Automation Meets
Custom Silicon
Dalus automates SRAM design. Puzzle delivers it to your spec. Together they form the fastest path from requirement to silicon-ready SRAM IP.
Dalus
AI-Based SRAM Auto-Design Engine — generating optimized SRAM circuits from high-level specs in a fraction of traditional design time.
Design Flow
Spec → AI Engine → SRAM IP
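To illustrate what a "high-level spec" at the front of such a flow might contain, here is a hypothetical input schema. The field names and structure are assumptions for illustration only, not Dalus's actual interface.

```python
# Hypothetical SRAM spec for an auto-design engine. The schema and
# field names are illustrative assumptions, not the real Dalus input.
from dataclasses import dataclass

@dataclass
class SramSpec:
    capacity_kb: int       # total capacity in kilobytes
    word_width_bits: int   # bits read/written per access
    process_node_nm: int   # target process node
    target_freq_mhz: int   # required operating frequency

    def num_words(self) -> int:
        """Number of addressable words implied by the spec."""
        return self.capacity_kb * 1024 * 8 // self.word_width_bits

spec = SramSpec(capacity_kb=64, word_width_bits=32,
                process_node_nm=28, target_freq_mhz=800)
print(spec.num_words())  # addressable 32-bit words in 64 KB → 16384
```

A designer supplies only spec-level parameters like these; the engine derives the circuit-level details (bank partitioning, bitcell choice, periphery sizing) from them.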
Puzzle
Custom SRAM IP delivered to your exact specifications — powered by Dalus automation, qualified across leading foundries, and silicon-proven.
Supported Types
Process Nodes
Supported Foundries
