SuperRAM DRAM memory expansion

Overview

The SuperRAM IP Core implements a hardware accelerator for zram/zswap compression and decompression. Compute-intensive software-based compression is offloaded from the host to the IP, delivering high compression performance at unmatched power efficiency.

Standards
  • Hardware-accelerated zram/zswap

  • Compression: LZ4, ZID (proprietary)

  • Interface: AXI4, CHI

Architecture
  • Modular architecture that scales to meet customer throughput requirements

  • Architectural configuration parameters exposed to fine-tune performance (see the sketch after this list)
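
The actual parameter set is defined in the IP's user documentation and is not listed in this brief. As a minimal sketch only, the C structure below illustrates the kind of build-time parameters a modular compression accelerator typically exposes; every name, field, and value here is a hypothetical illustration, not the SuperRAM parameter list.

    /* Hypothetical illustration only: the real SuperRAM parameter names,
     * ranges, and defaults are defined in the IP's user documentation. */
    #include <stdint.h>
    #include <stdio.h>

    struct superram_cfg {               /* hypothetical name */
        uint8_t  num_comp_engines;      /* parallel compression engines (scalability) */
        uint8_t  num_decomp_engines;    /* parallel decompression engines */
        uint16_t bus_width_bits;        /* AXI4/CHI data width: 128 or 256 */
        uint8_t  max_inflight_ops;      /* concurrent compress/decompress operations */
        uint8_t  algo_lz4;              /* 1 = LZ4 enabled */
        uint8_t  algo_zid;              /* 1 = proprietary ZID enabled */
    };

    int main(void) {
        /* Example: a mid-range configuration tuned for throughput. */
        struct superram_cfg cfg = {
            .num_comp_engines = 4, .num_decomp_engines = 4,
            .bus_width_bits = 256, .max_inflight_ops = 16,
            .algo_lz4 = 1, .algo_zid = 1,
        };
        printf("engines=%u/%u bus=%u-bit inflight=%u\n",
               (unsigned)cfg.num_comp_engines, (unsigned)cfg.num_decomp_engines,
               (unsigned)cfg.bus_width_bits, (unsigned)cfg.max_inflight_ops);
        return 0;
    }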

HDL Source Licenses
  • Synthesizable SystemVerilog RTL (encrypted)

  • Implementation constraints

  • UVM testbench (self-checking)

  • Vectors for testbench and expected results

  • User Documentation

Features
  • Turnkey solution: compression, compaction, and memory management

  • Addressing transparent to the operating system and applications

  • Operates at page granularity to enable high compression performance (see the sketch below)
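
For context, the sketch below shows what page-granularity LZ4 compression looks like when done in software with the open-source liblz4 library; it is a host-side illustration of the workload that SuperRAM off-loads, not the IP's internal implementation. It assumes liblz4 (lz4.h) is available and a 4 KiB page size.

    /* Host-side illustration of page-granularity LZ4 compression, i.e. the
     * software work that SuperRAM off-loads. Requires liblz4 (link with -llz4). */
    #include <lz4.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    int main(void) {
        char page[PAGE_SIZE];
        char packed[LZ4_COMPRESSBOUND(PAGE_SIZE)];
        char restored[PAGE_SIZE];

        /* Fill a page with repetitive data; patterned pages are common in swap. */
        memset(page, 'A', PAGE_SIZE);

        int clen = LZ4_compress_default(page, packed, PAGE_SIZE, (int)sizeof(packed));
        if (clen <= 0) { fprintf(stderr, "compression failed\n"); return 1; }

        int dlen = LZ4_decompress_safe(packed, restored, clen, PAGE_SIZE);
        if (dlen != PAGE_SIZE || memcmp(page, restored, PAGE_SIZE) != 0) {
            fprintf(stderr, "round-trip mismatch\n");
            return 1;
        }
        printf("page: %d -> %d bytes (ratio %.1fx)\n",
               PAGE_SIZE, clen, (double)PAGE_SIZE / clen);
        return 0;
    }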

Deliverables
  • Performance evaluation license: C++ compression model for integration into the customer's performance simulation model

  • FPGA evaluation license: encrypted IP delivery (Xilinx)

Applications

Smart devices: Off-loading the host processor and hardware-accelerating page swapping delivers a faster user experience at unmatched power efficiency.

Servers: Off-loading software-based compression from the host delivers more system performance at lower power. The operating system and hypervisor hand off compression of swapped pages to fast hardware page compression, returning more performance to guest workloads at unmatched power efficiency.

Integration

SuperRAM is integrated on the SoC as a hardware accelerator, acting as a master node on the SoC interconnect. The integration includes a software driver, so that the zram/zswap crypto-compress API sends a command to the SuperRAM accelerator whenever software triggers a compression or decompression.
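
The driver interface itself is not described in this brief. The C sketch below only illustrates the general shape of such an off-load path: a compress request is posted to a memory-mapped command block and completion is polled. All register names, offsets, and the in-memory stub standing in for the device are invented for illustration; a real driver would map the SuperRAM register block and follow the programming model in the user documentation.

    /* Hypothetical sketch of an off-load path from a zram/zswap-style compress
     * callback to a memory-mapped accelerator. The register layout and the
     * software stub standing in for the device are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    struct accel_regs {                 /* hypothetical register file */
        volatile uint64_t src_addr;     /* address of the source page */
        volatile uint64_t dst_addr;     /* destination buffer for compressed data */
        volatile uint32_t doorbell;     /* write 1 to start a compression */
        volatile uint32_t status;       /* 0 = busy, else compressed length */
    };

    static struct accel_regs stub_dev;  /* software stand-in for the MMIO block */

    /* Stub "hardware": the real IP compresses at DRAM speed; here we fake it. */
    static void stub_hw_step(void) {
        if (stub_dev.doorbell) {
            memcpy((void *)(uintptr_t)stub_dev.dst_addr,
                   (void *)(uintptr_t)stub_dev.src_addr, 128); /* pretend 4 KiB -> 128 B */
            stub_dev.status = 128;
            stub_dev.doorbell = 0;
        }
    }

    /* What a driver's compress hook could look like: post a command, poll completion. */
    static int accel_compress(const void *src, void *dst) {
        stub_dev.src_addr = (uint64_t)(uintptr_t)src;
        stub_dev.dst_addr = (uint64_t)(uintptr_t)dst;
        stub_dev.status   = 0;
        stub_dev.doorbell = 1;              /* software-triggered compression */
        while (stub_dev.status == 0)
            stub_hw_step();                 /* real driver: poll status or take an IRQ */
        return (int)stub_dev.status;        /* compressed length in bytes */
    }

    int main(void) {
        static char page[PAGE_SIZE], out[PAGE_SIZE];
        memset(page, 'Z', sizeof(page));
        printf("compressed length: %d bytes\n", accel_compress(page, out));
        return 0;
    }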

Benefits

High-performance, low-latency, hardware-accelerated compression at unmatched power efficiency:

  • CPU off-loading: more cycles released to user workloads

  • Power efficiency: less energy consumed

  • Speed: fast compression and low-latency access

  • Several in-flight compression and decompression operations running in parallel

  • Operates at main memory speed and throughput

  • Compatible with AXI4/CHI, with both 128-bit and 256-bit bus interfaces

  • Intelligent real-time analysis and tuning of the IP block

Performance / KPI

Feature                          Performance
Compression ratio                2-4x across diverse data sets
Compression throughput           8 GB/s
Decompression throughput         10 GB/s
Frequency                        DDR4/DDR5 DRAM speed
IP area                          Starting at 0.1 mm² (@ 5 nm TSMC)
Memory technologies supported    (LP)DDR4, (LP)DDR5, HBM
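
To put the KPI figures in context, the short sketch below works through the arithmetic: the effective capacity of a pool of compressed pages and the time budget to compress one 4 KiB page at the quoted throughput. The pool size and the 3x ratio are example values within the quoted 2-4x range, not measured figures; actual ratios are data-dependent.

    /* Illustrative arithmetic based on the quoted KPIs; ratios are data-dependent. */
    #include <stdio.h>

    int main(void) {
        double pool_gb  = 4.0;   /* DRAM set aside for compressed pages (example value) */
        double ratio    = 3.0;   /* example within the quoted 2-4x range */
        double thr_gbps = 8.0;   /* quoted compression throughput, GB/s */
        double page_b   = 4096.0;

        printf("effective capacity: %.1f GB of uncompressed pages\n", pool_gb * ratio);
        printf("time to compress one 4 KiB page: ~%.2f us\n",
               page_b / (thr_gbps * 1e9) * 1e6);
        return 0;
    }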
