SK hynix has started mass-producing SOCAMM2, a next-gen memory module for AI servers. Optimized for Nvidia's Vera Rubin platform, it offers double the bandwidth and 75% more power efficiency than conventional DDR5, boosting AI performance.

SK hynix announced on Monday that it has begun mass production of SOCAMM2, a next-generation memory module developed to improve performance and power efficiency in artificial intelligence servers.


This new Small Outline Compression Attached Memory Module 2 is optimized for Nvidia's upcoming Vera Rubin platform, according to a report by The Korea Herald, signaling a deeper level of technical collaboration between the two companies.

The SOCAMM2 module integrates 192 gigabytes of memory using sixth-generation 10-nanometer LPDDR5X DRAM. While traditional server modules typically rely on standard DDR5, this specific design vertically stacks LPDDR chips to improve energy efficiency while maintaining the high performance required for modern AI workloads.

Enhanced Performance and Power Efficiency

"We expect SOCAMM2 to fundamentally address memory bottlenecks in training and inference for large language models with hundreds of billions of parameters, significantly accelerating overall system performance," the report quoted SK hynix as saying.

Data provided by the company indicated that SOCAMM2 delivers more than twice the bandwidth and over 75 per cent greater power efficiency compared with conventional DDR5 RDIMM modules, making the hardware well suited to high-performance AI workloads.

Data transfer speeds reach 9.6 gigabits per second per pin, up from the 8.5 Gbps of the previous SOCAMM1 generation. The module also features a higher number of input and output pins to raise total data throughput. These improvements are expected to reduce the total cost of ownership for hyperscale data center operators, where investment decisions depend on rack-level performance, power consumption, and cooling requirements rather than just the cost of individual components.

Strategic Design and Market Positioning

The report noted that while SOCAMM does not reach the ultra-high bandwidth levels of High Bandwidth Memory, its architecture allows for a simpler manufacturing process and higher yields, which provides a cost advantage on a per-capacity basis.

Intermediate Memory Layer

"In this hierarchy, SOCAMM serves as an intermediate layer, handling frequently accessed 'hot' data and buffering workloads between HBM and system memory to reduce bottlenecks," the report quoted an industry official.

The modular form factor represents a shift from conventional LPDDR memory, which is usually soldered directly onto boards and cannot be replaced. This new design allows for more flexibility in system maintenance and design.

Collaboration with Nvidia and Future Outlook

SK hynix worked with Nvidia to tailor the module for the Vera Rubin platform, which is scheduled for launch in the second half of the year. The company also expects to supply its next-generation HBM4 memory for the same platform.

"The 192GB SOCAMM2 sets a new benchmark for AI memory performance. We will strengthen our position as a trusted AI memory solutions provider through close collaboration with global AI customers," the report quoted Kim Ju-seon, SK hynix President and head of AI Infra. (ANI)

(Except for the headline, this story has not been edited by Asianet Newsable English staff and is published from a syndicated feed.)