How the Landscape of Memory Is Evolving With CXL

Author: YG | Posted: 2025-11-06 16:56 (edited: 2025-11-06 16:56) | Email: jarredspell@gmail.com

As datasets grow from megabytes to terabytes to petabytes, the cost of moving data from block storage devices across interconnects into system memory, performing computation and then storing the massive dataset back to persistent storage is rising in terms of time and power (watts). Additionally, heterogeneous computing hardware increasingly needs access to the same datasets. For example, a general-purpose CPU may be used for assembling and preprocessing a dataset and scheduling tasks, but a specialized compute engine (like a GPU) is much faster at training an AI model. A more efficient solution is needed that reduces the transfer of large datasets from storage to processor-accessible memory. Several organizations have pushed the industry toward solutions to these problems by keeping the datasets in large, byte-addressable, sharable memory. In the 1990s, the scalable coherent interface (SCI) allowed multiple CPUs to access memory in a coherent way within a system. The heterogeneous system architecture (HSA)1 specification allowed memory sharing between devices of different types on the same bus.


In the decade starting in 2010, the Gen-Z standard delivered a memory-semantic bus protocol with high bandwidth, low latency and coherency. These efforts culminated in the widely adopted Compute Express Link (CXL™) standard in use today. Since the formation of the Compute Express Link (CXL) consortium, Micron has been and remains an active contributor. Compute Express Link opens the door for saving time and energy. The new CXL 3.1 standard allows byte-addressable, load-store-accessible memory like DRAM to be shared between different hosts over a low-latency, high-bandwidth interface using industry-standard components. This sharing opens doors previously only possible with expensive, proprietary equipment. With shared memory systems, data can be loaded into shared memory once and then processed multiple times by multiple hosts and accelerators in a pipeline, without incurring the costs of copying data to local memory, block storage protocols and their latency. Moreover, some network data transfers can be eliminated.
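To a host, CXL-attached memory typically appears as an ordinary physical memory range (or a DAX device) that software accesses with plain loads and stores rather than block I/O. As a rough, portable sketch of that load/store model — using a temporary file as a stand-in for the shared region, since the real device path is system-specific — byte-addressable access via `mmap` looks like:

```python
import mmap
import os

# Stand-in for a CXL shared-memory region. On a real system this could be a
# kernel-exposed DAX device; here an ordinary temp file lets the sketch run
# anywhere. The path and size are illustrative assumptions.
PATH = "/tmp/cxl_region_demo"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map the region: after this point every byte is directly load/store
# addressable -- no block-storage read/write calls are involved.
region = mmap.mmap(fd, SIZE)
region[0:5] = b"hello"          # store
assert region[0:5] == b"hello"  # load

region.close()
os.close(fd)
os.remove(PATH)
```

The point of the sketch is the access model, not the transport: once the region is mapped, software addresses it exactly like DRAM.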



For example, data can be ingested and stored in shared memory over time by a host connected to a sensor array. Once the data is resident in memory, a second host optimized for the purpose can clean and preprocess it, followed by a third host that processes it. Meanwhile, the first host has been ingesting a second dataset. The only data that needs to be passed between the hosts is a message pointing to the data to indicate it is ready for processing. The large dataset never has to move or be copied, saving bandwidth, energy and memory space. Another example of zero-copy data sharing is a producer-consumer data model where a single host is responsible for collecting data in memory, and multiple other hosts consume the data after it's written. As before, the producer simply needs to send a message pointing to the address of the data, signaling the other hosts that it's ready for consumption.
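The producer-consumer pattern above can be sketched on a single machine with Python's `multiprocessing.shared_memory`: the producer writes the dataset into a shared segment once, and the only thing it sends to consumers is a small descriptor (segment name and length). The `produce`/`consume` helpers and the descriptor format are illustrative assumptions, not a CXL API.

```python
from multiprocessing import shared_memory

def produce(data: bytes):
    """Write the dataset into shared memory once; return a small descriptor
    (the 'message') plus the owning segment handle."""
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[:len(data)] = data
    return {"name": shm.name, "length": len(data)}, shm

def consume(descriptor):
    """Attach to the segment by name -- the dataset itself is never resent.
    (We copy bytes out here only to return a value; a real consumer would
    operate on the buffer in place.)"""
    shm = shared_memory.SharedMemory(name=descriptor["name"])
    view = bytes(shm.buf[:descriptor["length"]])
    shm.close()
    return view

msg, owner = produce(b"sensor-frame-0001")
# Only `msg` (a few dozen bytes) crosses between producer and consumers.
assert consume(msg) == b"sensor-frame-0001"
owner.close()
owner.unlink()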



Zero-copy data sharing can be further enhanced by CXL memory modules with built-in processing capabilities. For example, if a CXL memory module can perform a repetitive mathematical operation or data transformation on a data object entirely within the module, system bandwidth and power can be saved. These savings are achieved by commanding the memory module to execute the operation without the data ever leaving the module, using a capability referred to as near-memory compute (NMC). Moreover, the low-latency CXL fabric can be leveraged to send messages with low overhead very quickly from one host to another, between hosts and memory modules, or between memory modules. These connections can be used to synchronize steps and share pointers between producers and consumers. Beyond NMC and communication benefits, advanced memory telemetry can be added to CXL modules to offer a new window into real-world application traffic within the shared devices2 without burdening the host processors.
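There is no standard NMC command set to show here, so the following is purely an analogy for the "operate where the data lives" idea: it contrasts the copy-out/compute/copy-back pattern NMC replaces with an in-place transform on a buffer standing in for the module's memory. Both functions and the XOR transform are invented for illustration.

```python
import mmap

def transform_with_copies(buf: mmap.mmap, n: int) -> None:
    """Conventional path: copy data to host memory, compute, copy back."""
    local = bytes(buf[:n])                    # transfer to "host memory"
    result = bytes(b ^ 0xFF for b in local)   # compute on the host
    buf[:n] = result                          # transfer back

def transform_in_place(buf: mmap.mmap, n: int) -> None:
    """NMC-style path: the transform runs where the data lives; only the
    command would cross the fabric, never the data object itself."""
    for i in range(n):
        buf[i] ^= 0xFF

buf = mmap.mmap(-1, 16)  # anonymous mapping standing in for module memory
buf[:4] = b"\x00\x0f\xf0\xff"
transform_in_place(buf, 4)
assert buf[:4] == b"\xff\xf0\x0f\x00"
```

In the analogy, the bandwidth saving is the two bulk transfers that `transform_with_copies` performs and `transform_in_place` avoids.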



With the insights gained, operating systems and management software can optimize data placement (memory tiering) and tune other system parameters to meet operating targets, from performance to energy consumption. More memory-intensive, value-add functions such as transactions are also well suited to NMC. Micron is excited to integrate large, scale-out CXL global shared memory and enhanced memory features into our memory lake concept.
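A minimal sketch of telemetry-driven tiering, under assumed inputs: given per-page access counts (the kind of traffic data a telemetry-capable module could report), place frequently touched pages in the fast near tier and cold pages in the far CXL tier. The tier names, threshold and page addresses are all hypothetical.

```python
def tier_pages(access_counts: dict[int, int], hot_threshold: int = 100) -> dict[int, str]:
    """Map each page address to a tier based on its observed access count.
    'near-dram' and 'cxl-far' are placeholder tier names."""
    placement = {}
    for page, count in access_counts.items():
        placement[page] = "near-dram" if count >= hot_threshold else "cxl-far"
    return placement

# Example telemetry: page address -> accesses in the last sampling window.
telemetry = {0x1000: 950, 0x2000: 12, 0x3000: 140}
print(tier_pages(telemetry))
# {4096: 'near-dram', 8192: 'cxl-far', 12288: 'near-dram'}
```

Real tiering policies weigh recency, bandwidth and latency targets as well as raw counts, but the decision shape is the same: telemetry in, placement out.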


