TRACK A: Infra & Systems: Frontiers in Memory Innovation - CXL, HBM and More | Kisaco Research

The relentless growth in the size and sophistication of AI models and data sets continues to put pressure on every aspect of AI processing systems. Advances in domain-specific architectures and hardware/software co-design have resulted in enormous increases in AI processing performance, but the industry needs even more. Memory systems and interconnects that supply data to AI processors will continue to be of critical importance, requiring additional innovation to meet the needs of future processors. Join Rambus Fellow and Distinguished Inventor Dr. Steven Woo as he leads a panel of technology experts discussing the importance of improving memory and interfaces, and of enabling new system architectures, in the quest for greater AI/ML performance.

Session Topics: 
Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Sponsor(s): 
Rambus
Speaker(s): 

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting-edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create, and develop ways of driving and extending performance in both hardware and software. At Rambus, we are solving challenges that are completely new to the industry, arising from deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and delivered state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing becomes more and more pervasive in our everyday lives. In a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach, and create solutions that are technologically superior and seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received the B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and the Ph.D. degree from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, where he led the architecture of most Exynos mobile SoCs. His recent research interests are memory and storage system architectures based on new media memory, and new memory solutions such as CXL memory and Processing-in-Memory (PIM). In particular, he is proposing a new PIM-based computing architecture, more efficient and flexible than existing AI accelerators, for processing generative AI and the large language models (LLMs) that are currently causing a sensation.

Author:

Sumti Jairath

Chief Architect
SambaNova Systems

Sumti Jairath is Chief Architect at SambaNova Systems, with expertise in hardware-software co-design. Sumti worked on PA-RISC-based Superdome servers at HP, followed by several generations of SPARC CMT processors at Sun Microsystems and Oracle. At Oracle, Sumti worked on SQL, data analytics, and machine learning acceleration in SPARC processors. Sumti holds 27 patents in computer architecture and hardware-software co-design.

Author:

Matt Fyles

SVP, Software
Graphcore

Matt Fyles is a computer scientist with over 20 years of proven experience in the design, delivery, and support of software and hardware in the microprocessor market. As SVP Software at Graphcore, Matt built the company’s Poplar software stack from scratch, co-designed with the IPU for machine intelligence. He currently oversees the Software team’s work on the Poplar SDK, helping to support Graphcore’s growing community of developers.
