Systems Engineering | Page 3 | Kisaco Research

Systems Engineering


Deep neural networks (DNNs), a subset of machine learning (ML), provide a foundation for automating conversational artificial intelligence (CAI) applications. FPGAs provide the hardware acceleration that enables high-density, low-latency CAI. In this presentation, we will give an overview of CAI and its data center use cases, describe the traditional compute model and its limitations, and show how an ML compute engine integrated into the Achronix FPGA can reduce speech transcription costs by 90%.

 

Enterprise AI
NLP
Novel AI Hardware
ML at Scale
Data Science
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Salvador Alvarez

Senior Manager, Product Planning
Achronix
  • Salvador Alvarez is the Senior Manager of Product Planning at Achronix, coordinating the research, development, and launch of new Achronix products and solutions. With over 20 years of experience in product growth, roadmap development, and competitive intelligence and analysis in the semiconductor, automotive, and edge AI industries, Sal Alvarez is a recognized expert in helping customers realize the advantages of edge AI and deep learning technology over legacy cloud AI approaches. Sal holds a B.S. in computer science and electrical engineering from the Massachusetts Institute of Technology.


As AI makes its way into healthcare and medical applications, the role of hardware accelerators in the successful deployment of large AI models becomes more and more important. Large language models such as GPT-3 and T5 offer unprecedented opportunities to solve challenging healthcare business problems like drug discovery, medical term mapping, and insight generation from electronic health records. However, efficient and cost-effective training, as well as deployment and maintenance of such models in production, remains a challenge for the healthcare industry. This presentation will review open challenges and opportunities in the healthcare industry and the benefits that AI hardware innovation may bring to ML utilization.

Developer Efficiency
Enterprise AI
ML at Scale
NLP
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Hooman Sedghamiz

Senior Director of AI & ML
Bayer

Hooman Sedghamiz is Senior Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products ranging from implantable, wearable, and imaging devices to bioinformatics and pharmaceutical products for a variety of multinational medical companies.

He has led projects and data science teams and developed algorithms for closed-loop active medical implants (e.g., pacemakers, cochlear and retinal implants), as well as advanced computational biology to study the time evolution of cellular networks associated with cancer, depression, and other illnesses.

His experience in healthcare also extends to image processing for computed tomography (CT) and interventional X-ray (iX-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG, and ACC.

Recently, his team has been working on cutting-edge natural language processing, developing models to address healthcare challenges involving textual data.


One of the biggest challenges in the US is managing the cost of healthcare. Although healthcare costs in the US are high, our life expectancy is still only average. In this talk we will look at some of the core causes of healthcare costs and what modern AI hardware can do to lower them. We will see that faster and bigger GPUs alone will not save us. We need detailed models across a wide swath of our communities so we can perform early interventions. We need accurate models of our world and the ability to simulate the impact of policy changes on overall healthcare costs. We need new MIMD hardware with cores and a memory architecture that keep those cores fed with the right data.

Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Systems Engineering

Author:

Dan McCreary

Distinguished Engineer, Graph & AI
Optum

Dan is a distinguished engineer in AI working on innovative database architectures including document and graph databases. He has a strong background in semantics, ontologies, NLP, and search. He is a hands-on architect and likes to build his own pilot applications using new technologies. Dan started the NoSQL Now! Conference (now called the Database Now! Conferences). He also co-authored the book Making Sense of NoSQL, one of the highest-rated books on Amazon on the topic of NoSQL. Dan worked at Bell Labs as a VLSI circuit designer, where he worked with Brian Kernighan (of K&R C). Dan also worked with Steve Jobs at NeXT Computer.


Chip Design
Edge AI
Enterprise AI
Novel AI Hardware
Systems Design
Hardware Engineering
Software Engineering
Systems Engineering

Author:

Harshit Khaitan

Director, AI Accelerators
Meta

Harshit Khaitan is the Director of AI Accelerators at Meta, where he leads the development of AI accelerators for Reality Labs products. Prior to Meta, he was technical lead and co-founder for the edge machine learning accelerators at Google, responsible for the MLA in Google Pixel 4 (Neural Core) and Pixel 6 (Google Tensor SoC). He has also held individual-contributor and technical leadership positions on Google’s first Cloud TPU, Nvidia Tegra SoCs, and Nvidia GPUs. He holds 10+ US and international patents in on-device AI acceleration. He has a Master’s degree in Computer Engineering from North Carolina State University and a Bachelor’s degree in Electrical Engineering from Manipal Institute of Technology, India.


Chip Design
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Nitza Basoco

VP, Business Development
proteanTecs

Nitza Basoco is a technology leader with over 20 years of semiconductor experience. At proteanTecs, she leads the Business Development team, responsible for driving partnership strategies and building value-add ecosystem growth. 

Previously, Nitza was the VP of Operations at Synaptics with responsibility for growing and scaling their worldwide test development, product engineering and manufacturing departments. Prior to Synaptics, Nitza spent a decade holding various leadership positions within the operations organization at MaxLinear, ranging from test development engineering to supply chain. Earlier in her career, Nitza served as a Principal Test Development Engineer for Broadcom Corporation and as a Broadband Applications Engineer at Teradyne.  

Nitza holds MEng and BSEE degrees from Massachusetts Institute of Technology.


Author:

Judy Priest

Distinguished Engineer & VP, GM
Microsoft

Judy Priest is a Distinguished Engineer in Microsoft's Cloud and AI Group. She drives innovation, integration, and operations in next-generation data center platforms supporting Azure, AI, and Microsoft's enterprise software. Judy has over 25 years of experience in developing data center systems and silicon, high-speed signaling technologies and optics, circuit design, and physical architectures for compute, storage, graphics, and networking.

Judy has previously worked at Cisco Systems, Silicon Graphics, Hewlett-Packard, and Digital Equipment Corporation, as well as two startup ventures. She serves on the Board of Directors for Women's Audio Mission, an SF-based nonprofit moving the needle for girls, women, and GNC individuals in STEM through music. Judy was named to Business Insider's 2018 list of Most Powerful Female Engineers and InterCon Networking's 2020 Top 100 Leaders in Engineering.

 


Author:

Shivam Bharuka

Software Production Engineer
Meta

Shivam is an engineering leader at Meta, where he has been part of the AI Infrastructure team for the last three years. During this time, he has helped scale the machine learning training infrastructure at Meta to support large-scale ranking and recommendation models serving more than a billion users. He is responsible for driving performance, reliability, and efficiency-oriented designs across the components of the ML training stack at Meta. Shivam holds a B.S. and an M.S. in Computer Engineering from the University of Illinois at Urbana-Champaign.


Author:

Jim von Bergen

Senior Director, Product Quality Engineering
Cisco

Enterprise AI
ML at Scale
Systems Design
Data Science
Software Engineering
Strategy
Systems Engineering

Author:

Daniel Wu

Strategic AI Leadership | Keynote Speaker | Educator | Entrepreneur | Course Facilitator
Stanford University AI Professional Program

Daniel Wu is an accomplished technical leader with over 20 years of expertise in software engineering, AI/ML, and team development. With a diverse career spanning technology, education, finance, and healthcare, he is credited for establishing high-performing AI teams, pioneering point-of-care expert systems, co-founding a successful online personal finance marketplace, and leading the development of an innovative online real estate brokerage platform. Passionate about technology democratization and ethical AI practices, Daniel actively promotes these principles through involvement in computer science and AI/ML education programs. A sought-after speaker, he shares insights and experiences at international conferences and corporate events. Daniel holds a computer science degree from Stanford University.


Edge AI
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Data Science
Software Engineering
Systems Engineering

Author:

Girish Venkataramani

Senior Director, ML Accelerators
Cruise


Author:

Ravi Narayanaswami

Principal MLA Architect
Cruise


Author:

Prasun Raha

Head of HW Platform Architecture
Rivian


Author:

Nikunj Kotecha

Solutions Architect
BrainChip

Nikunj Kotecha is a Machine Learning Solutions Architect at BrainChip Inc. Currently, he works on developing and optimizing machine learning algorithms for the Akida™ neuromorphic hardware. He also demonstrates the capabilities of Akida™ to clients and supports them with their neuromorphic solutions for Akida™. He holds a Master of Science in Computer Science, where he specialized in artificial intelligence and deep learning algorithms. During that time, he was part of the Machine Learning lab, published technical papers, and supported research into different avenues of AI. He published research on cross-modal fusion with Transformer architectures for sign language translation during his Master's. He has also worked at Oracle, where he built and integrated machine learning solutions to provide operational benefits to users of Oracle Clinical Trial software.


Author:

David Tai

Senior Staff Engineer
DiDi Autonomous Driving


The relentless growth in the size and sophistication of AI models and data sets continues to put pressure on every aspect of AI processing systems. Advances in domain-specific architectures and hardware/software co-design have resulted in enormous increases in AI processing performance, but the industry needs even more. Memory systems and interconnects that supply data to AI processors will continue to be of critical importance, requiring additional innovation to meet the needs of future processors. Join Rambus Fellow and Distinguished Inventor, Dr. Steven Woo, as he leads a panel of technology experts in discussing the importance of improving memory and interfaces and enabling new system architectures, in the quest for greater AI/ML performance.
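The panel's central premise – that the memory system, not raw compute, often bounds delivered AI performance – can be illustrated with a rough roofline-style estimate. The accelerator figures below (100 TFLOP/s peak, 1 TB/s memory bandwidth) are hypothetical and not taken from the talk:

```python
# Roofline-style sketch: is a matrix multiply (M x K) @ (K x N) limited
# by compute or by memory bandwidth?  Hardware numbers are illustrative.

def attainable_tflops(m, k, n, peak_tflops, mem_bw_tbps, bytes_per_elem=2):
    """Attainable throughput (TFLOP/s) under a simple roofline model."""
    flops = 2 * m * k * n                                   # multiply-accumulate work
    bytes_moved = (m * k + k * n + m * n) * bytes_per_elem  # read A and B, write C
    intensity = flops / bytes_moved                         # FLOPs per byte
    return min(peak_tflops, intensity * mem_bw_tbps)

# Hypothetical accelerator: 100 TFLOP/s peak, 1 TB/s bandwidth, fp16 operands.
batch1 = attainable_tflops(1, 4096, 4096, 100, 1)      # batch-1 inference layer
batched = attainable_tflops(4096, 4096, 4096, 100, 1)  # large batched GEMM
print(f"batch-1: {batch1:.2f} TFLOP/s, batched: {batched:.2f} TFLOP/s")
```

Under these assumed numbers, the batch-1 case achieves roughly 1 TFLOP/s of the 100 TFLOP/s peak because it is starved for data, while the large batched case reaches the compute roof – which is why memory and interconnect innovation remains critical as models grow.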

Chip Design
Enterprise AI
ML at Scale
Novel AI Hardware
Systems Design
Hardware Engineering
Strategy
Systems Engineering

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

One example is RDRAM®, our first mainstream memory architecture, implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College


Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and a Ph.D. degree from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent interests are memory and storage system architectures using new memory media and new memory solutions such as CXL memory and processing-in-memory (PIM). In particular, he is proposing a new PIM-based computing architecture, more efficient and flexible than existing AI accelerators, for processing generative AI and large language models (LLMs).


Author:

Sumti Jairath

Chief Architect
SambaNova Systems

Sumti Jairath is Chief Architect at SambaNova Systems, with expertise in hardware-software co-design. Sumti worked on PA-RISC-based Superdome servers at HP, followed by several generations of SPARC CMT processors at Sun Microsystems and Oracle. At Oracle, Sumti worked on SQL, data analytics, and machine learning acceleration in SPARC processors. Sumti holds 27 patents in computer architecture and hardware-software co-design.


Author:

Matt Fyles

SVP, Software
Graphcore

Matt Fyles is a computer scientist with over 20 years of proven experience in the design, delivery, and support of software and hardware within the microprocessor market. As SVP Software at Graphcore, Matt has built the company’s Poplar software stack from scratch, co-designed with the IPU for machine intelligence. He currently oversees the Software team’s work on the Poplar SDK, helping to support Graphcore’s growing community of developers.
