In-Memory Computing SoC with Multi-level RRAM to Accelerate AI Inference

TetraMem will introduce its multi-level RRAM cell for in-memory computing. The talk will explain how TetraMem uses multi-level RRAM to accelerate neural network inference applications. The speaker will demonstrate how TetraMem leverages its unique technology and expertise to improve the precision, accuracy, and energy efficiency of AI applications, including methods for improving cell performance recently published in Nature and Science.
Speaker: Wenbo Yin, TetraMem Inc.
Attend in person or online (see weblink)
Wednesday, 07/16/25
Cost: Free
