Sparse linear algebra is a frequent bottleneck in machine learning and data mining workloads. The efficient acceleration of sparse matrix calculations becomes even more critical when applied to big data problems.
The goal is to implement an accelerator for multiplying a sparse matrix by a sparse vector. Current solutions fetch all non-zero elements of the sparse matrix from memory. The aim of this project is to implement a technique in which only those non-zero elements of the matrix that are matched by a non-zero element of the sparse vector are fetched from memory, reducing memory accesses by 3-4 orders of magnitude. The architecture that implements this technique is described in the paper below.
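The index-matching idea above can be sketched in software. In this minimal sketch (the data layout and function names are illustrative assumptions, not the architecture of the referenced paper), the matrix is stored column-wise and only columns whose index matches a non-zero vector entry are ever read, so matrix non-zeros in unmatched columns are never fetched:

```python
def spmspv(cols, x):
    """Sparse-matrix * sparse-vector with index-matched fetching.

    cols: {col_index: [(row_index, value), ...]} - non-zero matrix columns
    x:    {index: value}                         - non-zero vector entries
    Returns (result as {row_index: value}, number of matrix elements fetched).
    """
    y = {}
    fetched = 0
    for j, xj in x.items():              # iterate only over vector non-zeros
        for i, a_ij in cols.get(j, ()):  # fetch only the matching column
            fetched += 1
            y[i] = y.get(i, 0.0) + a_ij * xj
    return y, fetched

# Example: a 4x4 matrix with 5 non-zeros, a vector with a single non-zero.
A = {0: [(0, 1.0), (2, 2.0)], 1: [(1, 3.0)], 3: [(0, 4.0), (3, 5.0)]}
x = {1: 10.0}
y, fetched = spmspv(A, x)   # only column 1 is read: 1 of 5 non-zeros fetched
```

The sparser the vector, the larger the fraction of matrix non-zeros that are never touched, which is the source of the memory-access savings.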
In this project, we will design a stand-alone sparse matrix multiplication accelerator and analyze its performance and energy consumption. This is a research project venturing into a new area of study, which may lead to further research and scientific publications.
What will we do and learn in the project?
Learn digital VLSI design tools and flow
Design a novel accelerator for machine learning
Requirements:
Logic design course
Desire to innovate and try new things
Ability to work independently
The project is based on the paper “Accelerator for Sparse Machine Learning” by Leonid Yavits and Ran Ginosar.
Prerequisite: Digital Systems and Computer Structure – 044252
Supervisor: Dr. Leonid Yavits
For more information, please contact Goel Samuel, Room 711, Mayer Building, tel. 4668, firstname.lastname@example.org
To view the VLSI projects classified by VLSI area, see the VLSI lab site: