Sparse linear algebra is a frequent bottleneck in machine learning and data mining workloads. The efficient acceleration of sparse matrix calculations becomes even more critical when applied to big data problems.

The goal is to implement an accelerator for multiplying a sparse matrix by a sparse vector. Current solutions fetch all non-zero elements of the sparse matrix from memory. This project aims to implement a technique in which only those non-zero matrix elements that are matched by a non-zero element of the sparse vector are fetched from memory, reducing memory accesses by three to four orders of magnitude. The architecture that implements this technique is described in the paper below.
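The fetch-on-match idea can be illustrated in software. The following sketch is an assumption for illustration only (it is a plain Python model, not the paper's hardware design): the matrix is stored in CSC form so that each non-zero vector entry selects exactly one matrix column, and only the non-zeros of the selected columns are ever touched.

```python
def spmspv_csc(col_ptr, row_idx, values, x_nz):
    """Compute y = A @ x where A is CSC-encoded and x is sparse.

    x_nz maps a column index to its non-zero value, so the loop below
    visits only matrix columns matched by a non-zero vector element --
    all other matrix non-zeros are never fetched.
    """
    y = {}
    for j, xj in x_nz.items():
        # Fetch only the non-zeros of column j of the matrix.
        for k in range(col_ptr[j], col_ptr[j + 1]):
            i = row_idx[k]
            y[i] = y.get(i, 0) + values[k] * xj
    return y


# Example: A = [[1, 0], [0, 2]] in CSC form, x with a single non-zero x[1] = 3.
# Only column 1 of A is fetched; column 0 is skipped entirely.
print(spmspv_csc([0, 1, 2], [0, 1], [1, 2], {1: 3}))  # {1: 6}
```

In a hardware accelerator the same matching would be done before issuing memory requests, which is the source of the memory-access savings described above.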
In this project, we will design a stand-alone sparse matrix multiplication accelerator and analyze its performance and energy consumption. This is a research project venturing into a new field of study, and it may lead to further research and scientific publications.
What will we do and learn in the project?
- Learn digital VLSI design tools and flow
- Design a novel accelerator for machine learning
Requirements
- Logic design course
- Desire to innovate and try new things
- Ability to work independently
The project is based on the paper “Accelerator for Sparse Machine Learning” by Leonid Yavits and Ran Ginosar.
Prerequisites: Logic Design (044262)