A group in Intel is working on x86 test content optimization and creation using ML techniques.
A working solution already exists for test content optimization in production mode. The next stage of the project is to create new content automatically by learning from legacy content (since x86 is backward compatible, a huge body of legacy content is available to learn from).
Test optimization refers to the compilation of a test suite that achieves the validation targets as efficiently as possible, i.e., using minimal compute, time, etc. Generally speaking, this is done by using ML methods to select the input parameters and directives for test generators.
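One common way to frame this kind of suite compilation is as a coverage-versus-cost trade-off. The sketch below is purely illustrative (the test names, events, and costs are invented, and the real flow selects generator parameters rather than whole tests): it greedily picks, at each step, the candidate test whose ratio of newly covered validation events to compute cost is highest, until the coverage target is met.

```python
# Toy model of test-suite optimization: each candidate test (a generator
# configuration) covers a set of validation events at some compute cost.
# Greedy selection repeatedly picks the test with the best ratio of
# new coverage to cost. All names and numbers here are illustrative.

def optimize_suite(tests, target_events):
    """tests: dict name -> (covered_events, cost). Returns selected names."""
    remaining = set(target_events)
    selected = []
    while remaining:
        # Candidate with the best newly-covered-events-per-cost ratio.
        best = max(tests, key=lambda t: len(tests[t][0] & remaining) / tests[t][1])
        gain = tests[best][0] & remaining
        if not gain:
            break  # no remaining test adds coverage; target unreachable
        selected.append(best)
        remaining -= gain
    return selected

tests = {
    "rand_alu":   ({"add", "sub", "mul"}, 2.0),
    "mem_stress": ({"load", "store"}, 1.0),
    "mixed":      ({"add", "load", "branch"}, 3.0),
}
suite = optimize_suite(tests, {"add", "sub", "mul", "load", "store", "branch"})
```

Greedy set cover is only a baseline; the ML-based approach described above learns which generator parameters are likely to yield high coverage instead of enumerating candidates explicitly.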
Test creation using ML methods refers to the generation of the assembly code itself. That is, we train ML models to sequentially decide which instruction (or instruction sequence) to emit next in order to meet some validation target. Here, we use deep reinforcement learning.
A working flow for test creation has already been implemented. The goal of this project is to port the ML algorithm to a different framework and compare the results with the existing implementation.
Prerequisites: Logic Design.
The students should be familiar with x86 and with ML, specifically deep learning and deep reinforcement learning. Experience with Python and the TensorFlow package is a strong advantage.
Supervisor: Zohar Feldman and Dorit Ben-Aroya (Intel)