Patrick Schmidt, M.Sc.
- Group: Prof. Becker
- Room: 226, CS 30.10
- Phone: +49 721 608-41315
- patrick.schmidt2∂kit.edu
- Engesserstraße 5, 76131 Karlsruhe
High-Level Synthesis for AI Accelerators
To reduce the design time of AI accelerators, the field of High-Level Synthesis has gained traction. It moves the design entry from traditional HDLs to a more abstract level, such as SystemC or C++, and enables the designer to rapidly evaluate different architectures. Since AI accelerators are very data-path-heavy, they can easily be described algorithmically and are therefore a good fit for this style of modelling. With these methods, designers can focus on the architecture, while low-level details such as pipelining and interfaces are handled by the tools.
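As an illustration, the sketch below shows what such an abstract kernel description might look like: a plain C++ dot-product loop whose pipelining is left to the tool. The pragma syntax follows AMD/Xilinx Vitis HLS as one possible example; other HLS tools use their own directive mechanisms, and the function and parameter names are purely illustrative.

```cpp
// Minimal sketch of a synthesizable C++ kernel (not from a specific project).
// The pragma shown is Vitis HLS syntax; other tools use different directives.

#include <cstdint>

constexpr int N = 64;  // vector length, fixed at synthesis time

// Dot product of two N-element integer vectors. The HLS tool derives the
// data path; the pragma only steers how the loop is pipelined.
int32_t dot_product(const int16_t a[N], const int16_t b[N]) {
    int32_t acc = 0;
    for (int i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1   // request one multiply-accumulate per clock cycle
        acc += static_cast<int32_t>(a[i]) * b[i];
    }
    return acc;
}
```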
Compiling Neural Networks
Recent years have seen a large number of novel hardware designs that effectively accelerate different kinds of neural networks. However, the tooling to deploy these networks on the hardware has been lagging behind. To address this, dedicated compilers for neural networks are necessary. Specifically, MLIR is a promising framework for the rapid development of optimizing compiler stacks for a wide range of hardware designs: it provides much of the necessary infrastructure and enables the modelling of custom hardware operations.
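To illustrate the core idea, the toy sketch below mimics the central rewrite step of such a compiler without using the actual MLIR C++ API: generic neural-network operations in a tiny hand-rolled IR are replaced by a custom accelerator operation. All names (Op, rewriteToAccelerator, "accel.matmul") are hypothetical stand-ins for what a real MLIR dialect and rewrite pattern would provide.

```cpp
// Toy illustration of lowering generic ops onto custom accelerator ops.
// Deliberately not the real MLIR API; all names are made up for this sketch.

#include <iostream>
#include <string>
#include <vector>

// A grossly simplified IR node: operation name plus operand names.
struct Op {
    std::string name;                   // e.g. "nn.matmul"
    std::vector<std::string> operands;  // e.g. {"%w0", "%x"}
};

// Replace generic matmul ops with the accelerator's custom operation,
// analogous to an MLIR rewrite pattern lowering into a hardware dialect.
void rewriteToAccelerator(std::vector<Op>& program) {
    for (Op& op : program) {
        if (op.name == "nn.matmul") {
            op.name = "accel.matmul";   // map onto the custom hardware op
        }
    }
}

int main() {
    std::vector<Op> program = {
        {"nn.matmul", {"%w0", "%x"}},
        {"nn.relu",   {"%0"}},
    };
    rewriteToAccelerator(program);
    for (const Op& op : program)
        std::cout << op.name << '\n';   // prints "accel.matmul", then "nn.relu"
}
```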
System-Level Design Evaluation
The most critical aspect of AI accelerators is not the available compute power, but the bandwidth needed to feed the compute engines with sufficient data. Enabling a full-stack evaluation of a compute platform is therefore a crucial task. To support this, Architecture Description Languages can be used to provide an abstract model of the system and to generate a simulation platform. Coupled with a compiler, this provides a powerful tool for system analysis and evaluation.
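A simple way to make this compute-versus-bandwidth trade-off concrete is a roofline-style estimate. The sketch below is a generic back-of-the-envelope calculation with made-up platform and layer parameters; it is not a tool or result from the group.

```cpp
// Roofline-style estimate: is a layer compute-bound or bandwidth-bound?
// All numbers are illustrative placeholders, not measurements of any
// real accelerator or platform.

#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical platform parameters.
    const double peak_gops     = 2000.0;  // peak compute throughput, GOP/s
    const double bandwidth_gbs = 25.0;    // external memory bandwidth, GB/s

    // Hypothetical layer: operations performed per byte moved from memory
    // (its operational intensity).
    const double ops_per_byte = 8.0;

    // Attainable throughput is capped either by the compute engines or by
    // how fast data can be streamed in from memory.
    const double attainable_gops =
        std::min(peak_gops, bandwidth_gbs * ops_per_byte);

    std::printf("attainable: %.1f GOP/s (%s-bound)\n",
                attainable_gops,
                attainable_gops < peak_gops ? "bandwidth" : "compute");
}
```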
| Title | Type |
|---|---|
| Compiler-Based Integration of Neural Network Accelerators | Master's thesis |
| Concept and development of high-performance hardware accelerators for neural networks | Bachelor's/Master's thesis |
| Parallel result validation of AI accelerators using most neuron activation monitor | Master's thesis |