* **2021.10.14** [[http://
* Meeting link: **[[https://

==== 2022-06-23 ====
<WRAP column 15%>
{{ :

</WRAP>

<WRAP column 75%>
**Speaker**: Bartosz Soból

**Title**: AI inference acceleration on FPGA


**Abstract**:
Artificial intelligence and neural networks are applied to an ever-growing range of data-processing tasks. Emerging models are deployed on many kinds of computing platforms, from edge devices through the cloud to HPC.
Modern FPGA-based accelerators and SoCs aim to meet the differing needs at all of these levels, such as high throughput, low latency, high energy efficiency, and flexibility.
New software stacks expose high-level interfaces for preparing, optimizing, and deploying models on FPGA devices, whether existing models or custom ones implemented in standard machine learning frameworks such as PyTorch or TensorFlow.
In this talk, I will present modern solutions for accelerating the inference of neural network models on FPGAs, along with examples of their use both by our group at JU and by others.
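
As a purely illustrative aside (not material from the talk): one core step such FPGA deployment stacks perform is rounding trained floating-point weights onto a fixed-point grid that maps to cheap FPGA arithmetic. The sketch below, with a hypothetical function name and example bit widths, shows the idea.

```python
# Illustrative sketch only: fixed-point weight quantization of the kind
# FPGA inference stacks apply when preparing a model for deployment.
# Function name and bit widths are hypothetical, not from any real toolkit.

def quantize_fixed_point(weights, total_bits=8, frac_bits=6):
    """Round each weight to a signed fixed-point grid with `frac_bits`
    fractional bits, saturating at the representable range."""
    scale = 1 << frac_bits                   # step size is 1/scale
    qmin = -(1 << (total_bits - 1))          # most negative integer code
    qmax = (1 << (total_bits - 1)) - 1       # most positive integer code
    quantized = []
    for w in weights:
        q = max(qmin, min(qmax, round(w * scale)))  # round, then saturate
        quantized.append(q / scale)          # value the hardware computes with
    return quantized

weights = [0.731, -0.402, 0.055, -1.9, 3.2]
print(quantize_fixed_point(weights))
# → [0.734375, -0.40625, 0.0625, -1.90625, 1.984375]  (3.2 saturates at qmax)
```

Note the trade-off the bit widths encode: more fractional bits shrink the rounding error, while more integer bits widen the saturation range; real toolflows choose these per layer.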


**Biogram**:
**Bartosz Soból** is a first-year Ph.D. student in Technical Computer Science at Jagiellonian University. He holds a BSc in Computer Mathematics and an MSc in Computer Science from Jagiellonian University.
Currently, he is a member of the PANDA (FAIR, GSI) collaboration, where he conducts research on particle-tracking algorithms and heterogeneous online processing of experimental data. His professional interests include high-performance computing, software optimization for heterogeneous systems, and CPU-GPU-FPGA interoperability.


</WRAP>
<WRAP clear></WRAP>