Many universities and research centers are designing complex new hardware using High-Level Synthesis. Below are some research partners and the designs they have created, built in part by leveraging libraries from HLSLIBS.org.
FlexASR GitHub
Paper: A 25mm2 SoC for IoT Devices with 18ms Noise-Robust Speech-to-Text Latency via Bayesian Speech Denoising and Attention-Based Sequence-to-Sequence DNN Speech Recognition in 16nm FinFET
CHIPKIT Talk: Closing the Algorithm/Hardware Design and Verification Loop with Speed via HLS
International Solid-State Circuits Conference Slides: A 25mm2 SoC for IoT Devices with 18ms Noise-Robust Speech-to-Text Latency via Bayesian Speech Denoising and Attention-Based Sequence-to-Sequence DNN Speech Recognition in 16nm FinFET
EdgeBERT GitHub
MICRO 2021 Paper: EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference
DNN Accelerator Code Files
2021 Symposium on VLSI Circuits Paper: CHIMERA: A 0.92 TOPS, 2.2 TOPS/W Edge AI Accelerator with 2 MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference
Webinar: Stanford University: Edge ML Accelerator SoC Design Using Catapult HLS