1. Paper Title
MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects
2. Abstract
Deep neural networks (DNNs) have been widely recognized as a highly promising solution for computer vision and speech recognition, and are becoming the computational foundation of many other AI application domains. However, the computational complexity of these algorithms and the demand for high energy efficiency have led to a surge in research on dedicated hardware accelerators. To reduce the latency and power cost of accessing DRAM, most DNN accelerators are spatial in nature, essentially trading area for time: they scale out to hundreds of processing elements (PEs) that operate in parallel and communicate with one another directly.
DNNs are evolving rapidly, and the most recent topologies commonly combine convolutional, recurrent, pooling, and fully-connected layers with varying input and filter sizes. They may be dense or sparse. They can also be partitioned in myriad ways (within and across layers) to exploit data reuse (of weights and intermediate outputs). All of these computational characteristics can lead to different dataflow patterns within the accelerator.
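To make the notion of a "dataflow" concrete, here is a minimal, illustrative sketch (ours, not from the paper): a convolution written as an explicit loop nest, with a hypothetical tiling parameter over the output rows. Reordering or re-tiling these loops changes which operands (weights, inputs, partial sums) are reused and where, which is precisely what distinguishes one accelerator dataflow from another.

```python
# Illustrative sketch only: a direct 2-D convolution as a loop nest.
# The loop order and tile size chosen here are one arbitrary "dataflow";
# a different ordering/tiling yields a different reuse pattern.
import numpy as np

def conv2d_tiled(inp, weights, tile=2):
    """inp: (H, W, C), weights: (R, S, C, K) -> out: (H-R+1, W-S+1, K)."""
    H, W, C = inp.shape
    R, S, _, K = weights.shape
    out = np.zeros((H - R + 1, W - S + 1, K))
    # Tile the output rows: each tile's partial sums stay "stationary"
    # while weights and inputs stream past (an output-stationary flavor).
    for h0 in range(0, H - R + 1, tile):
        for h in range(h0, min(h0 + tile, H - R + 1)):
            for w in range(W - S + 1):
                for k in range(K):
                    acc = 0.0
                    for r in range(R):
                        for s in range(S):
                            for c in range(C):
                                acc += inp[h + r, w + s, c] * weights[r, s, c, k]
                    out[h, w, k] = acc
    return out
```

With a 5x5x3 input and a 3x3x3 kernel (the shapes in the mapping-example figure below), each output element is a reduction over a 3x3x3 patch; how that reduction is split across PEs is exactly what a flexible fabric must be able to reconfigure.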
Unfortunately, most DNN accelerators support only fixed dataflow patterns, because they carefully co-design the PEs and the network-on-chip (NoC) in pursuit of the best performance per watt. In fact, most of them are optimized only for the traffic within convolutional layers. This makes it extremely challenging to map arbitrary dataflows onto the fabric efficiently, and can leave the available compute resources severely underutilized.
DNN accelerators need to be programmable to enable mass deployment. For them to be programmable, they must be internally reconfigurable to support the various dataflow patterns that could be mapped onto them. To address this need, we present MAERI, a DNN accelerator built from a set of modular and configurable building blocks that can easily support myriad DNN partitions and mappings by appropriately configuring tiny switches in the interconnect. MAERI provides 8-459% better compute utilization across multiple dataflow mappings on benchmarks, compared to baselines with rigid NoC fabrics.
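As a back-of-the-envelope illustration of the utilization gap (the model and numbers below are ours, not MAERI's reported results or microarchitecture): a rigid R×C PE array rounds every layer up to whole array-sized tiles, so a layer whose dimensions do not match the array leaves PEs idle, whereas a fabric that can regroup PEs into arbitrarily sized clusters avoids that rounding loss.

```python
# Hypothetical utilization model for a rigid PE array (not MAERI's).
import math

def rigid_array_util(M, N, rows=16, cols=16):
    """Utilization of a rigid rows x cols PE array computing an M x N
    output: edge tiles smaller than the array leave PEs idle."""
    tiles = math.ceil(M / rows) * math.ceil(N / cols)
    return (M * N) / (tiles * rows * cols)

# A layer with only 10 output channels maps poorly onto a 16-wide array:
print(rigid_array_util(10, 64))  # 0.625 -> only 62.5% of PEs do useful work
```

A fabric that can instead regroup its PEs into 10-wide clusters keeps nearly all of them busy on the same layer, which is the kind of flexibility MAERI's configurable interconnect is built to provide.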
(Figure: MAERI's different functional building blocks)
(Figure: area and power comparison across different architectures)
(Figure: example of mapping a 3x3x3 kernel onto a 5x5x3 input)