A Convolution Accelerator Using a Skip Algorithm for Deep Neural Networks
- Keywords: Convolution Neural Network, Accelerator, Skip Algorithm
- Institution: Ajou University
- Advisor: Myung Hoon Sunwoo
- Publication year: 2018
- Degree conferred: Feb. 2018
- Degree: Master's
- Department and major: Graduate School, Department of Electronic Engineering
- URI: http://www.dcollection.net/handler/ajou/000000026938
- Language of text: English
- Copyright: Ajou University theses are protected by copyright.
Abstract
Convolutional neural networks (CNNs) are a well-known neural network architecture, widely used in computer vision, especially for image classification and object recognition. CNNs have achieved strong performance in this field alongside the development of graphics processing units (GPUs). However, GPUs suffer from high power consumption and poor energy efficiency, which makes CNNs difficult to apply to real-time image processing and mobile applications. In particular, CNNs spend most of their power and execution time on convolution operations, so the computational complexity of convolution makes CNNs hard to deploy where resources are limited. In this thesis, we propose an accelerator that efficiently performs convolution operations in the CNN inference phase. A convolution is a series of multiply-and-accumulate (MAC) operations between weights and feature-map data. Neurons whose values are zero have little effect on network performance; therefore, by detecting zero-valued neurons and skipping the corresponding convolution operations, the accelerator achieves excellent inference time and energy efficiency. Synthesized in 65 nm technology at a clock frequency of 400 MHz, the proposed accelerator reaches over 207 giga operations per second (GOp/s) with an efficiency of 144%, achieving a power efficiency of over 473 GOp/s/W in a core area of 1.2 mega gate equivalents (MGE).
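The zero-skipping idea described above can be illustrated in software. The sketch below is a minimal, hypothetical model of the technique, not the thesis's hardware design: it performs a 2-D valid convolution as a sum of MAC operations and skips any MAC whose input activation is zero, which leaves the result unchanged while reducing the operation count. All names and shapes here are illustrative assumptions.

```python
def conv2d_zero_skip(feature_map, kernel):
    """2-D valid convolution that skips MACs on zero-valued activations.

    Returns the output map and the number of MACs actually performed.
    Illustrative model only; the thesis implements this in hardware.
    """
    fh, fw = len(feature_map), len(feature_map[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = fh - kh + 1, fw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    macs = 0
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for ki in range(kh):
                for kj in range(kw):
                    a = feature_map[i + ki][j + kj]
                    if a == 0:
                        continue  # zero activation: skip this MAC entirely
                    acc += a * kernel[ki][kj]
                    macs += 1
            out[i][j] = acc
    return out, macs
```

In CNNs, activations passed through ReLU are frequently zero, so in a hardware accelerator each skipped MAC translates directly into saved cycles and energy, which is the source of the efficiency gains reported in the abstract.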
Table of Contents
I. Introduction
II. Convolution Neural Network
III. Proposed Architecture
IV. Implementation and Results
V. Conclusion
Bibliography