2019 AMC

1. Methodology

1.1. The Truncated Normal Distribution

1.1.1. https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
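A minimal sketch (not from the paper's code) of sampling from a truncated normal using SciPy's `truncnorm`, assuming it is used to keep noisy continuous actions such as per-layer sparsity ratios inside a valid range like [0, 1]:

```python
# Minimal sketch: sampling from a truncated normal with scipy.stats.truncnorm.
# Assumption: actions (e.g. per-layer sparsity ratios) must stay within [lo, hi].
from scipy.stats import truncnorm

def sample_truncated_normal(mu, sigma, lo=0.0, hi=1.0, size=1):
    # truncnorm takes the bounds in standard-deviation units relative to the mean
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=size)

# Example: noisy exploration around a proposed sparsity ratio of 0.5
samples = sample_truncated_normal(mu=0.5, sigma=0.2, size=5)
print(samples)  # every sample falls inside [0, 1]
```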

1.2. Deep Deterministic Policy Gradient (DDPG)
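As an illustrative sketch (not the paper's exact agent), a DDPG update pairs a deterministic actor with a Q-function critic, trained off-policy with target networks. Here the action is assumed to be a single continuous value in (0, 1), e.g. a per-layer compression ratio; all network sizes and hyperparameters are placeholders:

```python
# Minimal DDPG update sketch in PyTorch (illustrative assumptions, not AMC's implementation).
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Sigmoid(),  # keeps action in (0, 1)
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.01):
    s, a, r, s_next, done = batch  # tensors sampled from a replay buffer

    # Critic: regress Q(s, a) onto the bootstrapped target
    with torch.no_grad():
        q_next = target_critic(s_next, target_actor(s_next))
        target = r + gamma * (1 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient, maximize Q(s, actor(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update the target networks toward the online networks
    for tgt, src in ((target_actor, actor), (target_critic, critic)):
        for tp, p in zip(tgt.parameters(), src.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```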

2. Abstract

2.1. Model compression involves a trade-off among model size, speed, and accuracy. This paper leverages reinforcement learning to provide the model compression policy. The learning-based compression policy outperforms conventional rule-based policies by achieving a higher compression ratio, better preserving accuracy, and freeing human labor.

3. Introduction

3.1. Core of Model compression

3.1.1. determine the compression policy for each layer (see the sketch below)
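As a hypothetical illustration of what a per-layer policy amounts to, the sketch below applies a layer-specific sparsity ratio by zeroing the output channels with the smallest L1 norm; the `policy` dictionary and layer names are assumptions, not the paper's code:

```python
# Hypothetical sketch: apply a per-layer sparsity ratio via magnitude-based channel pruning.
import torch

def prune_layer(weight, ratio):
    """Zero out the fraction `ratio` of output channels with the smallest L1 norm."""
    n_out = weight.shape[0]
    n_prune = int(n_out * ratio)
    if n_prune == 0:
        return weight
    norms = weight.abs().reshape(n_out, -1).sum(dim=1)
    idx = torch.argsort(norms)[:n_prune]  # channels to remove
    pruned = weight.clone()
    pruned[idx] = 0.0
    return pruned

# Example: a compression policy prescribing a different ratio for each layer
policy = {"conv1": 0.2, "conv2": 0.5}  # placeholder layer names and ratios
```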

3.2. rule-based

3.2.1. disadvantage

3.2.1.1. non-optimal

3.2.1.2. doesn't transfer from one model to another

3.3. motivation

3.3.1. NNs are evolving fast; we need an automated way to compress them to improve engineering efficiency.

3.3.2. As NNs become deeper, the design space grows exponentially in complexity.