Mind Map: Deep Learning

1. CNN

1.1. Convolution layer

1.1.1. Kernel: 2x2 or 3x3

1.2. Pooling layer

1.2.1. Max Pooling

1.2.2. Average Pooling

1.3. Fully Connected layer
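The three CNN building blocks above can be sketched in plain NumPy. This is a toy illustration, not a real framework implementation: `conv2d` and `max_pool` are hypothetical helper names, and the convolution is the cross-correlation variant used by most deep learning libraries.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the input and sum the elementwise products.
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2, stride=2):
    """Max pooling: keep the largest value in each window."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
k = np.ones((3, 3)) / 9.0                      # 3x3 averaging kernel
feat = conv2d(x, k)        # 6x6 -> 4x4 feature map
pooled = max_pool(feat)    # 4x4 -> 2x2 after 2x2 max pooling
```

A fully connected layer would then flatten `pooled` and apply a weight matrix (`W @ pooled.ravel() + b`); average pooling differs from `max_pool` only in replacing `.max()` with `.mean()`.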

2. AlexNet

2.1. Winner of ImageNet ILSVRC-2012

2.2. Architecture

2.2.1. 5 convolutional layers

2.2.2. 3 fully-connected layers

2.2.3. Softmax

2.3. Pooling

2.3.1. Overlapping pooling

2.3.2. Window: 3x3, Stride: 2
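Overlapping pooling means the window (3x3) is larger than the stride (2), so adjacent windows share rows and columns. A minimal NumPy sketch with AlexNet's settings (`max_pool` is an illustrative name, not a library function):

```python
import numpy as np

def max_pool(x, size=3, stride=2):
    """Overlapping max pooling as in AlexNet: 3x3 window, stride 2."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Neighboring windows overlap by one row/column because stride < size.
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
y = max_pool(x)    # 5x5 -> 2x2; each window shares a row/column with its neighbor
```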

2.4. Activation function

2.4.1. ReLU

2.5. Data augmentation

2.5.1. Image translation & horizontal reflection

2.5.2. Altering intensities of RGB channels
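The two augmentations above can be sketched with NumPy array operations. This toy version shows horizontal reflection and translation via random cropping (AlexNet crops 224x224 patches from 256x256 images); the PCA-based RGB intensity alteration is omitted here. Variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 grayscale "image"

# Horizontal reflection: flip the image left-to-right.
flipped = img[:, ::-1]

# Translation: extract a random crop smaller than the image,
# so the network sees the content at shifted positions.
crop = 6
top = rng.integers(0, img.shape[0] - crop + 1)
left = rng.integers(0, img.shape[1] - crop + 1)
patch = img[top:top+crop, left:left+crop]
```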

2.6. Dropout

2.6.1. p = 0.5

2.6.2. Used in the first two fully-connected layers
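A minimal sketch of dropout with p = 0.5, assuming the "inverted dropout" formulation (survivors are scaled by 1/(1-p) during training, so nothing changes at inference; the original AlexNet paper instead scaled activations at test time). The `dropout` function is an illustrative helper, not a library API.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p,
    scale the survivors by 1/(1-p) so expected activation is unchanged."""
    if not training:
        return x          # no-op at inference time
    if rng is None:
        rng = np.random.default_rng(0)   # fixed seed just for this sketch
    mask = rng.random(x.shape) >= p      # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

h = np.ones(10)
out = dropout(h, p=0.5)   # kept units become 2.0, dropped units become 0.0
```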

2.7. Training

2.7.1. Mini-batch gradient descent

2.7.2. Batch size: 128

2.7.3. Weight decay: 0.0005

2.7.4. Momentum: 0.9
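The training hyperparameters above combine into a single update rule. The AlexNet paper writes it as v <- 0.9*v - 0.0005*lr*w - lr*grad, then w <- w + v. A small NumPy sketch (the function name and the learning rate 0.01 are illustrative assumptions):

```python
import numpy as np

def sgd_step(w, grad, v, lr=0.01, momentum=0.9, weight_decay=0.0005):
    """One mini-batch SGD update with momentum 0.9 and weight decay 0.0005:
    v <- momentum*v - weight_decay*lr*w - lr*grad ;  w <- w + v."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v

w = np.array([1.0, -2.0])       # toy weights
v = np.zeros_like(w)            # velocity starts at zero
grad = np.array([0.5, 0.5])     # toy gradient from one mini-batch
w, v = sgd_step(w, grad, v)
```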

3. LSTM

3.1. Improved variant of the RNN that mitigates the vanishing-gradient problem

3.2. Cell state

3.3. Gate

3.4. Forget gate

3.5. Input gate

3.6. Cell state update
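One LSTM step ties the items above together: the forget and input gates decide what to erase from and admit into the cell state, and an output gate filters the new hidden state. A self-contained NumPy sketch, assuming a single weight matrix over the concatenated [h, x] (names and layout are illustrative, not from any particular library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [h, x] to the four stacked gate pre-activations."""
    z = W @ np.concatenate([h, x]) + b
    H = h.shape[0]
    f = sigmoid(z[0:H])        # forget gate: what to erase from the cell state
    i = sigmoid(z[H:2*H])      # input gate: what new information to admit
    g = np.tanh(z[2*H:3*H])    # candidate values for the cell state
    o = sigmoid(z[3*H:4*H])    # output gate: what part of the cell to expose
    c = f * c + i * g          # cell state update
    h = o * np.tanh(c)         # new hidden state
    return h, c

H, X = 3, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * H, H + X)) * 0.1
b = np.zeros(4 * H)
h, c = lstm_step(rng.standard_normal(X), np.zeros(H), np.zeros(H), W, b)
```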

4. RNN

4.1. Recurrent neural network

4.1.1. Information cycles through a loop

4.1.2. Considers current input and what has been learned

4.1.3. Has a short-term memory

4.1.4. Sequence of data contains crucial information about what is coming next

4.2. When to use an RNN?

4.2.1. Sequence of data

4.2.2. The temporal dynamics connecting the data matter more than the spatial content of each individual frame

4.3. Type of model

4.3.1. One to one

4.3.2. One to many

4.3.3. Many to one

4.3.4. Many to many

4.4. New state

4.4.1. ht = fW(h(t-1), xt)

4.5. Output

4.5.1. yt = Softmax(Why * ht + by)
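The two formulas above can be sketched as one recurrence step, taking fW to be the usual tanh of a linear map of the previous state and the current input (matrix names follow the map's Why/by notation; the rest are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def rnn_step(x, h_prev, Wxh, Whh, Why, bh, by):
    """One vanilla RNN step: ht = fW(h(t-1), xt) with fW = tanh,
    then yt = Softmax(Why * ht + by)."""
    h = np.tanh(Whh @ h_prev + Wxh @ x + bh)   # new state from old state + input
    y = softmax(Why @ h + by)                  # output distribution over classes
    return h, y

H, X, Y = 4, 3, 2
rng = np.random.default_rng(0)
Wxh = rng.standard_normal((H, X)) * 0.1
Whh = rng.standard_normal((H, H)) * 0.1
Why = rng.standard_normal((Y, H)) * 0.1
h, y = rnn_step(rng.standard_normal(X), np.zeros(H), Wxh, Whh, Why,
                np.zeros(H), np.zeros(Y))
```

Running the step over a sequence simply feeds each returned `h` back in as `h_prev`, which is the loop that gives the RNN its short-term memory.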