Textual Entailment
by Venkat Kasi

1. Stopwords Retained
1.1. LSTM
1.2. GRU
2. LSTM
2.1. Class Imbalance
2.1.1. SMOTE
2.1.2. Focal Loss
2.1.3. Class Weights
2.1.4. ADASYN
2.2. Embeddings
2.2.1. GloVe 6B 300d
2.2.2. GloVe 840B 300d
2.3. Hyperparameter Tuning
2.3.1. Learning Rate
2.3.1.1. ReduceLROnPlateau
2.3.1.2. Adaptive Optimizers (Adam, RMSprop)
2.3.2. Dense Layer
2.3.3. LSTM Units
2.3.4. Batch Normalization
2.3.5. Dropout
2.4. Visualization
2.4.1. wandb
2.4.2. TensorBoard
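As a concrete illustration of the focal-loss option for class imbalance (2.1.2), here is a minimal NumPy sketch. The function name and the default gamma/alpha values are illustrative choices following the standard focal-loss formulation, not code from this project:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Focal loss: -alpha * (1 - p_t)**gamma * log(p_t).

    probs:   (n, num_classes) predicted class probabilities
    targets: (n,) integer class labels

    The (1 - p_t)**gamma factor down-weights easy, well-classified
    examples so training focuses on the hard minority-class cases.
    """
    p_t = probs[np.arange(len(targets)), targets]  # prob of the true class
    p_t = np.clip(p_t, 1e-7, 1.0)                  # avoid log(0)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))
```

A confidently correct prediction (p_t near 1) contributes almost nothing, while a misclassified minority example (p_t near 0) dominates the average, which is the intended contrast with plain cross-entropy.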
3. GRU
3.1. Class Imbalance
3.1.1. SMOTE
3.1.2. Focal Loss
3.1.3. Class Weights
3.1.4. ADASYN
3.2. Embeddings
3.2.1. GloVe 6B 300d
3.2.2. GloVe 840B 300d
3.3. Hyperparameter Tuning
3.3.1. Learning Rate
3.3.1.1. ReduceLROnPlateau
3.3.1.2. Adaptive Optimizers (Adam, RMSprop)
3.3.2. Dense Layers
3.3.3. Batch Normalization
3.3.4. Weight Decay
3.3.5. GRU Units
3.3.6. Dropout
3.4. Visualization
3.4.1. wandb
3.4.2. TensorBoard
3.5. Attention
3.5.1. Self Attention
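A minimal NumPy sketch of the self-attention idea in 3.5.1, assuming scaled dot-product attention over the GRU's output states. Using Q = K = V = x directly (no learned projection matrices) is a simplification for illustration only:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence.

    x: (seq_len, d) hidden states, e.g. GRU outputs.
    Returns (context, weights): context vectors of shape (seq_len, d)
    and the (seq_len, seq_len) attention weight matrix.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over each row
    return weights @ x, weights                    # weighted sum of states
```

Each output position is a weighted mix of every hidden state, letting the classifier attend to the premise/hypothesis tokens most relevant for the entailment decision rather than relying on the final GRU state alone.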