Arcana Regularizations
Submodules
arcana.regularizations.optimizer_scheduler module
Optimizer and Scheduler Factory
- class arcana.regularizations.optimizer_scheduler.SchedulerFactory(optimizer, model_config, len_train_loader=None)
Bases:
object
Factory class for creating learning rate schedulers
- get_scheduler(learning_rate_type)
Get a scheduler of the given learning rate type (a usage sketch follows this entry)
- Parameters:
learning_rate_type (str) – name of the scheduler type to create
- Returns:
torch.optim.lr_scheduler – scheduler of the given type
- Raises:
ValueError – if the learning rate type is unknown
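The accepted learning_rate_type strings are not listed in this reference, so the following is only a minimal sketch of how such a factory might dispatch on the type string and raise ValueError for unknown types; the type names, scheduler classes, and hyperparameters below are illustrative assumptions, not Arcana's actual configuration.

```python
import torch

# Minimal sketch of a scheduler factory; the supported type strings and
# hyperparameters are illustrative assumptions, not Arcana's actual ones.
class SchedulerFactorySketch:
    def __init__(self, optimizer, model_config=None, len_train_loader=None):
        self.optimizer = optimizer
        self.model_config = model_config
        self.len_train_loader = len_train_loader

    def get_scheduler(self, learning_rate_type):
        if learning_rate_type == "exponential":
            return torch.optim.lr_scheduler.ExponentialLR(self.optimizer, gamma=0.95)
        if learning_rate_type == "reduce_on_plateau":
            return torch.optim.lr_scheduler.ReduceLROnPlateau(self.optimizer, patience=5)
        raise ValueError(f"Unknown learning rate type: {learning_rate_type}")

model = torch.nn.Linear(8, 1)  # stand-in model
factory = SchedulerFactorySketch(torch.optim.Adam(model.parameters(), lr=1e-3))
scheduler = factory.get_scheduler("exponential")
```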
arcana.regularizations.stopping_criteria module
Early stopping class
- class arcana.regularizations.stopping_criteria.EarlyStopping(criterion_rule='PQ', training_strip=10, alpha=0.1, patience=4)
Bases:
object
Early stopping class
- step(train_loss, val_loss, epoch)
Step the early stopping check using one of four criteria (a sketch of the PQ rule follows this entry):
- GL: generalization loss
- UP: validation loss is increasing
- PQ: progress quotient
- PQUP: combination of PQ and UP
- Parameters:
train_loss (float) – training loss
val_loss (float) – validation loss
epoch (int) – epoch number
- Returns:
bool – True if the early stopping criterion is met
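The criteria names above follow Prechelt-style early stopping. As a concrete illustration, here is a minimal sketch of the PQ (progress quotient) rule: GL measures how far the current validation loss has risen above the best one seen, P measures how much the training loss is still improving over the last training_strip epochs, and training stops once GL/P exceeds alpha for patience consecutive epochs. The bookkeeping, thresholds, and toy losses below are assumptions for illustration, not the package's exact implementation.

```python
# Sketch of the PQ ("progress quotient") criterion; details are assumptions.
class EarlyStoppingSketch:
    def __init__(self, training_strip=10, alpha=0.1, patience=4):
        self.training_strip = training_strip   # strip length k
        self.alpha = alpha                     # PQ stopping threshold
        self.patience = patience               # consecutive strikes before stopping
        self.best_val = float("inf")
        self.train_window = []                 # last k training losses
        self.strikes = 0

    def step(self, train_loss, val_loss, epoch):
        self.best_val = min(self.best_val, val_loss)
        self.train_window.append(train_loss)
        if len(self.train_window) > self.training_strip:
            self.train_window.pop(0)
        if epoch < self.training_strip:
            return False                       # wait for a full strip
        # GL: how far validation loss has risen above the best seen so far
        gl = 100.0 * (val_loss / self.best_val - 1.0)
        # P: how much training loss is still improving over the last strip
        window_min = min(self.train_window)
        progress = 1000.0 * (sum(self.train_window) / (self.training_strip * window_min) - 1.0)
        # PQ: generalization loss per unit of remaining training progress
        pq = gl / progress if progress > 0 else float("inf")
        self.strikes = self.strikes + 1 if pq > self.alpha else 0
        return self.strikes >= self.patience   # True -> stop training

# Toy losses: training keeps improving while validation starts overfitting.
train_losses = [1.0 / (e + 1) for e in range(50)]
val_losses = [0.5 + 0.01 * max(0, e - 20) for e in range(50)]
stopper = EarlyStoppingSketch()
for epoch, (tr, va) in enumerate(zip(train_losses, val_losses)):
    if stopper.step(tr, va, epoch):
        print(f"stopping at epoch {epoch}")
        break
```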
arcana.regularizations.teacher_forcing module
Teacher forcing scheduler class
- class arcana.regularizations.teacher_forcing.TeacherForcingScheduler(num_epochs, epoch_division, seq_start, seq_end, start_learning_ratio=0.5, end_learning_ratio=0.005, decay_stride=0.3)
Bases:
object
Teacher forcing scheduler class
- get_ratio()
Get the teacher forcing ratio. It is implemented as an exponential decay function, defined as teacher_forcing_ratio = initial_teacher_forcing_ratio * exp(-decay_parameter * epoch), and is bounded between 0 and 1. A higher teacher forcing ratio is used at the beginning of training and is gradually decreased toward 0 (see the sketch after this entry).
- Returns:
list – list of teacher forcing ratios
available_seq (list) – list of available sequences
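As a concrete illustration of the decay above, the sketch below applies the stated formula, assuming start_learning_ratio maps to the initial ratio, decay_stride to the decay parameter, and end_learning_ratio acts as a lower bound; these mappings are assumptions for illustration, not the class's exact internals.

```python
import math

# Sketch of the stated decay: ratio = start_ratio * exp(-decay_stride * epoch),
# clipped to [0, 1] with end_learning_ratio assumed to act as a floor.
def teacher_forcing_ratios(num_epochs, start_learning_ratio=0.5,
                           end_learning_ratio=0.005, decay_stride=0.3):
    ratios = []
    for epoch in range(num_epochs):
        ratio = start_learning_ratio * math.exp(-decay_stride * epoch)
        ratios.append(max(end_learning_ratio, min(1.0, ratio)))
    return ratios

print(teacher_forcing_ratios(5))  # ratios shrink toward the floor value
```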