pyMSDtorch documentation¶
Welcome to the pyMSDtorch documentation, a collection of routines for machine learning in scientific imaging. PyMSDtorch offers a friendly API for the easy and flexible deployment of custom deep neural networks. At its core, pyMSDtorch leverages PyTorch, an open source machine learning framework maintained by the Linux Foundation.
The pyMSDtorch libraries were originally intended to provide a platform to test the Mixed-Scale Dense Convolutional Neural Network architecture (MSDNet, with an accompanying PyCUDA implementation) and extend its usage to bioimaging and x-ray scattering applications. MSDNets are a parameter-lean alternative to U-Nets that remain robust to overfitting and have been shown to provide competitive performance relative to U-Nets, particularly when training data is limited.
Over the past years, these libraries have grown into a collection of modules for different, interchangeable approaches. Apart from MSDNets, the pyMSDtorch libraries now contain Tuneable U-Nets, Tuneable U-Net3+, and randomized Sparse Mixed-Scale networks (SMSNets). A uniform interface and a variety of training utility scripts make it convenient to swap networks and architectures at will and find the one most suitable for your application. In addition to routines for segmentation and denoising, our SMSNets can be used in autoencoders, image classification, and a variety of ensemble methods. We provide tutorial notebooks for most of these use cases. The development of pyMSDtorch is funded by a Laboratory Directed Research and Development grant and by DOE-ASCR/BES funds.
Welcome to pyMSDtorch’s documentation!¶
pyMSDtorch provides easy access to a number of segmentation and denoising methods using convolutional neural networks. The available tools are built with microscopy and synchrotron imaging/scattering data in mind, but can be used elsewhere as well.
The easiest way to start playing with the code is to install pyMSDtorch and perform denoising/segmenting using custom neural networks in our tutorial notebooks located in the pyMSDtorch/tutorials folder, or perform multi-class segmentation in Gaussian noise on Google Colab.
Install pyMSDtorch¶
We offer several methods for installation.
pip: Python package installer¶
The latest stable release may be installed with:
$ pip install pymsdtorch
From source¶
pyMSDtorch may be downloaded and installed directly onto your machine by cloning the public repository using:
$ git clone https://bitbucket.org/berkeleylab/pymsdtorch.git
Once cloned, move to the newly minted pymsdtorch directory and install pyMSDtorch using:
$ cd pymsdtorch
$ pip install -e .
Tutorials only¶
To download only the tutorials in a new folder, use the following terminal input for a sparse git checkout:
$ mkdir pymsdtorchTutorials
$ cd pymsdtorchTutorials
$ git init
$ git config core.sparseCheckout true
$ git remote add -f origin https://bitbucket.org/berkeleylab/pymsdtorch.git
$ echo "pyMSDtorch/tutorials/*" > .git/info/sparse-checkout
$ git checkout main
Network Initialization¶
We start with some basic imports - we import a network and some training scripts:
from pymsdtorch.core.networks import MSDNet
from pymsdtorch.core import train_scripts
Mixed-Scale dense networks (MSDNet)¶
A plain 2d mixed-scale dense network is constructed as follows:
from torch import nn
netMSD2D = MSDNet.MixedScaleDenseNetwork(in_channels=1,
out_channels=1,
num_layers=20,
max_dilation=10,
activation=nn.ReLU(),
normalization=nn.BatchNorm2d,
convolution=nn.Conv2d)
while 3D networks for volumetric images can be built by passing in the equivalent 3D modules:
from torch import nn
netMSD3D = MSDNet.MixedScaleDenseNetwork(in_channels=1,
out_channels=1,
num_layers=20,
max_dilation=10,
activation=nn.ReLU(),
normalization=nn.BatchNorm3d,
convolution=nn.Conv3d)
Sparse mixed-scale dense network (SMSNet)¶
The pyMSDtorch suite also provides the means to build random, sparse mixed-scale networks. SMSNets contain more sparsely connected nodes than a standard MSDNet and are useful for alleviating overfitting and for multi-network aggregation. Controlling sparsity is possible; see the full documentation for more details.
from pymsdtorch.core.networks import SMSNet
netSMS = SMSNet.random_SMS_network(in_channels=1,
out_channels=1,
layers=20,
dilation_choices=[1,2,4,8],
hidden_out_channels=[1,2,3])
Tunable U-Nets¶
An alternative network choice is to construct a U-Net. Classic U-Nets can easily explode in the number of parameters they require; here we make it a bit easier to tune the desired architecture-governing parameters:
from pymsdtorch.core.networks import TUNet
netTUNet = TUNet.TUNet(image_shape=(121,189),
in_channels=1,
out_channels=4,
base_channels=4,
depth=3,
growth_rate=1.5)
Training¶
Once your data loaders are constructed, training these networks is as simple as defining a torch.optim optimizer and calling the training script:
from torch import optim, nn
from pyMSDtorch.core import helpers
criterion = nn.CrossEntropyLoss() # For segmenting
optimizer = optim.Adam(netTUNet.parameters(), lr=1e-2)
device = helpers.get_device()
netTUNet = netTUNet.to(device)
netTUNet, results = train_scripts.train_segmentation(net=netTUNet,
trainloader=train_loader,
validationloader=test_loader,
NUM_EPOCHS=epochs,
criterion=criterion,
optimizer=optimizer,
device=device,
show=1)
The output of the training scripts is the trained network and a dictionary with training losses and evaluation metrics. You can view them as follows:
from pyMSDtorch.viz_tools import plots
fig = plots.plot_training_results_segmentation(results)
fig.show()
Saving and loading models¶
Once a model is trained, PyTorch offers two methods for saving and loading models for inference. We walk through these options using the TUNet class above.
Saving model weights (recommended)¶
For the most flexibility in restoring models for later use, we save the model’s learned weights and biases to a specific path with:
torch.save(modelTUNet.state_dict(), PATH)
A new TUNet model is then instantiated with the same architecture-governing parameters (image_shape, in_channels, etc.), and the learned weights are mapped back onto the freshly created model with:
newTUNet = TUNet.TUNet(*args)
newTUNet.load_state_dict(torch.load(PATH))
Saving the entire model¶
Alternatively, the entire model may be saved (pickled) using
torch.save(modelTUNet, PATH)
and loaded with
newTUNet = torch.load(PATH)
Though more intuitive, this method is more prone to breaking, especially when modifying or truncating layers.
Tutorials¶
Segmentation demo in 2D using Mixed-scale Dense Networks and Tunable U-Nets¶
Authors: Eric Roberts and Petrus Zwart
E-mail: PHZwart@lbl.gov, EJRoberts@lbl.gov
This notebook highlights some basic functionality with the pyMSDtorch package.
Using the pyMSDtorch framework, we initialize two convolutional neural networks, a mixed-scale dense network (MSDNet) and a tunable U-Net (TUNet), and train both networks to perform multi-class segmentation on noisy data.
Installation and imports¶
Install pyMSDtorch¶
To install pyMSDtorch, clone the public repository into an empty directory using:
$ git clone https://bitbucket.org/berkeleylab/pymsdtorch.git
Once cloned, move to the newly minted pymsdtorch directory and install using:
$ cd pymsdtorch
$ pip install -e .
Imports¶
[1]:
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import TensorDataset, DataLoader
from pyMSDtorch.core import helpers, train_scripts
from pyMSDtorch.core.networks import MSDNet, TUNet, TUNet3Plus
from pyMSDtorch.test_data import twoD
from pyMSDtorch.test_data.twoD import random_shapes
from pyMSDtorch.viz_tools import plots
from torchsummary import summary
import matplotlib.pyplot as plt
from torchmetrics import F1Score
Create Data¶
Using our pyMSDtorch in-house data generator, we produce a number of noisy “shapes” images consisting of single triangles, rectangles, circles, and donuts/annuli, each assigned a different class. Each raw, ground-truth image is augmented with a random orientation and size, and is bundled with its corresponding noisy image and class mask.
n_imgs – number of ground truth/noisy/label image bundles to generate
noise_level – per-pixel noise drawn from a continuous uniform distribution (cut-off above at 1)
n_xy – size of individual images
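As a rough illustration of the noise model described above (this is a hand-rolled approximation, not the generator’s actual implementation; build_random_shape_set_numpy handles all of this internally), per-pixel uniform noise can be added to a ground-truth image and cut off above at 1 like so:

```python
import numpy as np

rng = np.random.default_rng(0)

n_xy = 32
ground_truth = rng.random((n_xy, n_xy))   # stand-in for one shapes image
noise_level = 0.75

# Add per-pixel noise drawn from a continuous uniform distribution,
# then clip the result from above at 1.
noisy = np.clip(ground_truth + noise_level * rng.random((n_xy, n_xy)), None, 1.0)
```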
[2]:
n_imgs = 300
noise_level = .75
n_xy = 32
img_dict = random_shapes.build_random_shape_set_numpy(n_imgs=n_imgs,
noise_level=noise_level,
n_xy=n_xy)
[3]:
ground_truth = img_dict['GroundTruth']
noisy = img_dict['Noisy']
mask = img_dict['ClassImage']
shape_id = img_dict['Label']
ground_truth = np.expand_dims(ground_truth, axis=1)
noisy = np.expand_dims(noisy, axis=1)
mask = np.expand_dims(mask, axis=1)
shape_id = np.expand_dims(shape_id, axis=1)
print('Verify data type and dimensionality: ', type(ground_truth), ground_truth.shape)
Verify data type and dimensionality:  <class 'numpy.ndarray'> (300, 1, 32, 32)
View data¶
[4]:
plots.plot_shapes_data_numpy(img_dict)
Training/Validation/Testing Splits¶
We partition the data generated above into non-overlapping subsets to be used for training, validation, and testing. (We somewhat arbitrarily choose an 80-10-10 percentage split.)
As a refresher, the three subsets of data are used as follows:
training set – this data is used to fit the model,
validation set – passed through the network to give an unbiased evaluation during training (model does not learn from this data),
testing set – gives an unbiased evaluation of the final model once training is complete.
[5]:
# Split training set
n_train = int(0.8 * n_imgs)
training_imgs = noisy[0:n_train,...]
training_masks = mask[0:n_train,...]
# Split validation set
n_validation = int(0.1 * n_imgs)
validation_imgs = noisy[(n_train) : (n_train+n_validation),...]
validation_masks = mask[(n_train) : (n_train+n_validation),...]
# Split testing set
n_testing = int(0.1 * n_imgs)
testing_imgs = noisy[-n_testing:, ...]
testing_masks = mask[-n_testing:, ...]
# Cast data as tensors and get in PyTorch Dataset format
train_data = TensorDataset(torch.Tensor(training_imgs), torch.Tensor(training_masks))
val_data = TensorDataset(torch.Tensor(validation_imgs), torch.Tensor(validation_masks))
test_data = TensorDataset(torch.Tensor(testing_imgs), torch.Tensor(testing_masks))
Dataloader class¶
We make liberal use of the PyTorch DataLoader class for easy handling and iterative loading of data into the networks and models.
**Note:** The most important parameters to specify here are the batch sizes, as these dictate how many images are loaded and passed through the network at a time. By extension, controlling the batch size allows you to control GPU/CPU memory usage. As a rule of thumb, the bigger the batch size the better: larger batches not only speed up training, they also make certain network normalization layers (e.g. BatchNorm2d) more stable.
Dataloader Reference: https://pytorch.org/docs/stable/data.html
[6]:
# Specify batch sizes
batch_size_train = 50
batch_size_val = 50
batch_size_test = 50
# Set Dataloader parameters (Note: we randomly shuffle the training set upon each pass)
train_loader_params = {'batch_size': batch_size_train,
'shuffle': True}
val_loader_params = {'batch_size': batch_size_val,
'shuffle': False}
test_loader_params = {'batch_size': batch_size_test,
'shuffle': False}
# Build Dataloaders
train_loader = DataLoader(train_data, **train_loader_params)
val_loader = DataLoader(val_data, **val_loader_params)
test_loader = DataLoader(test_data, **test_loader_params)
Create Networks¶
Here we instantiate three different convolutional neural networks: a mixed-scale dense network (MSDNet), a tunable U-Net (TUNet), and TUNet3+, a variant that connects all length scales to all others.
Each network takes in a single grayscale channel and produces five output channels, one for each of the four shapes and one for background. Additionally, as is standard practice, each network applies a batch normalization and rectified linear unit activation (BatchNorm2d ==> ReLU) bundle after each convolution to expedite training.
**Note:** In the authors’ experience, batch normalization has stabilized training in most problems EXCEPT when the data is strongly bimodal or contains many (>90%) zeros (e.g. inpainting or masked data). This is likely due (though admittedly hand-wavy) to the mean-shifting of the data ‘over-smoothing’ and losing the contrast between the two peaks of interest.
Vanilla MSDNet¶
The first is a mixed-scale dense convolutional neural network (MSDNet) which densely connects ALL input, convolutional, and output layers together and explores different length scales using dilated convolutions.
Some parameters to toggle:
num_layers – The number of convolutional layers that are densely-connected
max_dilation – the maximum dilation to cycle through (default is 10)
For more information, see pyMSDtorch/core/networks/MSDNet.py
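The exact cycling rule is not spelled out here, but judging from the printed network in the next cell (dilations running 1 through max_dilation and then repeating), a plausible sketch of the per-layer dilation schedule is the following; this is an inference, not the library’s actual code:

```python
# Sketch of a cycling dilation schedule like the one visible in the
# printed MSDNet output: layer i appears to receive dilation
# (i % max_dilation) + 1. These values mirror the cell below.
num_layers = 20
max_dilation = 8

dilations = [(i % max_dilation) + 1 for i in range(num_layers)]
print(dilations)
# [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4]
```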
[7]:
in_channels = 1
out_channels = 5
num_layers = 20
max_dilation = 8
activation = nn.ReLU()
normalization = nn.BatchNorm2d # Change to 3d for volumetric data
[8]:
msdnet = MSDNet.MixedScaleDenseNetwork(in_channels = in_channels,
out_channels = out_channels,
num_layers=num_layers,
max_dilation = max_dilation,
activation = activation,
normalization = normalization,
convolution=nn.Conv2d # Change to 3d for volumetric data
)
print('Number of parameters: ', helpers.count_parameters(msdnet))
print(msdnet)
Number of parameters: 2480
MixedScaleDenseNetwork(
(activation): ReLU()
(layer_0): MixedScaleDenseLayer(
(conv_0): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_0): ReLU()
(normalization_0): BatchNorm2d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_1): MixedScaleDenseLayer(
(conv_0): Conv2d(2, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_1): ReLU()
(normalization_1): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_2): MixedScaleDenseLayer(
(conv_0): Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3))
)
(activation_2): ReLU()
(normalization_2): BatchNorm2d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_3): MixedScaleDenseLayer(
(conv_0): Conv2d(4, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_3): ReLU()
(normalization_3): BatchNorm2d(5, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_4): MixedScaleDenseLayer(
(conv_0): Conv2d(5, 1, kernel_size=(3, 3), stride=(1, 1), padding=(5, 5), dilation=(5, 5))
)
(activation_4): ReLU()
(normalization_4): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_5): MixedScaleDenseLayer(
(conv_0): Conv2d(6, 1, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6))
)
(activation_5): ReLU()
(normalization_5): BatchNorm2d(7, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_6): MixedScaleDenseLayer(
(conv_0): Conv2d(7, 1, kernel_size=(3, 3), stride=(1, 1), padding=(7, 7), dilation=(7, 7))
)
(activation_6): ReLU()
(normalization_6): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_7): MixedScaleDenseLayer(
(conv_0): Conv2d(8, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_7): ReLU()
(normalization_7): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_8): MixedScaleDenseLayer(
(conv_0): Conv2d(9, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_8): ReLU()
(normalization_8): BatchNorm2d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_9): MixedScaleDenseLayer(
(conv_0): Conv2d(10, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_9): ReLU()
(normalization_9): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_10): MixedScaleDenseLayer(
(conv_0): Conv2d(11, 1, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3))
)
(activation_10): ReLU()
(normalization_10): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_11): MixedScaleDenseLayer(
(conv_0): Conv2d(12, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_11): ReLU()
(normalization_11): BatchNorm2d(13, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_12): MixedScaleDenseLayer(
(conv_0): Conv2d(13, 1, kernel_size=(3, 3), stride=(1, 1), padding=(5, 5), dilation=(5, 5))
)
(activation_12): ReLU()
(normalization_12): BatchNorm2d(14, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_13): MixedScaleDenseLayer(
(conv_0): Conv2d(14, 1, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6))
)
(activation_13): ReLU()
(normalization_13): BatchNorm2d(15, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_14): MixedScaleDenseLayer(
(conv_0): Conv2d(15, 1, kernel_size=(3, 3), stride=(1, 1), padding=(7, 7), dilation=(7, 7))
)
(activation_14): ReLU()
(normalization_14): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_15): MixedScaleDenseLayer(
(conv_0): Conv2d(16, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_15): ReLU()
(normalization_15): BatchNorm2d(17, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_16): MixedScaleDenseLayer(
(conv_0): Conv2d(17, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_16): ReLU()
(normalization_16): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_17): MixedScaleDenseLayer(
(conv_0): Conv2d(18, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_17): ReLU()
(normalization_17): BatchNorm2d(19, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_18): MixedScaleDenseLayer(
(conv_0): Conv2d(19, 1, kernel_size=(3, 3), stride=(1, 1), padding=(3, 3), dilation=(3, 3))
)
(activation_18): ReLU()
(normalization_18): BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_19): MixedScaleDenseLayer(
(conv_0): Conv2d(20, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_19): ReLU()
(normalization_19): BatchNorm2d(21, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(final_convolution): Conv2d(21, 5, kernel_size=(1, 1), stride=(1, 1))
)
MSDNet with custom dilations¶
As an alternative to MSDNets with repeated and cycling dilation sizes, we allow the user to input custom dilations in the form of a 1D numpy array.
For example, create a 20-layer network that cycles through increasing powers of two as dilations by passing the parameters
num_layers = 20
custom_MSDNet = np.array([1,2,4,8])
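Judging from the printed network output below, the custom dilation array appears to repeat across layers; a hedged pure-Python sketch of that behavior (an inference from the output, not the library’s actual code) is:

```python
import numpy as np

# Layer i appears to receive custom_MSDNet[i % len(custom_MSDNet)],
# so a 4-element array cycles five times over a 20-layer network.
num_layers = 20
custom_MSDNet = np.array([1, 2, 4, 8])

per_layer = [int(custom_MSDNet[i % len(custom_MSDNet)]) for i in range(num_layers)]
print(per_layer[:8])
# [1, 2, 4, 8, 1, 2, 4, 8]
```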
[9]:
custom_MSDNet = np.array([1,2,4,8])
msdnet_custom = MSDNet.MixedScaleDenseNetwork(in_channels=in_channels,
out_channels=out_channels,
num_layers=num_layers,
custom_MSDNet=custom_MSDNet,
activation=activation,
normalization=normalization,
convolution=nn.Conv2d # Change to 3d for volumetric data
)
print('Number of parameters: ', helpers.count_parameters(msdnet_custom))
print(msdnet_custom)
#summary(net, (in_channels, N_xy, N_xy))
Number of parameters: 2480
MixedScaleDenseNetwork(
(activation): ReLU()
(layer_0): MixedScaleDenseLayer(
(conv_0): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_0): ReLU()
(normalization_0): BatchNorm2d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_1): MixedScaleDenseLayer(
(conv_0): Conv2d(2, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_1): ReLU()
(normalization_1): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_2): MixedScaleDenseLayer(
(conv_0): Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_2): ReLU()
(normalization_2): BatchNorm2d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_3): MixedScaleDenseLayer(
(conv_0): Conv2d(4, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_3): ReLU()
(normalization_3): BatchNorm2d(5, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_4): MixedScaleDenseLayer(
(conv_0): Conv2d(5, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_4): ReLU()
(normalization_4): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_5): MixedScaleDenseLayer(
(conv_0): Conv2d(6, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_5): ReLU()
(normalization_5): BatchNorm2d(7, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_6): MixedScaleDenseLayer(
(conv_0): Conv2d(7, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_6): ReLU()
(normalization_6): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_7): MixedScaleDenseLayer(
(conv_0): Conv2d(8, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_7): ReLU()
(normalization_7): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_8): MixedScaleDenseLayer(
(conv_0): Conv2d(9, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_8): ReLU()
(normalization_8): BatchNorm2d(10, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_9): MixedScaleDenseLayer(
(conv_0): Conv2d(10, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_9): ReLU()
(normalization_9): BatchNorm2d(11, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_10): MixedScaleDenseLayer(
(conv_0): Conv2d(11, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_10): ReLU()
(normalization_10): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_11): MixedScaleDenseLayer(
(conv_0): Conv2d(12, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_11): ReLU()
(normalization_11): BatchNorm2d(13, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_12): MixedScaleDenseLayer(
(conv_0): Conv2d(13, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_12): ReLU()
(normalization_12): BatchNorm2d(14, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_13): MixedScaleDenseLayer(
(conv_0): Conv2d(14, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_13): ReLU()
(normalization_13): BatchNorm2d(15, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_14): MixedScaleDenseLayer(
(conv_0): Conv2d(15, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_14): ReLU()
(normalization_14): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_15): MixedScaleDenseLayer(
(conv_0): Conv2d(16, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_15): ReLU()
(normalization_15): BatchNorm2d(17, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_16): MixedScaleDenseLayer(
(conv_0): Conv2d(17, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(activation_16): ReLU()
(normalization_16): BatchNorm2d(18, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_17): MixedScaleDenseLayer(
(conv_0): Conv2d(18, 1, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
)
(activation_17): ReLU()
(normalization_17): BatchNorm2d(19, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_18): MixedScaleDenseLayer(
(conv_0): Conv2d(19, 1, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4))
)
(activation_18): ReLU()
(normalization_18): BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer_19): MixedScaleDenseLayer(
(conv_0): Conv2d(20, 1, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8))
)
(activation_19): ReLU()
(normalization_19): BatchNorm2d(21, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(final_convolution): Conv2d(21, 5, kernel_size=(1, 1), stride=(1, 1))
)
Tunable U-Net (TUNet)¶
Next, we create a custom U-Net with the following architecture-governing parameters
depth: the number of network layers
base_channels: number of initial channels
growth_rate: multiplicative growth factor of number of channels per layer of depth
hidden_rate: multiplicative growth factor of channels within each layer
Please note that the two rate parameters may be non-integer.
As with MSDNets, the user has many more options to customize their TUNets, including the normalization and activation functions after each convolution. See pyMSDtorch/core/networks/TUNet.py for more.
Recommended parameters are depth = 4, 5, or 6; base_channels = 32 or 64; growth_rate between 1.5 and 2.5; and hidden_rate = 1.
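The way growth_rate scales the channel count with depth can be sketched as below; the exact rounding rule for non-integer rates is an assumption, but with the integer growth_rate used in the next cells the progression matches the printed TUNet (16, 32, 64, 128):

```python
# Per-depth channel counts under a multiplicative growth rate.
base_channels = 16
growth_rate = 2
depth = 4

channels = [int(base_channels * growth_rate ** d) for d in range(depth)]
print(channels)
# [16, 32, 64, 128]
```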
[10]:
image_shape = (n_xy, n_xy)
depth = 4
base_channels = 16
growth_rate = 2
hidden_rate = 1
[11]:
tunet = TUNet.TUNet(image_shape=image_shape,
in_channels=in_channels,
out_channels=out_channels,
depth=depth,
base_channels=base_channels,
growth_rate=growth_rate,
hidden_rate=hidden_rate,
activation=activation,
normalization=normalization,
)
print('Number of parameters: ', helpers.count_parameters(tunet))
print(tunet)
#summary(net, (in_channels, N_xy, N_xy))
Number of parameters: 483221
TUNet(
(activation): ReLU()
(Encode_0): Sequential(
(0): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Decode_0): Sequential(
(0): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): Conv2d(16, 5, kernel_size=(1, 1), stride=(1, 1))
)
(Step Down 0): MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=1, ceil_mode=False)
(Step Up 0): ConvTranspose2d(32, 16, kernel_size=(2, 2), stride=(2, 2))
(Encode_1): Sequential(
(0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Decode_1): Sequential(
(0): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Step Down 1): MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=1, ceil_mode=False)
(Step Up 1): ConvTranspose2d(64, 32, kernel_size=(2, 2), stride=(2, 2))
(Encode_2): Sequential(
(0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Decode_2): Sequential(
(0): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Step Down 2): MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=1, ceil_mode=False)
(Step Up 2): ConvTranspose2d(128, 64, kernel_size=(2, 2), stride=(2, 2))
(Final_layer_3): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
)
Tunable U-Net 3+ (TUNet3+)¶
pyMSDtorch allows the user to create a newer U-Net variant called UNet3+. Whereas the original U-Nets share information from encoder to decoder with a single skip connection per layer (via concatenations across each layer’s matching dimensions), the UNet3+ architecture densely connects information from all layers to all other layers with cleverly built skip connections (up-/downsampling to match spatial dimensions, convolutions to control channel growth, then concatenations).
The only additional parameter to declare:
carryover_channels – indicates the number of channels in each skip connection. Default of 0 sets this equal to base_channels
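Because every resolution level contributes one carryover_channels-wide skip connection, each decoder block’s first convolution in the printed TUNet3Plus below takes depth * carryover_channels input channels. A small sketch of that arithmetic (inferred from the printed output, not pulled from the library’s code):

```python
# Each of the `depth` levels feeds one skip connection of
# carryover_channels channels into every decoder block, so the
# concatenated decoder input is depth * carryover_channels wide.
depth = 4
carryover_channels = 16

decoder_in_channels = depth * carryover_channels
print(decoder_in_channels)
# 64
```

This matches the Conv2d(64, 16, ...) layers at the head of each Decode block in the output below.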
[12]:
carryover_channels = base_channels
[13]:
tunet3plus = TUNet3Plus.TUNet3Plus(image_shape=image_shape,
in_channels=in_channels,
out_channels=out_channels,
depth=depth,
base_channels=base_channels,
carryover_channels=carryover_channels,
growth_rate=growth_rate,
hidden_rate=hidden_rate,
activation=activation,
normalization=normalization,
)
print('Number of parameters: ', helpers.count_parameters(tunet3plus))
print(tunet3plus)
#summary(tunet3plus.cpu(), (in_channels, n_xy, n_xy))
Number of parameters: 437989
TUNet3Plus(
(activation): ReLU()
(Encode_0): Sequential(
(0): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Decode_0): Sequential(
(0): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): Conv2d(16, 5, kernel_size=(1, 1), stride=(1, 1))
)
(Step Down 0): MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=1, ceil_mode=False)
(Encode_1): Sequential(
(0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Decode_1): Sequential(
(0): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Step Down 1): MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=1, ceil_mode=False)
(Encode_2): Sequential(
(0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Decode_2): Sequential(
(0): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Step Down 2): MaxPool2d(kernel_size=[2, 2], stride=[2, 2], padding=[0, 0], dilation=1, ceil_mode=False)
(Final_layer_3): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
)
(Skip_connection_0_to_0): Sequential(
(0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(Skip_connection_0_to_1): Sequential(
(0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=[0, 0], dilation=1, ceil_mode=False)
(1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_0_to_2): Sequential(
(0): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=[0, 0], dilation=1, ceil_mode=False)
(1): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_1_to_0): Sequential(
(0): Upsample(size=[32, 32], mode=nearest)
(1): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_1_to_1): Sequential(
(0): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(Skip_connection_1_to_2): Sequential(
(0): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=[0, 0], dilation=1, ceil_mode=False)
(1): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_2_to_0): Sequential(
(0): Upsample(size=[32, 32], mode=nearest)
(1): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_2_to_1): Sequential(
(0): Upsample(size=[16, 16], mode=nearest)
(1): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_2_to_2): Sequential(
(0): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(Skip_connection_3_to_0): Sequential(
(0): Upsample(size=[32, 32], mode=nearest)
(1): Conv2d(128, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_3_to_1): Sequential(
(0): Upsample(size=[16, 16], mode=nearest)
(1): Conv2d(128, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(Skip_connection_3_to_2): Sequential(
(0): Upsample(size=[8, 8], mode=nearest)
(1): Conv2d(128, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
)
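Later in this notebook, helpers.count_parameters reports each network's size; those numbers can be sanity-checked by hand from the printout above. For a 2-D convolution the parameter count is c_in · c_out · k² plus c_out bias terms, and each BatchNorm2d layer adds one learned scale and shift per channel. A quick check in plain Python (layer shapes taken from Encode_0 above):

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Parameters of a Conv2d layer: one k x k kernel per (in, out) channel pair."""
    return c_in * c_out * k * k + (c_out if bias else 0)

def batchnorm2d_params(channels):
    """BatchNorm2d learns one scale and one shift per channel."""
    return 2 * channels

# Encode_0 above: Conv2d(1, 16, 3x3) -> BN(16) -> Conv2d(16, 16, 3x3) -> BN(16)
encode_0 = (conv2d_params(1, 16, 3) + batchnorm2d_params(16)
            + conv2d_params(16, 16, 3) + batchnorm2d_params(16))
print(encode_0)  # 160 + 32 + 2320 + 32 = 2544
```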
Training our networks¶
Below, we start using PyTorch heavily. We define the relevant training parameters and the training loop, then compare the networks defined above.
All networks in this notebook, at a batch size of 50, use between 1.8 and 2.3 GB of memory for training, easily within reach of even a moderately small GPU.
Set training parameters¶
[14]:
epochs = 50 # Set number of epochs
criterion = nn.CrossEntropyLoss() # For segmenting >2 classes
LEARNING_RATE = 5e-3
# Define optimizers, one per network
optimizer_msd = optim.Adam(msdnet.parameters(), lr=LEARNING_RATE)
optimizer_tunet = optim.Adam(tunet.parameters(), lr=LEARNING_RATE)
optimizer_tunet3plus = optim.Adam(tunet3plus.parameters(), lr=LEARNING_RATE)
device = helpers.get_device()
print('Device we will compute on: ', device) # cuda:0 for GPU. Else, CPU
Device we will compute on: cuda:0
Train MSDNet¶
[15]:
msdnet.to(device) # send network to GPU
msdnet, results = train_scripts.train_segmentation(msdnet,
train_loader,
val_loader,
epochs,
criterion,
optimizer_msd,
device,
show=10) # training happens here
fig = plots.plot_training_results_segmentation(results)
fig.show()
msdnet = msdnet.cpu()
# clear out unnecessary variables from device (GPU) memory
torch.cuda.empty_cache()
Epoch 10 of 50 | Learning rate 5.000e-03
Training Loss: 2.7181e-01 | Validation Loss: 2.6796e-01
Micro Training F1: 0.9139 | Micro Validation F1: 0.9026
Macro Training F1: 0.5394 | Macro Validation F1: 0.5100
Epoch 20 of 50 | Learning rate 5.000e-03
Training Loss: 1.5185e-01 | Validation Loss: 1.6198e-01
Micro Training F1: 0.9470 | Micro Validation F1: 0.9378
Macro Training F1: 0.7066 | Macro Validation F1: 0.6510
Epoch 30 of 50 | Learning rate 5.000e-03
Training Loss: 9.9341e-02 | Validation Loss: 1.1106e-01
Micro Training F1: 0.9679 | Micro Validation F1: 0.9627
Macro Training F1: 0.8170 | Macro Validation F1: 0.7764
Epoch 40 of 50 | Learning rate 5.000e-03
Training Loss: 8.1147e-02 | Validation Loss: 9.7059e-02
Micro Training F1: 0.9732 | Micro Validation F1: 0.9688
Macro Training F1: 0.8526 | Macro Validation F1: 0.8120
Epoch 50 of 50 | Learning rate 5.000e-03
Training Loss: 7.5889e-02 | Validation Loss: 8.5717e-02
Micro Training F1: 0.9743 | Micro Validation F1: 0.9738
Macro Training F1: 0.8606 | Macro Validation F1: 0.8582
Train TUNet¶
[16]:
tunet.to(device) # send network to GPU
tunet, results = train_scripts.train_segmentation(tunet,
train_loader,
val_loader,
epochs,
criterion,
optimizer_tunet,
device,
show=10) # training happens here
tunet = tunet.cpu()
fig = plots.plot_training_results_segmentation(results)
fig.show()
# clear out unnecessary variables from device (GPU) memory
torch.cuda.empty_cache()
Epoch 10 of 50 | Learning rate 5.000e-03
Training Loss: 2.6015e-01 | Validation Loss: 2.4457e-01
Micro Training F1: 0.9405 | Micro Validation F1: 0.9348
Macro Training F1: 0.6458 | Macro Validation F1: 0.6188
Epoch 20 of 50 | Learning rate 5.000e-03
Training Loss: 1.4697e-01 | Validation Loss: 1.1512e-01
Micro Training F1: 0.9504 | Micro Validation F1: 0.9640
Macro Training F1: 0.6980 | Macro Validation F1: 0.7603
Epoch 30 of 50 | Learning rate 5.000e-03
Training Loss: 5.5660e-02 | Validation Loss: 7.2039e-02
Micro Training F1: 0.9896 | Micro Validation F1: 0.9787
Macro Training F1: 0.9674 | Macro Validation F1: 0.8999
Epoch 40 of 50 | Learning rate 5.000e-03
Training Loss: 6.6224e-02 | Validation Loss: 7.5655e-02
Micro Training F1: 0.9809 | Micro Validation F1: 0.9712
Macro Training F1: 0.9035 | Macro Validation F1: 0.8533
Epoch 50 of 50 | Learning rate 5.000e-03
Training Loss: 3.2709e-02 | Validation Loss: 7.5054e-02
Micro Training F1: 0.9898 | Micro Validation F1: 0.9792
Macro Training F1: 0.9694 | Macro Validation F1: 0.9049
Train TUNet3+¶
[17]:
torch.cuda.empty_cache()
tunet3plus.to(device) # send network to GPU
tunet3plus, results = train_scripts.train_segmentation(tunet3plus,
train_loader,
val_loader,
epochs,
criterion,
optimizer_tunet3plus,
device,
show=10) # training happens here
tunet3plus = tunet3plus.cpu()
fig = plots.plot_training_results_segmentation(results)
fig.show()
# clear out unnecessary variables from device (GPU) memory
torch.cuda.empty_cache()
Epoch 10 of 50 | Learning rate 5.000e-03
Training Loss: 2.2402e-01 | Validation Loss: 1.9719e-01
Micro Training F1: 0.9628 | Micro Validation F1: 0.9632
Macro Training F1: 0.7347 | Macro Validation F1: 0.7355
Epoch 20 of 50 | Learning rate 5.000e-03
Training Loss: 6.6756e-02 | Validation Loss: 7.2874e-02
Micro Training F1: 0.9860 | Micro Validation F1: 0.9806
Macro Training F1: 0.9511 | Macro Validation F1: 0.9146
Epoch 30 of 50 | Learning rate 5.000e-03
Training Loss: 3.3087e-02 | Validation Loss: 6.6920e-02
Micro Training F1: 0.9912 | Micro Validation F1: 0.9839
Macro Training F1: 0.9768 | Macro Validation F1: 0.9265
Epoch 40 of 50 | Learning rate 5.000e-03
Training Loss: 2.5198e-02 | Validation Loss: 5.9633e-02
Micro Training F1: 0.9921 | Micro Validation F1: 0.9853
Macro Training F1: 0.9789 | Macro Validation F1: 0.9327
Epoch 50 of 50 | Learning rate 5.000e-03
Training Loss: 3.3653e-02 | Validation Loss: 6.0781e-02
Micro Training F1: 0.9894 | Micro Validation F1: 0.9816
Macro Training F1: 0.9682 | Macro Validation F1: 0.9139
Testing our networks¶
Now we pass our testing-set images through all the networks. We’ll print out some network predictions and report the multi-class micro and macro F1 scores, common metrics for gauging network performance.
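As a reminder of what these two averages measure: macro F1 averages the per-class F1 scores with equal weight, so rare classes count as much as common ones, while micro F1 pools all pixels and, for single-label multiclass data, reduces to plain accuracy. A small pure-Python illustration on toy labels (not part of the notebook's pipeline):

```python
def f1_per_class(preds, target, num_classes):
    """Per-class F1 from flat lists of integer class labels."""
    scores = []
    for c in range(num_classes):
        tp = sum(p == c and t == c for p, t in zip(preds, target))
        fp = sum(p == c and t != c for p, t in zip(preds, target))
        fn = sum(p != c and t == c for p, t in zip(preds, target))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

preds  = [0, 0, 0, 0, 1, 2]
target = [0, 0, 0, 1, 1, 2]

per_class = f1_per_class(preds, target, 3)
macro_f1 = sum(per_class) / len(per_class)  # unweighted mean over classes
micro_f1 = sum(p == t for p, t in zip(preds, target)) / len(preds)  # == accuracy
```

Here class 0 dominates, so the macro score (≈0.84) is pulled down by the imperfect minority classes while the micro score is simply 5/6 correct pixels.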
[18]:
# Define F1 score parameters and classes
num_classes = out_channels
F1_eval_macro = F1Score(task='multiclass',
num_classes=num_classes,
average='macro',
mdmc_average='global')
F1_eval_micro = F1Score(task='multiclass',
num_classes=num_classes,
average='micro',
mdmc_average='global')
# preallocate
microF1_tunet = 0
microF1_tunet3plus = 0
microF1_msdnet = 0
macroF1_tunet = 0
macroF1_tunet3plus = 0
macroF1_msdnet = 0
counter = 0
# Number of testing predictions to display
num_images = 20
num_images = np.min((num_images, batch_size_test))
device = "cpu"
for batch in test_loader:
    with torch.no_grad():
        #net.eval() # Bad... this ignores the batchnorm parameters
        noisy, target = batch
        # Necessary data recasting
        noisy = noisy.type(torch.FloatTensor)
        target = target.type(torch.IntTensor)
        noisy = noisy.to(device)
        target = target.to(device).squeeze(1)
        # Input passed through networks here
        output_tunet = tunet(noisy)
        output_tunet3plus = tunet3plus(noisy)
        output_msdnet = msdnet(noisy)
        # Individual outputs passed through argmax to get predictions
        preds_tunet = torch.argmax(output_tunet.cpu().data, dim=1)
        preds_tunet3plus = torch.argmax(output_tunet3plus.cpu().data, dim=1)
        preds_msdnet = torch.argmax(output_msdnet.cpu().data, dim=1)
        shrink = 0.7
        for j in range(num_images):
            print(f'Images for batch # {counter}, number {j}')
            plt.figure(figsize=(22, 5))
            # Display noisy input
            plt.subplot(151)
            plt.imshow(noisy.cpu()[j, 0, :, :].data)
            plt.colorbar(shrink=shrink)
            plt.title('Noisy')
            # Display TUNet predictions
            plt.subplot(152)
            plt.imshow(preds_tunet[j, ...])
            plt.colorbar(shrink=shrink)
            plt.clim(0, 4)
            plt.title('TUNet Prediction')
            # Display TUNet3+ predictions
            plt.subplot(153)
            plt.imshow(preds_tunet3plus[j, ...])
            plt.colorbar(shrink=shrink)
            plt.clim(0, 4)
            plt.title('TUNet3+ Prediction')
            # Display MSDNet predictions
            plt.subplot(154)
            plt.imshow(preds_msdnet[j, ...])
            plt.colorbar(shrink=shrink)
            plt.clim(0, 4)
            plt.title('MSDNet Prediction')
            # Display masks/ground truth
            plt.subplot(155)
            plt.imshow(target.cpu()[j, :, :].data)
            plt.colorbar(shrink=shrink)
            plt.clim(0, 4)
            plt.title('Mask')
            plt.rcParams.update({'font.size': 18})
            plt.tight_layout()
            plt.show()
        counter += 1
        # Track F1 scores for each network
        microF1_tunet += F1_eval_micro(preds_tunet.cpu(), target.cpu())
        macroF1_tunet += F1_eval_macro(preds_tunet.cpu(), target.cpu())
        microF1_tunet3plus += F1_eval_micro(preds_tunet3plus.cpu(), target.cpu())
        macroF1_tunet3plus += F1_eval_macro(preds_tunet3plus.cpu(), target.cpu())
        microF1_msdnet += F1_eval_micro(preds_msdnet.cpu(), target.cpu())
        macroF1_msdnet += F1_eval_macro(preds_msdnet.cpu(), target.cpu())
# clear out unnecessary variables from device (GPU) memory
torch.cuda.empty_cache()
Images for batch # 0, numbers 0 through 19
(each figure shows the noisy input alongside the TUNet, TUNet3+, and MSDNet predictions and the ground-truth mask)
[19]:
microF1_tunet = microF1_tunet / len(test_loader)
macroF1_tunet = macroF1_tunet / len(test_loader)
print('Metrics w.r.t. TUNet')
print("Number of parameters: ", helpers.count_parameters(tunet))
print('Micro F1 score is : ', microF1_tunet.item() )
print('Macro F1 score is : ', macroF1_tunet.item() )
print()
print()
microF1_tunet3plus = microF1_tunet3plus / len(test_loader)
macroF1_tunet3plus = macroF1_tunet3plus / len(test_loader)
print('Metrics w.r.t. TUNet3+')
print("Number of parameters: ", helpers.count_parameters(tunet3plus))
print('Micro F1 score is : ', microF1_tunet3plus.item())
print('Macro F1 score is : ', macroF1_tunet3plus.item())
print()
print()
microF1_msdnet = microF1_msdnet / len(test_loader)
macroF1_msdnet = macroF1_msdnet / len(test_loader)
print('Metrics w.r.t. MSDNet')
print("Number of parameters: ", helpers.count_parameters(msdnet))
print('Micro F1 score is : ', microF1_msdnet.item())
print('Macro F1 score is : ', macroF1_msdnet.item())
print()
print()
Metrics w.r.t. TUNet
Number of parameters: 483221
Micro F1 score is : 0.9873046875
Macro F1 score is : 0.9648054838180542
Metrics w.r.t. TUNet3+
Number of parameters: 437989
Micro F1 score is : 0.9853515625
Macro F1 score is : 0.9522019624710083
Metrics w.r.t. MSDNet
Number of parameters: 2480
Micro F1 score is : 0.9788411259651184
Macro F1 score is : 0.9051846265792847
Ensemble Learning with Randomized Sparse Mixed-Scale Networks¶
Authors: Eric Roberts and Petrus Zwart
E-mail: PHZwart@lbl.gov, EJRoberts@lbl.gov
This notebook highlights some basic functionality with the pyMSDtorch package.
We will train several different randomized sparse mixed-scale networks (SMSNets) to perform binary segmentation of retinal vessels on the Structured Analysis of the Retina (STARE) dataset.
After training, we combine the best-performing networks into a single estimator and return both the mean and standard deviation of the estimated class probabilities. We subsequently use conformal estimation to obtain calibrated conformal sets that are guaranteed to contain the right label with a user-specified probability.
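The combination step can be pictured as simple probability averaging: each member network produces per-pixel class probabilities, and the ensemble reports their mean (the prediction) and their standard deviation (a per-pixel spread usable as an uncertainty proxy). A minimal NumPy sketch with made-up logits, assuming nothing about pyMSDtorch's internal ensembling API:

```python
import numpy as np

def softmax(logits, axis=1):
    """Numerically stable softmax along the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# fake logits from 3 member networks: (batch, classes, height, width)
rng = np.random.default_rng(0)
member_logits = [rng.normal(size=(1, 2, 4, 4)) for _ in range(3)]

probs = np.stack([softmax(l) for l in member_logits])  # (members, N, C, H, W)
mean_p = probs.mean(axis=0)  # ensemble class probabilities
std_p = probs.std(axis=0)    # per-pixel disagreement between members
```

Pixels where std_p is large are those on which the members disagree, which is exactly where a calibrated conformal set is expected to grow.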
Imports and helper functions¶
[1]:
import numpy as np
import pandas as pd
import math
import torch
import torch.nn as nn
from torch.nn import functional
import torch.optim as optim
from torch.utils.data import TensorDataset
import torchvision
from torchvision import transforms
from pyMSDtorch.core import helpers
from pyMSDtorch.core import train_scripts
from pyMSDtorch.core.networks import SMSNet
from pyMSDtorch.core.networks import baggins
from pyMSDtorch.core.conformalize import conformalize_segmentation
from pyMSDtorch.viz_tools import plots
from pyMSDtorch.viz_tools import draw_sparse_network
import matplotlib.pyplot as plt
import pickle
import gc
import einops
import os
[2]:
# we need to unzip the images
import gzip, shutil, fnmatch
def gunzip(file_path, output_path):
    with gzip.open(file_path, "rb") as f_in, open(output_path, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(file_path)
def unzip_directory(directory):
    for root, dirs, files in os.walk(directory):
        for f in files:
            if fnmatch.fnmatch(f, "*.gz"):
                gunzip(os.path.join(root, f), os.path.join(root, f.replace(".gz", "")))
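A quick round-trip check of the gunzip helper (a self-contained copy, run in a temporary directory so nothing under /tmp/STARE_DATA is touched):

```python
import gzip
import os
import shutil
import tempfile

def gunzip(file_path, output_path):
    # same behaviour as the helper above: decompress, then delete the .gz file
    with gzip.open(file_path, "rb") as f_in, open(output_path, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(file_path)

with tempfile.TemporaryDirectory() as d:
    gz_path = os.path.join(d, "sample.txt.gz")
    with gzip.open(gz_path, "wb") as f:
        f.write(b"hello STARE")
    gunzip(gz_path, os.path.join(d, "sample.txt"))
    removed = not os.path.exists(gz_path)  # the compressed file is gone
    with open(os.path.join(d, "sample.txt"), "rb") as f:
        data = f.read()                    # the decompressed payload remains
```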
Download and view data¶
First, we need to download the STARE data, a dataset for semantic segmentation of retinal blood vessels that is commonly used as a benchmark.
All data will be stored in a freshly created directory, /tmp/STARE_DATA.
[3]:
import requests, tarfile
# make directories
path_to_data = "/tmp/"
if not os.path.isdir(path_to_data + 'STARE_DATA'):
    os.mkdir(path_to_data + 'STARE_DATA')
    os.mkdir(path_to_data + 'STARE_DATA/images')
    os.mkdir(path_to_data + 'STARE_DATA/labels')
# get the data first
url = 'https://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar'
r = requests.get(url, allow_redirects=True)
tmp = open(path_to_data+'STARE_DATA/stare-vessels.tar', 'wb').write(r.content)
my_tar = tarfile.open(path_to_data+'STARE_DATA/stare-vessels.tar')
my_tar.extractall(path_to_data+'STARE_DATA/images/')
my_tar.close()
unzip_directory(path_to_data+'STARE_DATA/images/')
# get the ah-labels
url = 'https://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar'
r = requests.get(url, allow_redirects=True)
tmp = open(path_to_data+'STARE_DATA/labels-ah.tar', 'wb').write(r.content)
my_tar = tarfile.open(path_to_data+'STARE_DATA/labels-ah.tar')
my_tar.extractall(path_to_data+'STARE_DATA/labels/')
my_tar.close()
unzip_directory(path_to_data+'STARE_DATA/labels/')
Transform data¶
Here we cast all images from NumPy arrays to PyTorch tensors and prepare the data for training.
[4]:
dataset = torchvision.datasets.ImageFolder(path_to_data+"STARE_DATA/", transform=transforms.ToTensor())
images = [np.array(dataset[i][0].permute(1,2,0)) for i in range(len(dataset)) if dataset[i][1] == 0]
images = torch.stack([torch.Tensor(image).permute(2, 0, 1) for image in images])
labels = torch.stack([dataset[i][0] for i in range(len(dataset)) if dataset[i][1] == 1])
labels = torch.sum(labels, dim=1)
labels = torch.unsqueeze(labels, 1)
labels = torch.where(labels != 0, 1, 0)
# crop so the spatial dimensions divide evenly when downsampling
images = images[:,:,:600,:]
labels = labels[:,:,:600,:]
downsample_factor=2
images = functional.interpolate(images,
size=(images.shape[-2]//downsample_factor,
images.shape[-1]//downsample_factor),
mode="bilinear")
labels = functional.interpolate(labels.type(torch.FloatTensor),
size=(labels.shape[-2]//downsample_factor,
labels.shape[-1]//downsample_factor),
mode="nearest")
all_ds = TensorDataset(images,labels)
test_ds = TensorDataset(images[0:2],labels[0:2].type(torch.LongTensor))
val_ds = TensorDataset(images[2:3],labels[2:3].type(torch.LongTensor))
train_ds = TensorDataset(images[3:],labels[3:].type(torch.LongTensor))
print("Size of train dataset:", len(train_ds))
print("Size of validation dataset:", len(val_ds))
print("Size of test dataset:", len(test_ds))
Size of train dataset: 17
Size of validation dataset: 1
Size of test dataset: 2
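Note that the labels above are resized with mode="nearest" while the images use "bilinear": nearest sampling keeps masks integer-valued, whereas averaging-style interpolation would produce fractional, meaningless class values. A small NumPy illustration of the difference at an exact 2× downsample:

```python
import numpy as np

# a 4x4 binary mask with a checkerboard pattern of two classes
mask = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)

# nearest-style downsampling by 2: pick one pixel per 2x2 block
nearest = mask[::2, ::2]

# averaging-style downsampling (what bilinear interpolation amounts
# to at an exact 2x reduction): mean of each 2x2 block
averaged = mask.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# nearest stays in {0, 1}; averaged blends the two classes into 0.5
```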
View data¶
[5]:
params = {}
params["title"]="Image and Labels"
img = images[0].permute(1,2,0)
lbl = labels[0][0]
fig = plots.plot_rgb_and_labels(img.numpy(), lbl.numpy(), params)
fig.update_layout(width=700)
#plt.tight_layout()
fig.show()
Dataloader class¶
We make liberal use of the PyTorch Dataloader class for easy handling and iterative loading of data into the networks and models.
With the chosen batch_size of 2, training requires roughly 4.5 to 6.0 GB of GPU memory. Please note that memory consumption is not static, as network connectivity/sparsity varies from network to network.
[7]:
# create data loaders
num_workers = 0
train_loader_params = {'batch_size': 2,
'shuffle': True,
'num_workers': num_workers,
'pin_memory':False,
'drop_last': False}
test_loader_params = {'batch_size': len(test_ds),
'shuffle': False,
'num_workers': num_workers,
'pin_memory':False,
'drop_last': False}
train_loader = torch.utils.data.DataLoader(train_ds, **train_loader_params)
val_loader = torch.utils.data.DataLoader(val_ds, **train_loader_params)
test_loader = torch.utils.data.DataLoader(test_ds, **test_loader_params)
print(train_ds.tensors[0].shape)
torch.Size([17, 3, 300, 350])
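With 17 training samples and a batch_size of 2, the drop_last flag decides what happens to the final odd-sized batch; the batch counts per epoch work out as:

```python
import math

n_samples, batch_size = 17, 2

# drop_last=False (used above): the last partial batch of 1 sample is kept
n_batches_keep = math.ceil(n_samples / batch_size)

# drop_last=True would silently discard that partial batch
n_batches_drop = n_samples // batch_size

print(n_batches_keep, n_batches_drop)  # 9 8
```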
Create random sparse networks¶
Define SMSNet (Sparse Mixed-Scale Network) architecture-governing hyperparameters here.
Specify hyperparameters¶
First, each random network will have the same number of layers/nodes. These hyperparameters dictate the layout, or topology, of all networks.
[8]:
in_channels = 3 # RGB input image
out_channels = 2 # binary output
# we will use 15 hidden layers (typical MSDNets are >50)
num_layers = 15
Next, the hyperparameters below govern the random network connectivity. Choices include:
alpha : modifies the distribution of consecutive connection lengths between network layers/nodes,
gamma : modifies the distribution of layer/node degrees,
IL : probability of a connection between the Input node and a Layer node,
IO : probability of a connection between the Input node and the Output node,
LO : probability of a connection between a Layer node and the Output node,
dilation_choices : set of possible dilations along each individual node connection
The specific parameters and what they do are described in detail in the documentation; the brief comments below give a more cursory explanation.
[9]:
# When alpha > 0, short-range skip connections are favoured
alpha = 0.50
# When gamma is 0, the degree of each node is chosen uniformly between min_k and max_k;
# in general, P(degree) \propto degree^-gamma
gamma = 0.0
# we can limit the maximum and minimum degree of our graph
max_k = 5
min_k = 3
# feature channel possibilities per edge
hidden_out_channels = [10]
# possible dilation choices
dilation_choices = [1,2,3,4,8,16]
# Here are some parameters that define how networks are drawn at random;
# the layer_probabilities dictionary defines the connections
layer_probabilities={'LL_alpha':alpha,
'LL_gamma': gamma,
'LL_max_degree':max_k,
'LL_min_degree':min_k,
'IL': 0.25,
'LO': 0.25,
'IO': True}
# if desired, one can introduce scale changes (down and upsample)
# a not-so-thorough look indicates that this isn't really super beneficial
# in the model systems we looked at
sizing_settings = {'stride_base':2, #better keep this at 2
'min_power': 0,
'max_power': 0}
# defines the type of network we want to build
network_type = "Classification"
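To make the role of gamma concrete, here is a hypothetical sampler (an illustration, not pyMSDtorch's actual code) that draws a node degree from P(degree) ∝ degree^(-gamma) restricted to [min_k, max_k]. With gamma = 0 all allowed degrees are equally likely; larger gamma increasingly favours low-degree nodes:

```python
import random

def sample_degree(min_k, max_k, gamma, rng):
    # hypothetical illustration: P(degree) proportional to degree^(-gamma)
    degrees = list(range(min_k, max_k + 1))
    weights = [d ** (-gamma) for d in degrees]
    return rng.choices(degrees, weights=weights)[0]

rng = random.Random(0)
uniform_samples = [sample_degree(3, 5, 0.0, rng) for _ in range(3000)]  # flat
skewed_samples = [sample_degree(3, 5, 8.0, rng) for _ in range(3000)]   # mostly degree 3
```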
Build networks and train¶
We specify the number of random networks to initialize and the number of epochs for which each is trained.
[10]:
# build the networks
nets = [] # we want to store a number of them
performance = [] # and keep track of how well they do
N_networks = 7 # number of random networks to create
epochs = 100 # set number of training epochs
Training loop¶
Now we cycle through each individual network and train.
[11]:
for ii in range(N_networks):
    torch.cuda.empty_cache()
    print("Network %i" % (ii + 1))
    net = SMSNet.random_SMS_network(in_channels=in_channels,
                                    out_channels=out_channels,
                                    in_shape=(300, 300),
                                    out_shape=(300, 300),
                                    sizing_settings=sizing_settings,
                                    layers=num_layers,
                                    dilation_choices=dilation_choices,
                                    hidden_out_channels=hidden_out_channels,
                                    layer_probabilities=layer_probabilities,
                                    network_type=network_type)
    # let's plot the network
    net_plot, dil_plot, chan_plot = draw_sparse_network.draw_network(net)
    plt.show()
    nets.append(net)
    print("Start training")
    pytorch_total_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
    print("Total number of refineable parameters: ", pytorch_total_params)
    weights = torch.tensor([1.0, 2.0]).to('cuda')
    criterion = nn.CrossEntropyLoss(weight=weights)  # For segmenting
    LEARNING_RATE = 1e-3
    optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)
    device = helpers.get_device()
    net = net.to(device)
    tmp = train_scripts.train_segmentation(net,
                                           train_loader,
                                           test_loader,
                                           epochs,
                                           criterion,
                                           optimizer,
                                           device,
                                           show=10)
    performance.append(tmp[1]["F1 validation macro"][tmp[1]["Best model index"]])
    net.save_network_parameters("stare_sms_%i.pt" % ii)
    net = net.cpu()
    plots.plot_training_results_segmentation(tmp[1]).show()
    # clear out unnecessary variables from device (GPU) memory after each network
    torch.cuda.empty_cache()
Network 1
(network graph, dilation, and channel plots shown here)
Start training
Total number of refineable parameters: 93142
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.5782e-01 | Validation Loss: 2.0355e-01
Micro Training F1: 0.9554 | Micro Validation F1: 0.9441
Macro Training F1: 0.8407 | Macro Validation F1: 0.7878
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.1057e-01 | Validation Loss: 1.4875e-01
Micro Training F1: 0.9674 | Micro Validation F1: 0.9567
Macro Training F1: 0.8885 | Macro Validation F1: 0.8471
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 9.3169e-02 | Validation Loss: 1.4880e-01
Micro Training F1: 0.9712 | Micro Validation F1: 0.9602
Macro Training F1: 0.9030 | Macro Validation F1: 0.8555
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 8.1003e-02 | Validation Loss: 1.4620e-01
Micro Training F1: 0.9741 | Micro Validation F1: 0.9606
Macro Training F1: 0.9105 | Macro Validation F1: 0.8579
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 7.1612e-02 | Validation Loss: 1.4713e-01
Micro Training F1: 0.9770 | Micro Validation F1: 0.9599
Macro Training F1: 0.9227 | Macro Validation F1: 0.8598
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 6.8519e-02 | Validation Loss: 1.5866e-01
Micro Training F1: 0.9783 | Micro Validation F1: 0.9578
Macro Training F1: 0.9265 | Macro Validation F1: 0.8530
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 6.0143e-02 | Validation Loss: 1.4853e-01
Micro Training F1: 0.9804 | Micro Validation F1: 0.9633
Macro Training F1: 0.9346 | Macro Validation F1: 0.8676
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 5.5752e-02 | Validation Loss: 1.7131e-01
Micro Training F1: 0.9813 | Micro Validation F1: 0.9622
Macro Training F1: 0.9369 | Macro Validation F1: 0.8583
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 4.9858e-02 | Validation Loss: 1.7565e-01
Micro Training F1: 0.9836 | Micro Validation F1: 0.9613
Macro Training F1: 0.9444 | Macro Validation F1: 0.8582
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 4.8863e-02 | Validation Loss: 1.7179e-01
Micro Training F1: 0.9839 | Micro Validation F1: 0.9622
Macro Training F1: 0.9454 | Macro Validation F1: 0.8634
Network 2
(network graph, dilation, and channel plots shown here)
Start training
Total number of refineable parameters: 96662
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.5478e-01 | Validation Loss: 1.9002e-01
Micro Training F1: 0.9563 | Micro Validation F1: 0.9434
Macro Training F1: 0.8478 | Macro Validation F1: 0.7989
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.1861e-01 | Validation Loss: 1.6209e-01
Micro Training F1: 0.9648 | Micro Validation F1: 0.9536
Macro Training F1: 0.8808 | Macro Validation F1: 0.8314
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 9.4062e-02 | Validation Loss: 1.5499e-01
Micro Training F1: 0.9715 | Micro Validation F1: 0.9544
Macro Training F1: 0.9026 | Macro Validation F1: 0.8386
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 8.0209e-02 | Validation Loss: 1.4227e-01
Micro Training F1: 0.9758 | Micro Validation F1: 0.9582
Macro Training F1: 0.9171 | Macro Validation F1: 0.8557
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 7.1240e-02 | Validation Loss: 1.5368e-01
Micro Training F1: 0.9772 | Micro Validation F1: 0.9604
Macro Training F1: 0.9234 | Macro Validation F1: 0.8528
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 6.2212e-02 | Validation Loss: 1.6170e-01
Micro Training F1: 0.9800 | Micro Validation F1: 0.9585
Macro Training F1: 0.9308 | Macro Validation F1: 0.8533
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 5.5389e-02 | Validation Loss: 1.7431e-01
Micro Training F1: 0.9816 | Micro Validation F1: 0.9566
Macro Training F1: 0.9377 | Macro Validation F1: 0.8497
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 5.1172e-02 | Validation Loss: 1.7527e-01
Micro Training F1: 0.9830 | Micro Validation F1: 0.9597
Macro Training F1: 0.9420 | Macro Validation F1: 0.8565
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 4.6638e-02 | Validation Loss: 1.8515e-01
Micro Training F1: 0.9848 | Micro Validation F1: 0.9610
Macro Training F1: 0.9484 | Macro Validation F1: 0.8591
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 4.2027e-02 | Validation Loss: 2.3446e-01
Micro Training F1: 0.9860 | Micro Validation F1: 0.9573
Macro Training F1: 0.9522 | Macro Validation F1: 0.8398
Network 3
(network graph, dilation, and channel plots shown here)
Start training
Total number of refineable parameters: 88842
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.5973e-01 | Validation Loss: 1.8774e-01
Micro Training F1: 0.9553 | Micro Validation F1: 0.9480
Macro Training F1: 0.8457 | Macro Validation F1: 0.8043
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.2663e-01 | Validation Loss: 1.6645e-01
Micro Training F1: 0.9623 | Micro Validation F1: 0.9534
Macro Training F1: 0.8723 | Macro Validation F1: 0.8274
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 9.6363e-02 | Validation Loss: 1.6022e-01
Micro Training F1: 0.9699 | Micro Validation F1: 0.9560
Macro Training F1: 0.8959 | Macro Validation F1: 0.8402
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 8.4787e-02 | Validation Loss: 1.4799e-01
Micro Training F1: 0.9736 | Micro Validation F1: 0.9578
Macro Training F1: 0.9102 | Macro Validation F1: 0.8507
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 7.9941e-02 | Validation Loss: 1.4533e-01
Micro Training F1: 0.9753 | Micro Validation F1: 0.9581
Macro Training F1: 0.9168 | Macro Validation F1: 0.8540
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 7.3292e-02 | Validation Loss: 1.4990e-01
Micro Training F1: 0.9766 | Micro Validation F1: 0.9586
Macro Training F1: 0.9206 | Macro Validation F1: 0.8555
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 6.6868e-02 | Validation Loss: 1.4465e-01
Micro Training F1: 0.9787 | Micro Validation F1: 0.9621
Macro Training F1: 0.9276 | Macro Validation F1: 0.8626
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 6.2944e-02 | Validation Loss: 1.4611e-01
Micro Training F1: 0.9798 | Micro Validation F1: 0.9587
Macro Training F1: 0.9312 | Macro Validation F1: 0.8598
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 5.6340e-02 | Validation Loss: 1.6274e-01
Micro Training F1: 0.9811 | Micro Validation F1: 0.9626
Macro Training F1: 0.9372 | Macro Validation F1: 0.8613
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 5.2697e-02 | Validation Loss: 1.6804e-01
Micro Training F1: 0.9828 | Micro Validation F1: 0.9612
Macro Training F1: 0.9417 | Macro Validation F1: 0.8585
Network 4
(network graph, dilation, and channel plots shown here)
Start training
Total number of refineable parameters: 87442
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.5691e-01 | Validation Loss: 1.7324e-01
Micro Training F1: 0.9563 | Micro Validation F1: 0.9515
Macro Training F1: 0.8480 | Macro Validation F1: 0.8234
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.2674e-01 | Validation Loss: 1.6609e-01
Micro Training F1: 0.9632 | Micro Validation F1: 0.9470
Macro Training F1: 0.8748 | Macro Validation F1: 0.8227
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 1.0880e-01 | Validation Loss: 1.5844e-01
Micro Training F1: 0.9671 | Micro Validation F1: 0.9527
Macro Training F1: 0.8849 | Macro Validation F1: 0.8374
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 9.2915e-02 | Validation Loss: 1.5283e-01
Micro Training F1: 0.9714 | Micro Validation F1: 0.9541
Macro Training F1: 0.9034 | Macro Validation F1: 0.8428
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 8.8904e-02 | Validation Loss: 1.4339e-01
Micro Training F1: 0.9726 | Micro Validation F1: 0.9580
Macro Training F1: 0.9079 | Macro Validation F1: 0.8541
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 8.1814e-02 | Validation Loss: 1.4044e-01
Micro Training F1: 0.9747 | Micro Validation F1: 0.9576
Macro Training F1: 0.9151 | Macro Validation F1: 0.8577
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 7.7082e-02 | Validation Loss: 1.3705e-01
Micro Training F1: 0.9753 | Micro Validation F1: 0.9604
Macro Training F1: 0.9172 | Macro Validation F1: 0.8644
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 7.6364e-02 | Validation Loss: 1.6010e-01
Micro Training F1: 0.9751 | Micro Validation F1: 0.9558
Macro Training F1: 0.9164 | Macro Validation F1: 0.8472
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 7.0999e-02 | Validation Loss: 1.4520e-01
Micro Training F1: 0.9772 | Micro Validation F1: 0.9601
Macro Training F1: 0.9224 | Macro Validation F1: 0.8615
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 6.6890e-02 | Validation Loss: 1.6420e-01
Micro Training F1: 0.9782 | Micro Validation F1: 0.9557
Macro Training F1: 0.9269 | Macro Validation F1: 0.8490
Network 5



Start training
Total number of refineable parameters: 73752
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.5689e-01 | Validation Loss: 1.9027e-01
Micro Training F1: 0.9574 | Micro Validation F1: 0.9404
Macro Training F1: 0.8445 | Macro Validation F1: 0.7981
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.1690e-01 | Validation Loss: 1.6492e-01
Micro Training F1: 0.9658 | Micro Validation F1: 0.9443
Macro Training F1: 0.8815 | Macro Validation F1: 0.8215
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 9.8796e-02 | Validation Loss: 1.5999e-01
Micro Training F1: 0.9713 | Micro Validation F1: 0.9489
Macro Training F1: 0.9003 | Macro Validation F1: 0.8333
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 8.6053e-02 | Validation Loss: 1.3982e-01
Micro Training F1: 0.9730 | Micro Validation F1: 0.9589
Macro Training F1: 0.9094 | Macro Validation F1: 0.8571
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 7.7568e-02 | Validation Loss: 1.5044e-01
Micro Training F1: 0.9746 | Micro Validation F1: 0.9587
Macro Training F1: 0.9152 | Macro Validation F1: 0.8539
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 7.0787e-02 | Validation Loss: 1.5336e-01
Micro Training F1: 0.9767 | Micro Validation F1: 0.9618
Macro Training F1: 0.9229 | Macro Validation F1: 0.8614
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 6.4428e-02 | Validation Loss: 1.4517e-01
Micro Training F1: 0.9787 | Micro Validation F1: 0.9623
Macro Training F1: 0.9288 | Macro Validation F1: 0.8672
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 6.0988e-02 | Validation Loss: 1.6924e-01
Micro Training F1: 0.9795 | Micro Validation F1: 0.9617
Macro Training F1: 0.9290 | Macro Validation F1: 0.8607
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 5.7649e-02 | Validation Loss: 1.6461e-01
Micro Training F1: 0.9809 | Micro Validation F1: 0.9606
Macro Training F1: 0.9357 | Macro Validation F1: 0.8611
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 5.5141e-02 | Validation Loss: 1.7404e-01
Micro Training F1: 0.9816 | Micro Validation F1: 0.9598
Macro Training F1: 0.9381 | Macro Validation F1: 0.8597
Network 6



Start training
Total number of refineable parameters: 82622
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.4776e-01 | Validation Loss: 1.8072e-01
Micro Training F1: 0.9581 | Micro Validation F1: 0.9484
Macro Training F1: 0.8570 | Macro Validation F1: 0.8110
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.0722e-01 | Validation Loss: 1.4885e-01
Micro Training F1: 0.9677 | Micro Validation F1: 0.9569
Macro Training F1: 0.8896 | Macro Validation F1: 0.8490
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 9.1875e-02 | Validation Loss: 1.4276e-01
Micro Training F1: 0.9722 | Micro Validation F1: 0.9589
Macro Training F1: 0.9052 | Macro Validation F1: 0.8535
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 7.7978e-02 | Validation Loss: 1.5321e-01
Micro Training F1: 0.9757 | Micro Validation F1: 0.9557
Macro Training F1: 0.9180 | Macro Validation F1: 0.8474
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 6.9256e-02 | Validation Loss: 1.4092e-01
Micro Training F1: 0.9780 | Micro Validation F1: 0.9623
Macro Training F1: 0.9243 | Macro Validation F1: 0.8643
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 6.1645e-02 | Validation Loss: 1.5423e-01
Micro Training F1: 0.9801 | Micro Validation F1: 0.9611
Macro Training F1: 0.9320 | Macro Validation F1: 0.8603
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 6.1683e-02 | Validation Loss: 1.6870e-01
Micro Training F1: 0.9797 | Micro Validation F1: 0.9608
Macro Training F1: 0.9323 | Macro Validation F1: 0.8531
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 5.1063e-02 | Validation Loss: 1.6402e-01
Micro Training F1: 0.9831 | Micro Validation F1: 0.9626
Macro Training F1: 0.9421 | Macro Validation F1: 0.8618
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 4.7829e-02 | Validation Loss: 1.7563e-01
Micro Training F1: 0.9839 | Micro Validation F1: 0.9623
Macro Training F1: 0.9455 | Macro Validation F1: 0.8608
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 4.5093e-02 | Validation Loss: 1.8906e-01
Micro Training F1: 0.9851 | Micro Validation F1: 0.9621
Macro Training F1: 0.9488 | Macro Validation F1: 0.8590
Network 7



Start training
Total number of refineable parameters: 80312
Epoch 10 of 100 | Learning rate 1.000e-03
Training Loss: 1.5356e-01 | Validation Loss: 1.8882e-01
Micro Training F1: 0.9584 | Micro Validation F1: 0.9444
Macro Training F1: 0.8506 | Macro Validation F1: 0.8037
Epoch 20 of 100 | Learning rate 1.000e-03
Training Loss: 1.1653e-01 | Validation Loss: 1.5179e-01
Micro Training F1: 0.9647 | Micro Validation F1: 0.9556
Macro Training F1: 0.8804 | Macro Validation F1: 0.8404
Epoch 30 of 100 | Learning rate 1.000e-03
Training Loss: 9.8063e-02 | Validation Loss: 1.4524e-01
Micro Training F1: 0.9696 | Micro Validation F1: 0.9575
Macro Training F1: 0.8976 | Macro Validation F1: 0.8513
Epoch 40 of 100 | Learning rate 1.000e-03
Training Loss: 9.0293e-02 | Validation Loss: 1.4438e-01
Micro Training F1: 0.9718 | Micro Validation F1: 0.9556
Macro Training F1: 0.9057 | Macro Validation F1: 0.8518
Epoch 50 of 100 | Learning rate 1.000e-03
Training Loss: 8.6558e-02 | Validation Loss: 1.3443e-01
Micro Training F1: 0.9733 | Micro Validation F1: 0.9601
Macro Training F1: 0.9085 | Macro Validation F1: 0.8629
Epoch 60 of 100 | Learning rate 1.000e-03
Training Loss: 7.6104e-02 | Validation Loss: 1.3575e-01
Micro Training F1: 0.9756 | Micro Validation F1: 0.9629
Macro Training F1: 0.9182 | Macro Validation F1: 0.8675
Epoch 70 of 100 | Learning rate 1.000e-03
Training Loss: 7.1601e-02 | Validation Loss: 1.4681e-01
Micro Training F1: 0.9768 | Micro Validation F1: 0.9589
Macro Training F1: 0.9217 | Macro Validation F1: 0.8588
Epoch 80 of 100 | Learning rate 1.000e-03
Training Loss: 6.3806e-02 | Validation Loss: 1.3554e-01
Micro Training F1: 0.9792 | Micro Validation F1: 0.9613
Macro Training F1: 0.9299 | Macro Validation F1: 0.8666
Epoch 90 of 100 | Learning rate 1.000e-03
Training Loss: 6.1994e-02 | Validation Loss: 1.3635e-01
Micro Training F1: 0.9796 | Micro Validation F1: 0.9636
Macro Training F1: 0.9311 | Macro Validation F1: 0.8716
Epoch 100 of 100 | Learning rate 1.000e-03
Training Loss: 5.8922e-02 | Validation Loss: 1.4224e-01
Micro Training F1: 0.9803 | Micro Validation F1: 0.9649
Macro Training F1: 0.9342 | Macro Validation F1: 0.8712
Network evaluation¶
Select networks based on their performance to build a conformal estimator.
pyMSDtorch conformal operations and documentation can be found in the pyMSDtorch/core/conformalize directory.
[12]:
sel = np.where(np.array(performance) > 0.78)[0]
these_nets = []
for ii in sel:
    these_nets.append(nets[ii])
bagged_model = baggins.model_baggin(these_nets, "classification", False)
conf_obj = conformalize_segmentation.build_conformalizer_classify(bagged_model,
                                                                  test_loader,
                                                                  alpha=0.10,
                                                                  missing_label=-1,
                                                                  device='cuda:0',
                                                                  norma=True)
Conformal estimation¶
In conformal estimation, we need to decide on a confidence level alpha. The parameter alpha can be changed if desired: the lower it gets, the more ‘noise’ is included in the conformal set. We will set this value at 10% for now, and choose to select all pixels that have a ‘vein’ classification in their set as possible ‘vein’ pixels.
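The mechanics behind the conformalizer can be illustrated with a generic split-conformal sketch (hypothetical helper names, not the pyMSDtorch implementation): calibrate a score threshold on held-out softmax probabilities so that, with probability roughly 1 - alpha, the true label lands in the predicted label set.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.10):
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    # Quantile with the standard finite-sample correction
    # (assumes alpha is not so small that the level exceeds 1).
    n = len(scores)
    return np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

def conformal_set(probs, q):
    # Keep every class whose nonconformity score 1 - p is below the threshold.
    return probs >= 1.0 - q

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)   # toy calibration softmax outputs
cal_labels = rng.integers(0, 3, size=200)
q = calibrate_threshold(cal_probs, cal_labels, alpha=0.10)
sets = conformal_set(cal_probs, q)                # boolean array, (n_samples, n_classes)
```

Lowering alpha raises the threshold, so more classes (including noisy ones) enter each set.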
[13]:
alpha = 0.10
conf_obj.recalibrate(alpha)
conformal_set = conf_obj(bagged_model(images[0:1]))
possible_veins = conformalize_segmentation.has_label_in_set(conformal_set,1)
mean_p, std_p = bagged_model(images[0:1], 'cuda:0', True)
View results¶
[14]:
params = {}
params["title"]="Image and Labels - Ground Truth"
img = images[0].permute(1,2,0).numpy()
lbl = labels[0,0].numpy()
fig = plots.plot_rgb_and_labels(img, lbl, params)
fig.update_layout(width=700)
fig.show()
params["title"]="Image and Labels - Estimated labels (conformal alpha = %3.2f )"%alpha
img = images[0].permute(1,2,0).numpy()
lbl = labels[0,0].numpy()
fig = plots.plot_rgb_and_labels(img, possible_veins.numpy()[0], params)
fig.update_layout(width=700)
fig.show()
params["title"]="Image and Class Probability Map"
img = images[0].permute(1,2,0).numpy()
lbl = labels[0,0].numpy()
fig = plots.plot_rgb_and_labels(img, mean_p.numpy()[0,1], params)
fig.update_layout(width=700)
fig.show()
params["title"]="Image and Uncertainty of Estimated labels"
img = images[0].permute(1,2,0).numpy()
lbl = labels[0,0].numpy()
fig = plots.plot_rgb_and_labels(img, std_p.numpy()[0,1], params)
fig.update_layout(width=700)
fig.show()
[15]:
F1_score_labels = train_scripts.segmentation_metrics(mean_p, labels[0:1,0,...].type(torch.LongTensor))
print( "Micro F1: %5.4f"%F1_score_labels[0].item())
print( "Macro F1: %5.4f"%F1_score_labels[1].item())
Micro F1: 0.9656
Macro F1: 0.8810
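The gap between the two scores above comes from how they average: micro F1 aggregates counts over all pixels (for single-label data it equals accuracy), while macro F1 averages per-class scores, so a poorly segmented rare class drags it down. A minimal illustration (not the train_scripts implementation):

```python
import numpy as np

def f1_per_class(y_true, y_pred, n_classes):
    # Per-class F1 from true-positive / false-positive / false-negative counts.
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return np.array(f1s)

# Toy imbalanced labeling: class 1 is rare (think 'vein' pixels).
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.array([0] * 95 + [1] * 5)            # misses half the rare class

macro = f1_per_class(y_true, y_pred, 2).mean()   # both classes weighted equally
micro = (y_true == y_pred).mean()                # single-label micro F1 == accuracy
print(f"micro={micro:.2f} macro={macro:.2f}")    # micro=0.95 macro=0.82
```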
Latent Space Exploration with UMap and Randomized Sparse Mixed Scale Autoencoders¶
Authors: Eric Roberts and Petrus Zwart
E-mail: PHZwart@lbl.gov, EJRoberts@lbl.gov ___
This notebook highlights some basic functionality of the pyMSDtorch package.
We will set up randomly connected, sparse mixed-scale inspired autoencoders for unsupervised learning, with the goal of exploring the latent space they generate. These autoencoders deploy random sparsely connected convolutions and random downsampling/upsampling operations (maxpooling/transposed convolutions) for compressing/expanding data in the encoder/decoder halves. This random layout supplants the structured order of typical autoencoders, which consist of downsampling/upsampling operations following dual convolutions.
Like the preceding sparse mixed-scale networks (SMSNets), there are a number of hyperparameters to tweak that control the number of learnable parameters these sparsely connected networks contain. This type of control can be beneficial when the amount of data on which one can train a network is not very voluminous, as it allows for better handles on overfitting. ___
[1]:
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from pyMSDtorch.core import helpers
from pyMSDtorch.core import train_scripts
from pyMSDtorch.core.networks import SparseNet
from pyMSDtorch.test_data.twoD import random_shapes
from pyMSDtorch.core.utils import latent_space_viewer
from pyMSDtorch.viz_tools import plots, draw_sparse_network
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader, TensorDataset
import einops
import umap
Create Data¶
Using our pyMSDtorch in-house data generator, we produce a number of noisy “shapes” images consisting of single triangles, rectangles, circles, and donuts/annuli, each assigned a different class. In addition to augmentation with random orientations and sizes, each raw ground-truth image is bundled with its corresponding noisy image and binary mask.
Parameters to toggle:¶
N_train – number of ground truth/noisy/label image bundles to generate for training
N_test – number of ground truth/noisy/label image bundles to generate for testing
noise_level – per-pixel noise drawn from a continuous uniform distribution (cut off above at 1)
Nxy – size of individual images
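As a rough sketch of the corruption noise_level describes (the internals of random_shapes may differ), additive per-pixel uniform noise is applied and the result is clipped at 1:

```python
import numpy as np

rng = np.random.default_rng(42)
ground_truth = np.zeros((32, 32))
ground_truth[8:24, 8:24] = 1.0                   # a toy "shape"

noise_level = 0.50
# Per-pixel draw from a continuous uniform distribution on [0, noise_level).
noisy = ground_truth + rng.uniform(0.0, noise_level, size=ground_truth.shape)
noisy = np.clip(noisy, 0.0, 1.0)                 # cut off above at 1
```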
[2]:
N_train = 500
N_test = 15000
noise_level = 0.50
Nxy = 32
train_data = random_shapes.build_random_shape_set_numpy(n_imgs=N_train,
                                                        noise_level=noise_level,
                                                        n_xy=Nxy)
test_data = random_shapes.build_random_shape_set_numpy(n_imgs=N_test,
                                                       noise_level=noise_level,
                                                       n_xy=Nxy)
test_GT = torch.Tensor(test_data["GroundTruth"]).unsqueeze(1)
View shapes data¶
[3]:
plots.plot_shapes_data_numpy(train_data)
Dataloader class¶
Here we cast all images from numpy arrays to PyTorch tensors and wrap them in the PyTorch DataLoader class for easy handling and iterative loading of data into the networks and models.
[4]:
which_one = "Noisy" #"GroundTruth"
batch_size = 100
loader_params = {'batch_size': batch_size,
                 'shuffle': True}
Ttrain_data = TensorDataset(torch.Tensor(train_data[which_one]).unsqueeze(1))
train_loader = DataLoader(Ttrain_data, **loader_params)
loader_params = {'batch_size': batch_size,
                 'shuffle': False}
Ttest_data = TensorDataset(torch.Tensor(test_data[which_one][0:N_train]).unsqueeze(1))
test_loader = DataLoader(Ttest_data, **loader_params)
Tdemo_data = TensorDataset(torch.Tensor(test_data[which_one]).unsqueeze(1))
demo_loader = DataLoader(Tdemo_data, **loader_params)
Build Autoencoder¶
There are a number of parameters to play with that impact the size of the network:
latent_shape: the spatial footprint of the image in latent space. I don’t recommend going below 4x4, because it interferes with the dilation choices. This is a bit of a bug we need to fix; it’s on the list.
out_channels: the number of channels of the latent image. Determines the dimension of latent space: (channels,latent_shape[-2], latent_shape[-1])
depth: the depth of the random sparse convolutional encoder / decoder
hidden channels: The number of channels put out per convolution.
max_degree / min_degree : This determines how many connections you have per node.
Other parameters do not impact the size of the network dramatically / at all:
in_shape: determined by the input shape of the image.
dilations: the maximum dilation should not exceed the smallest image dimension.
alpha_range: determines the type of graphs (wide vs skinny). When alpha is large, the chances of skinny graphs being generated increase. We don’t know which parameter choice is best, so we randomize its choice.
gamma_range: no effect unless max_degree and min_degree are far apart. We don’t know which parameter choice is best, so we randomize its choice.
pIL, pLO, IO: keep as is.
stride_base: make sure your latent image size can be generated from the in_shape by repeated division by this number.
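The stride_base constraint can be checked with a small helper (hypothetical, not part of pyMSDtorch): with stride_base=2, an in_shape of (32, 32) reaches a latent_shape of (4, 4) via three successive halvings, so the sizes are compatible.

```python
def n_downsamples(in_size, latent_size, stride_base=2):
    """Count the divisions by stride_base needed to shrink in_size to latent_size."""
    steps, size = 0, in_size
    while size > latent_size:
        if size % stride_base != 0:
            raise ValueError(f"{in_size} is not reducible to {latent_size} "
                             f"by repeated division by {stride_base}")
        size //= stride_base
        steps += 1
    if size != latent_size:
        raise ValueError(f"{in_size} is not reducible to {latent_size} "
                         f"by repeated division by {stride_base}")
    return steps

print(n_downsamples(32, 4))   # three downsampling stages between in_shape and latent_shape
```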
[19]:
autoencoder = SparseNet.SparseAutoEncoder(in_shape=(32, 32),
                                          latent_shape=(4, 4),
                                          depth=20,
                                          dilations=[1, 2, 3],
                                          hidden_channels=4,
                                          out_channels=1,
                                          alpha_range=(0.05, 0.25),
                                          gamma_range=(0.0, 0.5),
                                          max_degree=10, min_degree=4,
                                          pIL=0.15,
                                          pLO=0.15,
                                          IO=False,
                                          stride_base=2)
pytorch_total_params = helpers.count_parameters(autoencoder)
print( "Number of parameters:", pytorch_total_params)
Number of parameters: 89451
We visualize the layout of connections in the encoder half of the Autoencoder, the first half responsible for the lower-dimensional compression of the data in the latent space.
[20]:
ne,de,ce = draw_sparse_network.draw_network(autoencoder.encode)



Now we visualize the connections comprising the decoder half of the autoencoder. This second half is responsible for reconstructing the original input image from the compressed information in the latent space.
[21]:
nd,dd,cd = draw_sparse_network.draw_network(autoencoder.decode)



Training the Autoencoder¶
Training hyperparameters are specified.
[22]:
torch.cuda.empty_cache()
learning_rate = 1e-3
num_epochs=25
criterion = nn.L1Loss()
optimizer = optim.Adam(autoencoder.parameters(), lr=learning_rate)
rv = train_scripts.train_autoencoder(net=autoencoder.to('cuda:0'),
                                     trainloader=train_loader,
                                     validationloader=test_loader,
                                     NUM_EPOCHS=num_epochs,
                                     criterion=criterion,
                                     optimizer=optimizer,
                                     device="cuda:0", show=1)
print("Best Performance:", rv[1]["CC validation"][rv[1]['Best model index']])
Epoch 1 of 25 | Learning rate 1.000e-03
Training Loss: 4.2694e-01 | Validation Loss: 3.8143e-01
Training CC: -0.0337 Validation CC : -0.0311
Epoch 2 of 25 | Learning rate 1.000e-03
Training Loss: 3.7770e-01 | Validation Loss: 3.4278e-01
Training CC: 0.0115 Validation CC : 0.0710
Epoch 3 of 25 | Learning rate 1.000e-03
Training Loss: 3.4378e-01 | Validation Loss: 3.1645e-01
Training CC: 0.1143 Validation CC : 0.1622
Epoch 4 of 25 | Learning rate 1.000e-03
Training Loss: 3.1984e-01 | Validation Loss: 2.9729e-01
Training CC: 0.2002 Validation CC : 0.2451
Epoch 5 of 25 | Learning rate 1.000e-03
Training Loss: 3.0146e-01 | Validation Loss: 2.8159e-01
Training CC: 0.2832 Validation CC : 0.3263
Epoch 6 of 25 | Learning rate 1.000e-03
Training Loss: 2.8595e-01 | Validation Loss: 2.6843e-01
Training CC: 0.3553 Validation CC : 0.3836
Epoch 7 of 25 | Learning rate 1.000e-03
Training Loss: 2.7259e-01 | Validation Loss: 2.5695e-01
Training CC: 0.4126 Validation CC : 0.4439
Epoch 8 of 25 | Learning rate 1.000e-03
Training Loss: 2.6081e-01 | Validation Loss: 2.4615e-01
Training CC: 0.4717 Validation CC : 0.4990
Epoch 9 of 25 | Learning rate 1.000e-03
Training Loss: 2.4975e-01 | Validation Loss: 2.3617e-01
Training CC: 0.5263 Validation CC : 0.5514
Epoch 10 of 25 | Learning rate 1.000e-03
Training Loss: 2.3939e-01 | Validation Loss: 2.2670e-01
Training CC: 0.5776 Validation CC : 0.5999
Epoch 11 of 25 | Learning rate 1.000e-03
Training Loss: 2.2954e-01 | Validation Loss: 2.1779e-01
Training CC: 0.6233 Validation CC : 0.6419
Epoch 12 of 25 | Learning rate 1.000e-03
Training Loss: 2.1994e-01 | Validation Loss: 2.0904e-01
Training CC: 0.6645 Validation CC : 0.6801
Epoch 13 of 25 | Learning rate 1.000e-03
Training Loss: 2.1043e-01 | Validation Loss: 2.0099e-01
Training CC: 0.7017 Validation CC : 0.7139
Epoch 14 of 25 | Learning rate 1.000e-03
Training Loss: 2.0229e-01 | Validation Loss: 1.9438e-01
Training CC: 0.7325 Validation CC : 0.7398
Epoch 15 of 25 | Learning rate 1.000e-03
Training Loss: 1.9486e-01 | Validation Loss: 1.8969e-01
Training CC: 0.7568 Validation CC : 0.7586
Epoch 16 of 25 | Learning rate 1.000e-03
Training Loss: 1.8991e-01 | Validation Loss: 1.8624e-01
Training CC: 0.7744 Validation CC : 0.7740
Epoch 17 of 25 | Learning rate 1.000e-03
Training Loss: 1.8582e-01 | Validation Loss: 1.8289e-01
Training CC: 0.7885 Validation CC : 0.7875
Epoch 18 of 25 | Learning rate 1.000e-03
Training Loss: 1.8229e-01 | Validation Loss: 1.7995e-01
Training CC: 0.8005 Validation CC : 0.7982
Epoch 19 of 25 | Learning rate 1.000e-03
Training Loss: 1.7943e-01 | Validation Loss: 1.7751e-01
Training CC: 0.8101 Validation CC : 0.8064
Epoch 20 of 25 | Learning rate 1.000e-03
Training Loss: 1.7690e-01 | Validation Loss: 1.7544e-01
Training CC: 0.8176 Validation CC : 0.8127
Epoch 21 of 25 | Learning rate 1.000e-03
Training Loss: 1.7452e-01 | Validation Loss: 1.7337e-01
Training CC: 0.8236 Validation CC : 0.8181
Epoch 22 of 25 | Learning rate 1.000e-03
Training Loss: 1.7302e-01 | Validation Loss: 1.7147e-01
Training CC: 0.8283 Validation CC : 0.8227
Epoch 23 of 25 | Learning rate 1.000e-03
Training Loss: 1.7083e-01 | Validation Loss: 1.6990e-01
Training CC: 0.8332 Validation CC : 0.8273
Epoch 24 of 25 | Learning rate 1.000e-03
Training Loss: 1.6946e-01 | Validation Loss: 1.6901e-01
Training CC: 0.8374 Validation CC : 0.8306
Epoch 25 of 25 | Learning rate 1.000e-03
Training Loss: 1.6766e-01 | Validation Loss: 1.6728e-01
Training CC: 0.8411 Validation CC : 0.8342
Best Performance: 0.8341745018959046
Latent space exploration¶
With the full SMSNet autoencoder trained, we pass new test data through the encoder half and apply Uniform Manifold Approximation and Projection (UMAP), a nonlinear dimensionality-reduction technique that leverages topological structure.
Test images previously unseen by the network are shown in their latent-space representation.
[23]:
results = []
latent = []
for batch in demo_loader:
    with torch.no_grad():
        res = autoencoder(batch[0].to("cuda:0"))
        lt = autoencoder.latent_vector(batch[0].to("cuda:0"))
    results.append(res.cpu())
    latent.append(lt.cpu())
results = torch.cat(results, dim=0)
latent = torch.cat(latent, dim=0)
[24]:
for ii, jj in zip(test_GT.numpy()[0:5], results.numpy()[0:5]):
    fig, axs = plt.subplots(1, 2)
    im00 = axs[0].imshow(ii[0, ...])
    im01 = axs[1].imshow(jj[0])
    plt.colorbar(im00, ax=axs[0], shrink=0.45)
    plt.colorbar(im01, ax=axs[1], shrink=0.45)
    plt.show()
    print("-----------------")

-----------------

-----------------

-----------------

-----------------

-----------------
The autoencoder latent space is further reduced to two dimensions; i.e., each image passed through the encoder is represented by a pair of coordinates below, with blue representing all rectangles, orange all circles/discs, green all triangles, and red all annuli.
[25]:
umapper = umap.UMAP(min_dist=0, n_neighbors=35)
X = umapper.fit_transform(latent.numpy())
[26]:
these_labels = test_data["Label"]
plt.figure(figsize=(8, 8))
for lbl in [1, 2, 3, 4]:
    sel = these_labels == lbl
    plt.plot(X[sel, 0], X[sel, 1], '.', markersize=2)
plt.legend(["Rectangles", "Discs", "Triangles", "Annuli"])
plt.show()

Below, we simply average all nearest neighbors for visualization purposes.
[27]:
fig = latent_space_viewer.build_latent_space_image_viewer(test_data["GroundTruth"],
                                                          X,
                                                          n_bins=50,
                                                          min_count=1,
                                                          max_count=1,
                                                          mode="mean")

Latent Space Exploration with Randomized Sparse Mixed Scale Autoencoders, regularized by the availability of image labels¶
Authors: Eric Roberts and Petrus Zwart
E-mail: PHZwart@lbl.gov, EJRoberts@lbl.gov ___
This notebook highlights some basic functionality of the pyMSDtorch package.
In this notebook we set up autoencoders with the goal of exploring the latent space they generate. In this case, however, we guide the formation of the latent space by including labels for specific images.
The autoencoders we use are based on randomly constructed convolutional neural networks in which we can control the number of parameters they contain. This type of control can be beneficial when the amount of data on which one can train a network is not very voluminous, as it allows for better handles on overfitting.
The constructed latent space can be used for unsupervised and supervised exploration methods. In our limited experience, the classifiers that come out of the training are reasonable, but can be improved upon using classic classification methods, as shown further on.
[1]:
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from pyMSDtorch.core import helpers
from pyMSDtorch.core import train_scripts
from pyMSDtorch.core.networks import SparseNet
from pyMSDtorch.test_data.twoD import random_shapes
from pyMSDtorch.core.utils import latent_space_viewer
from pyMSDtorch.viz_tools import plots
from pyMSDtorch.viz_tools import plot_autoencoder_image_classification as paic
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader, TensorDataset
import einops
import umap
Create some data first
[2]:
N_train = 500
N_labeled = 100
N_test = 500
noise_level = 0.150
Nxy = 32
train_data = random_shapes.build_random_shape_set_numpy(n_imgs=N_train,
                                                        noise_level=noise_level,
                                                        n_xy=Nxy)
test_data = random_shapes.build_random_shape_set_numpy(n_imgs=N_test,
                                                       noise_level=noise_level,
                                                       n_xy=Nxy)
[3]:
plots.plot_shapes_data_numpy(train_data)
[4]:
which_one = "Noisy" #"GroundTruth"
batch_size = 100
loader_params = {'batch_size': batch_size,
                 'shuffle': True}
train_imgs = torch.Tensor(train_data[which_one]).unsqueeze(1)
train_labels = torch.Tensor(train_data["Label"]).unsqueeze(1)-1
train_labels[N_labeled:]=-1 # remove some labels to highlight 'mixed' training
Ttrain_data = TensorDataset(train_imgs,train_labels)
train_loader = DataLoader(Ttrain_data, **loader_params)
loader_params = {'batch_size': batch_size,
                 'shuffle': False}
test_images = torch.Tensor(test_data[which_one]).unsqueeze(1)
test_labels = torch.Tensor(test_data["Label"]).unsqueeze(1)-1
Ttest_data = TensorDataset( test_images, test_labels )
test_loader = DataLoader(Ttest_data, **loader_params)
Let’s build an autoencoder first.
There are a number of parameters to play with that impact the size of the network:
- latent_shape: the spatial footprint of the image in latent space.
  I don't recommend going below 4x4, because it interferes with the
  dilation choices. This is a bit of an annoying feature; we need to fix that.
  It's on the list.
- out_channels: the number of channels of the latent image. Determines the
  dimension of latent space: (channels, latent_shape[-2], latent_shape[-1])
- depth: the depth of the random sparse convolutional encoder / decoder
- hidden_channels: the number of channels put out per convolution.
- max_degree / min_degree: this determines how many connections you have per node.
Other parameters do not impact the size of the network dramatically / at all:
- in_shape: determined by the input shape of the image.
- dilations: the maximum dilation should not exceed the smallest image dimension.
- alpha_range: determines the type of graphs (wide vs skinny). When alpha is large,
  the chances of skinny graphs being generated increase.
  We don't know which parameter choice is best, so we randomize its choice.
- gamma_range: no effect unless max_degree and min_degree are far apart.
  We don't know which parameter choice is best, so we randomize its choice.
- pIL, pLO, IO: keep as is.
- stride_base: make sure your latent image size can be generated from the in_shape
  by repeated division by this number.
For the classification, specify the number of output classes. Here we work with 4 shapes, so we set it to 4. The dropout rate governs the dropout layers in the classifier part of the network and doesn’t affect the autoencoder part.
[5]:
autoencoder = SparseNet.SparseAEC(in_shape=(32, 32),
                                  latent_shape=(4, 4),
                                  out_classes=4,
                                  depth=40,
                                  dilations=[1, 2, 3],
                                  hidden_channels=3,
                                  out_channels=2,
                                  alpha_range=(0.5, 1.0),
                                  gamma_range=(0.0, 0.5),
                                  max_degree=10, min_degree=10,
                                  pIL=0.15,
                                  pLO=0.15,
                                  IO=False,
                                  stride_base=2,
                                  dropout_rate=0.05)
pytorch_total_params = helpers.count_parameters(autoencoder)
print("Number of parameters:", pytorch_total_params)
Number of parameters: 372778
We define two optimizers, one for autoencoding and one for classification. They are minimized consecutively instead of building a single weighted sum of targets; this avoids having to choose the right weight. The mini-epochs are the number of passes over the whole data set made to optimize a single target function. The autoencoder objective is optimized first.
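The consecutive scheme can be sketched in plain PyTorch with a toy network (this is an illustration, not the pyMSDtorch training loop): each objective gets its own optimizer and its own backward pass over the shared parameters.

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

class TinyAEC(nn.Module):
    # Toy stand-in for an autoencoder with a classification head.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(16, 4)
        self.decoder = nn.Linear(4, 16)
        self.classifier = nn.Linear(4, 3)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

net = TinyAEC()
opt_ae = optim.Adam(net.parameters(), lr=1e-3)
opt_cls = optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(8, 16)
y = torch.randint(0, 3, (8,))

# Mini-epoch on the autoencoding objective only.
recon, _ = net(x)
loss_ae = nn.MSELoss()(recon, x)
opt_ae.zero_grad()
loss_ae.backward()
opt_ae.step()

# Mini-epoch on the classification objective only; ignore_index=-1
# skips unlabeled samples.
_, logits = net(x)
loss_cls = nn.CrossEntropyLoss(ignore_index=-1)(logits, y)
opt_cls.zero_grad()
loss_cls.backward()
opt_cls.step()
```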
[6]:
torch.cuda.empty_cache()
learning_rate = 1e-3
num_epochs=50
criterion_AE = nn.MSELoss()
optimizer_AE = optim.Adam(autoencoder.parameters(), lr=learning_rate)
criterion_label = nn.CrossEntropyLoss(ignore_index=-1)
optimizer_label = optim.Adam(autoencoder.parameters(), lr=learning_rate)
rv = train_scripts.autoencode_and_classify_training(net=autoencoder.to('cuda:0'),
                                                    trainloader=train_loader,
                                                    validationloader=test_loader,
                                                    macro_epochs=num_epochs,
                                                    mini_epochs=5,
                                                    criteria_autoencode=criterion_AE,
                                                    minimizer_autoencode=optimizer_AE,
                                                    criteria_classify=criterion_label,
                                                    minimizer_classify=optimizer_label,
                                                    device="cuda:0",
                                                    show=1,
                                                    clip_value=100.0)
Epoch 1, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5833e-01 | Validation Loss : 1.9714e-01
Training CC : 0.1946 | Validation CC : 0.4514
** Classification Losses **
Training Loss : 1.4542e+00 | Validation Loss : 1.4837e+00
Training F1 Macro: 0.1412 | Validation F1 Macro : 0.1729
Training F1 Micro: 0.2659 | Validation F1 Micro : 0.2720
Epoch 1, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.7494e-01 | Validation Loss : 1.3841e-01
Training CC : 0.5357 | Validation CC : 0.6173
** Classification Losses **
Training Loss : 1.4567e+00 | Validation Loss : 1.5082e+00
Training F1 Macro: 0.1078 | Validation F1 Macro : 0.1527
Training F1 Micro: 0.2518 | Validation F1 Micro : 0.2580
Epoch 1, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.2557e-01 | Validation Loss : 1.0217e-01
Training CC : 0.6537 | Validation CC : 0.6885
** Classification Losses **
Training Loss : 1.5433e+00 | Validation Loss : 1.5206e+00
Training F1 Macro: 0.1070 | Validation F1 Macro : 0.1454
Training F1 Micro: 0.2443 | Validation F1 Micro : 0.2580
Epoch 1, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.3322e-02 | Validation Loss : 7.6507e-02
Training CC : 0.7179 | Validation CC : 0.7457
** Classification Losses **
Training Loss : 1.5297e+00 | Validation Loss : 1.5298e+00
Training F1 Macro: 0.1070 | Validation F1 Macro : 0.1338
Training F1 Micro: 0.2436 | Validation F1 Micro : 0.2480
Epoch 1, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.9479e-02 | Validation Loss : 5.7264e-02
Training CC : 0.7750 | Validation CC : 0.8002
** Classification Losses **
Training Loss : 1.5637e+00 | Validation Loss : 1.5268e+00
Training F1 Macro: 0.0887 | Validation F1 Macro : 0.1491
Training F1 Micro: 0.2143 | Validation F1 Micro : 0.2560
Epoch 2, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.9107e-02 | Validation Loss : 6.3066e-02
Training CC : 0.8004 | Validation CC : 0.7721
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.4098e+00 | Validation Loss : 1.2859e+00
Training F1 Macro: 0.1368 | Validation F1 Macro : 0.2642
Training F1 Micro: 0.2651 | Validation F1 Micro : 0.3360
Epoch 2, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.6844e-02 | Validation Loss : 7.0438e-02
Training CC : 0.7638 | Validation CC : 0.7362
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.1472e+00 | Validation Loss : 1.1916e+00
Training F1 Macro: 0.4144 | Validation F1 Macro : 0.3684
Training F1 Micro: 0.4584 | Validation F1 Micro : 0.4100
Epoch 2, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.1455e-02 | Validation Loss : 7.2273e-02
Training CC : 0.7415 | Validation CC : 0.7272
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0518e+00 | Validation Loss : 1.1435e+00
Training F1 Macro: 0.5795 | Validation F1 Macro : 0.4515
Training F1 Micro: 0.5803 | Validation F1 Micro : 0.4740
Epoch 2, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.3640e-02 | Validation Loss : 7.4967e-02
Training CC : 0.7305 | Validation CC : 0.7132
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.1016e-01 | Validation Loss : 1.0982e+00
Training F1 Macro: 0.6049 | Validation F1 Macro : 0.5148
Training F1 Micro: 0.6839 | Validation F1 Micro : 0.5260
Epoch 2, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.5467e-02 | Validation Loss : 7.6074e-02
Training CC : 0.7207 | Validation CC : 0.7064
** Classification Losses ** <---- Now Optimizing
Training Loss : 8.3661e-01 | Validation Loss : 1.0611e+00
Training F1 Macro: 0.7700 | Validation F1 Macro : 0.5408
Training F1 Micro: 0.7678 | Validation F1 Micro : 0.5460
Epoch 3, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.2492e-02 | Validation Loss : 4.8532e-02
Training CC : 0.7748 | Validation CC : 0.8198
** Classification Losses **
Training Loss : 8.0198e-01 | Validation Loss : 1.1293e+00
Training F1 Macro: 0.7082 | Validation F1 Macro : 0.4957
Training F1 Micro: 0.7186 | Validation F1 Micro : 0.5080
Epoch 3, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.3980e-02 | Validation Loss : 3.8473e-02
Training CC : 0.8411 | Validation CC : 0.8520
** Classification Losses **
Training Loss : 9.4166e-01 | Validation Loss : 1.1996e+00
Training F1 Macro: 0.5814 | Validation F1 Macro : 0.3834
Training F1 Micro: 0.6267 | Validation F1 Micro : 0.4200
Epoch 3, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5708e-02 | Validation Loss : 3.3619e-02
Training CC : 0.8654 | Validation CC : 0.8676
** Classification Losses **
Training Loss : 1.0400e+00 | Validation Loss : 1.2227e+00
Training F1 Macro: 0.4950 | Validation F1 Macro : 0.3565
Training F1 Micro: 0.5396 | Validation F1 Micro : 0.4020
Epoch 3, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1166e-02 | Validation Loss : 3.1066e-02
Training CC : 0.8803 | Validation CC : 0.8788
** Classification Losses **
Training Loss : 1.1217e+00 | Validation Loss : 1.2409e+00
Training F1 Macro: 0.4504 | Validation F1 Macro : 0.3344
Training F1 Micro: 0.4988 | Validation F1 Micro : 0.3880
Epoch 3, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8666e-02 | Validation Loss : 2.9292e-02
Training CC : 0.8904 | Validation CC : 0.8869
** Classification Losses **
Training Loss : 1.0885e+00 | Validation Loss : 1.2616e+00
Training F1 Macro: 0.4854 | Validation F1 Macro : 0.3249
Training F1 Micro: 0.5238 | Validation F1 Micro : 0.3840
Epoch 4, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7689e-02 | Validation Loss : 3.1067e-02
Training CC : 0.8944 | Validation CC : 0.8795
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.1196e+00 | Validation Loss : 1.1489e+00
Training F1 Macro: 0.4359 | Validation F1 Macro : 0.4688
Training F1 Micro: 0.5154 | Validation F1 Micro : 0.4900
Epoch 4, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1510e-02 | Validation Loss : 3.6835e-02
Training CC : 0.8790 | Validation CC : 0.8557
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.0130e-01 | Validation Loss : 1.0439e+00
Training F1 Macro: 0.7237 | Validation F1 Macro : 0.5512
Training F1 Micro: 0.7267 | Validation F1 Micro : 0.5600
Epoch 4, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6501e-02 | Validation Loss : 3.8769e-02
Training CC : 0.8585 | Validation CC : 0.8474
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.4278e-01 | Validation Loss : 1.0212e+00
Training F1 Macro: 0.8133 | Validation F1 Macro : 0.5642
Training F1 Micro: 0.8138 | Validation F1 Micro : 0.5640
Epoch 4, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.7923e-02 | Validation Loss : 3.9381e-02
Training CC : 0.8527 | Validation CC : 0.8445
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.4941e-01 | Validation Loss : 9.9847e-01
Training F1 Macro: 0.8295 | Validation F1 Macro : 0.5819
Training F1 Micro: 0.8402 | Validation F1 Micro : 0.5800
Epoch 4, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9013e-02 | Validation Loss : 4.0582e-02
Training CC : 0.8479 | Validation CC : 0.8393
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.9050e-01 | Validation Loss : 9.9734e-01
Training F1 Macro: 0.7857 | Validation F1 Macro : 0.5757
Training F1 Micro: 0.7938 | Validation F1 Micro : 0.5860
Epoch 5, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3743e-02 | Validation Loss : 3.1420e-02
Training CC : 0.8697 | Validation CC : 0.8789
** Classification Losses **
Training Loss : 6.3096e-01 | Validation Loss : 1.0156e+00
Training F1 Macro: 0.8209 | Validation F1 Macro : 0.5716
Training F1 Micro: 0.8429 | Validation F1 Micro : 0.5840
Epoch 5, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8137e-02 | Validation Loss : 2.8415e-02
Training CC : 0.8927 | Validation CC : 0.8895
** Classification Losses **
Training Loss : 6.2693e-01 | Validation Loss : 1.0242e+00
Training F1 Macro: 0.8346 | Validation F1 Macro : 0.5789
Training F1 Micro: 0.8430 | Validation F1 Micro : 0.5900
Epoch 5, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5760e-02 | Validation Loss : 2.6215e-02
Training CC : 0.9017 | Validation CC : 0.8982
** Classification Losses **
Training Loss : 7.1689e-01 | Validation Loss : 1.0248e+00
Training F1 Macro: 0.8085 | Validation F1 Macro : 0.5630
Training F1 Micro: 0.8148 | Validation F1 Micro : 0.5760
Epoch 5, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3933e-02 | Validation Loss : 2.4485e-02
Training CC : 0.9091 | Validation CC : 0.9046
** Classification Losses **
Training Loss : 7.3269e-01 | Validation Loss : 1.0355e+00
Training F1 Macro: 0.8173 | Validation F1 Macro : 0.5564
Training F1 Micro: 0.8179 | Validation F1 Micro : 0.5680
Epoch 5, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2497e-02 | Validation Loss : 2.3630e-02
Training CC : 0.9148 | Validation CC : 0.9083
** Classification Losses **
Training Loss : 7.4537e-01 | Validation Loss : 1.0341e+00
Training F1 Macro: 0.8470 | Validation F1 Macro : 0.5483
Training F1 Micro: 0.8414 | Validation F1 Micro : 0.5560
Epoch 6, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1940e-02 | Validation Loss : 2.4573e-02
Training CC : 0.9168 | Validation CC : 0.9045
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.0182e-01 | Validation Loss : 9.8734e-01
Training F1 Macro: 0.8343 | Validation F1 Macro : 0.5737
Training F1 Micro: 0.8300 | Validation F1 Micro : 0.5840
Epoch 6, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3738e-02 | Validation Loss : 2.6494e-02
Training CC : 0.9096 | Validation CC : 0.8966
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.7709e-01 | Validation Loss : 9.7654e-01
Training F1 Macro: 0.8604 | Validation F1 Macro : 0.5669
Training F1 Micro: 0.8608 | Validation F1 Micro : 0.5740
Epoch 6, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5391e-02 | Validation Loss : 2.7516e-02
Training CC : 0.9030 | Validation CC : 0.8923
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.5142e-01 | Validation Loss : 9.4373e-01
Training F1 Macro: 0.7920 | Validation F1 Macro : 0.6263
Training F1 Micro: 0.8294 | Validation F1 Micro : 0.6320
Epoch 6, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6273e-02 | Validation Loss : 2.8650e-02
Training CC : 0.8990 | Validation CC : 0.8876
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.3247e-01 | Validation Loss : 9.5497e-01
Training F1 Macro: 0.8407 | Validation F1 Macro : 0.6279
Training F1 Micro: 0.8674 | Validation F1 Micro : 0.6220
Epoch 6, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7523e-02 | Validation Loss : 2.9696e-02
Training CC : 0.8941 | Validation CC : 0.8833
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0847e-01 | Validation Loss : 9.5333e-01
Training F1 Macro: 0.9402 | Validation F1 Macro : 0.6296
Training F1 Micro: 0.9391 | Validation F1 Micro : 0.6200
Epoch 7, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5244e-02 | Validation Loss : 2.4720e-02
Training CC : 0.9038 | Validation CC : 0.9037
** Classification Losses **
Training Loss : 3.8517e-01 | Validation Loss : 9.4221e-01
Training F1 Macro: 0.9235 | Validation F1 Macro : 0.6441
Training F1 Micro: 0.9294 | Validation F1 Micro : 0.6440
Epoch 7, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2288e-02 | Validation Loss : 2.2985e-02
Training CC : 0.9157 | Validation CC : 0.9109
** Classification Losses **
Training Loss : 4.7860e-01 | Validation Loss : 9.3218e-01
Training F1 Macro: 0.8608 | Validation F1 Macro : 0.6367
Training F1 Micro: 0.9125 | Validation F1 Micro : 0.6400
Epoch 7, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.0640e-02 | Validation Loss : 2.2053e-02
Training CC : 0.9219 | Validation CC : 0.9149
** Classification Losses **
Training Loss : 4.6922e-01 | Validation Loss : 9.5766e-01
Training F1 Macro: 0.9025 | Validation F1 Macro : 0.5948
Training F1 Micro: 0.9113 | Validation F1 Micro : 0.6040
Epoch 7, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.9462e-02 | Validation Loss : 2.0969e-02
Training CC : 0.9267 | Validation CC : 0.9191
** Classification Losses **
Training Loss : 4.8103e-01 | Validation Loss : 9.5177e-01
Training F1 Macro: 0.8614 | Validation F1 Macro : 0.6067
Training F1 Micro: 0.8945 | Validation F1 Micro : 0.6160
Epoch 7, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.8389e-02 | Validation Loss : 2.0196e-02
Training CC : 0.9309 | Validation CC : 0.9225
** Classification Losses **
Training Loss : 5.3990e-01 | Validation Loss : 9.8757e-01
Training F1 Macro: 0.8741 | Validation F1 Macro : 0.6020
Training F1 Micro: 0.8875 | Validation F1 Micro : 0.6060
Epoch 8, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.7780e-02 | Validation Loss : 2.0791e-02
Training CC : 0.9331 | Validation CC : 0.9201
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8074e-01 | Validation Loss : 9.3555e-01
Training F1 Macro: 0.8894 | Validation F1 Macro : 0.6286
Training F1 Micro: 0.8711 | Validation F1 Micro : 0.6280
Epoch 8, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.8955e-02 | Validation Loss : 2.2192e-02
Training CC : 0.9284 | Validation CC : 0.9145
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1271e-01 | Validation Loss : 9.3977e-01
Training F1 Macro: 0.8991 | Validation F1 Macro : 0.6416
Training F1 Micro: 0.9057 | Validation F1 Micro : 0.6340
Epoch 8, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.0248e-02 | Validation Loss : 2.2897e-02
Training CC : 0.9233 | Validation CC : 0.9116
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6204e-01 | Validation Loss : 9.2149e-01
Training F1 Macro: 0.9219 | Validation F1 Macro : 0.6617
Training F1 Micro: 0.9205 | Validation F1 Micro : 0.6540
Epoch 8, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1133e-02 | Validation Loss : 2.3551e-02
Training CC : 0.9201 | Validation CC : 0.9089
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3710e-01 | Validation Loss : 9.3359e-01
Training F1 Macro: 0.9039 | Validation F1 Macro : 0.6330
Training F1 Micro: 0.9092 | Validation F1 Micro : 0.6280
Epoch 8, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2090e-02 | Validation Loss : 2.4345e-02
Training CC : 0.9166 | Validation CC : 0.9057
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.3895e-01 | Validation Loss : 9.5366e-01
Training F1 Macro: 0.9673 | Validation F1 Macro : 0.6222
Training F1 Micro: 0.9599 | Validation F1 Micro : 0.6180
Epoch 9, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.0796e-02 | Validation Loss : 2.1356e-02
Training CC : 0.9217 | Validation CC : 0.9177
** Classification Losses **
Training Loss : 2.6027e-01 | Validation Loss : 9.2510e-01
Training F1 Macro: 0.9396 | Validation F1 Macro : 0.6262
Training F1 Micro: 0.9318 | Validation F1 Micro : 0.6260
Epoch 9, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.8047e-02 | Validation Loss : 2.0061e-02
Training CC : 0.9319 | Validation CC : 0.9228
** Classification Losses **
Training Loss : 2.2495e-01 | Validation Loss : 9.0182e-01
Training F1 Macro: 0.9634 | Validation F1 Macro : 0.6524
Training F1 Micro: 0.9699 | Validation F1 Micro : 0.6520
Epoch 9, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.6831e-02 | Validation Loss : 1.9079e-02
Training CC : 0.9368 | Validation CC : 0.9269
** Classification Losses **
Training Loss : 2.6530e-01 | Validation Loss : 8.9400e-01
Training F1 Macro: 0.9469 | Validation F1 Macro : 0.6384
Training F1 Micro: 0.9423 | Validation F1 Micro : 0.6420
Epoch 9, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.5871e-02 | Validation Loss : 1.8142e-02
Training CC : 0.9404 | Validation CC : 0.9305
** Classification Losses **
Training Loss : 2.7760e-01 | Validation Loss : 9.0853e-01
Training F1 Macro: 0.9513 | Validation F1 Macro : 0.6179
Training F1 Micro: 0.9489 | Validation F1 Micro : 0.6200
Epoch 9, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.5164e-02 | Validation Loss : 1.7539e-02
Training CC : 0.9436 | Validation CC : 0.9331
** Classification Losses **
Training Loss : 3.1673e-01 | Validation Loss : 9.1956e-01
Training F1 Macro: 0.9232 | Validation F1 Macro : 0.6268
Training F1 Micro: 0.9280 | Validation F1 Micro : 0.6280
Epoch 10, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.4578e-02 | Validation Loss : 1.7977e-02
Training CC : 0.9456 | Validation CC : 0.9313
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.6310e-01 | Validation Loss : 9.4249e-01
Training F1 Macro: 0.9633 | Validation F1 Macro : 0.6326
Training F1 Micro: 0.9600 | Validation F1 Micro : 0.6280
Epoch 10, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.5383e-02 | Validation Loss : 1.8928e-02
Training CC : 0.9423 | Validation CC : 0.9275
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1046e-01 | Validation Loss : 9.0960e-01
Training F1 Macro: 0.8953 | Validation F1 Macro : 0.6527
Training F1 Micro: 0.9013 | Validation F1 Micro : 0.6480
Epoch 10, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.6955e-02 | Validation Loss : 1.9794e-02
Training CC : 0.9373 | Validation CC : 0.9240
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8785e-01 | Validation Loss : 9.0399e-01
Training F1 Macro: 0.9519 | Validation F1 Macro : 0.6345
Training F1 Micro: 0.9552 | Validation F1 Micro : 0.6380
Epoch 10, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.7438e-02 | Validation Loss : 2.0332e-02
Training CC : 0.9346 | Validation CC : 0.9218
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9113e-01 | Validation Loss : 9.2813e-01
Training F1 Macro: 0.9222 | Validation F1 Macro : 0.6069
Training F1 Micro: 0.9236 | Validation F1 Micro : 0.6060
Epoch 10, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.7993e-02 | Validation Loss : 2.0730e-02
Training CC : 0.9326 | Validation CC : 0.9202
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8799e-01 | Validation Loss : 9.0754e-01
Training F1 Macro: 0.9206 | Validation F1 Macro : 0.6161
Training F1 Micro: 0.9205 | Validation F1 Micro : 0.6220
Epoch 11, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.7403e-02 | Validation Loss : 1.8835e-02
Training CC : 0.9364 | Validation CC : 0.9279
** Classification Losses **
Training Loss : 2.6457e-01 | Validation Loss : 8.8831e-01
Training F1 Macro: 0.8917 | Validation F1 Macro : 0.6314
Training F1 Micro: 0.8893 | Validation F1 Micro : 0.6320
Epoch 11, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.5003e-02 | Validation Loss : 1.7768e-02
Training CC : 0.9438 | Validation CC : 0.9321
** Classification Losses **
Training Loss : 1.8082e-01 | Validation Loss : 8.8316e-01
Training F1 Macro: 0.9293 | Validation F1 Macro : 0.6331
Training F1 Micro: 0.9375 | Validation F1 Micro : 0.6380
Epoch 11, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.4159e-02 | Validation Loss : 1.6959e-02
Training CC : 0.9474 | Validation CC : 0.9354
** Classification Losses **
Training Loss : 2.0158e-01 | Validation Loss : 8.6954e-01
Training F1 Macro: 0.9375 | Validation F1 Macro : 0.6380
Training F1 Micro: 0.9474 | Validation F1 Micro : 0.6460
Epoch 11, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.3952e-02 | Validation Loss : 1.6245e-02
Training CC : 0.9492 | Validation CC : 0.9380
** Classification Losses **
Training Loss : 2.5369e-01 | Validation Loss : 8.8022e-01
Training F1 Macro: 0.8668 | Validation F1 Macro : 0.6351
Training F1 Micro: 0.8916 | Validation F1 Micro : 0.6380
Epoch 11, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.2904e-02 | Validation Loss : 1.5950e-02
Training CC : 0.9524 | Validation CC : 0.9396
** Classification Losses **
Training Loss : 1.8340e-01 | Validation Loss : 8.6461e-01
Training F1 Macro: 0.9492 | Validation F1 Macro : 0.6339
Training F1 Micro: 0.9501 | Validation F1 Micro : 0.6380
Epoch 12, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.2211e-02 | Validation Loss : 1.6111e-02
Training CC : 0.9546 | Validation CC : 0.9390
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.2418e-01 | Validation Loss : 8.9275e-01
Training F1 Macro: 0.8881 | Validation F1 Macro : 0.6358
Training F1 Micro: 0.9305 | Validation F1 Micro : 0.6340
Epoch 12, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.2899e-02 | Validation Loss : 1.6743e-02
Training CC : 0.9522 | Validation CC : 0.9365
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.0333e-01 | Validation Loss : 9.3081e-01
Training F1 Macro: 0.9174 | Validation F1 Macro : 0.6288
Training F1 Micro: 0.9166 | Validation F1 Micro : 0.6240
Epoch 12, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.3326e-02 | Validation Loss : 1.7252e-02
Training CC : 0.9503 | Validation CC : 0.9344
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8194e-01 | Validation Loss : 8.7050e-01
Training F1 Macro: 0.9065 | Validation F1 Macro : 0.6398
Training F1 Micro: 0.9100 | Validation F1 Micro : 0.6440
Epoch 12, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.4215e-02 | Validation Loss : 1.8197e-02
Training CC : 0.9469 | Validation CC : 0.9307
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9415e-01 | Validation Loss : 8.7652e-01
Training F1 Macro: 0.9131 | Validation F1 Macro : 0.6182
Training F1 Micro: 0.9005 | Validation F1 Micro : 0.6220
Epoch 12, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.5127e-02 | Validation Loss : 1.8726e-02
Training CC : 0.9434 | Validation CC : 0.9287
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.0680e-01 | Validation Loss : 8.7771e-01
Training F1 Macro: 0.8930 | Validation F1 Macro : 0.6297
Training F1 Micro: 0.9006 | Validation F1 Micro : 0.6300
Epoch 13, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.4305e-02 | Validation Loss : 1.6714e-02
Training CC : 0.9470 | Validation CC : 0.9365
** Classification Losses **
Training Loss : 2.1443e-01 | Validation Loss : 8.6231e-01
Training F1 Macro: 0.8839 | Validation F1 Macro : 0.6399
Training F1 Micro: 0.8928 | Validation F1 Micro : 0.6440
Epoch 13, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.3025e-02 | Validation Loss : 1.5881e-02
Training CC : 0.9522 | Validation CC : 0.9397
** Classification Losses **
Training Loss : 1.7527e-01 | Validation Loss : 8.4568e-01
Training F1 Macro: 0.9115 | Validation F1 Macro : 0.6464
Training F1 Micro: 0.9205 | Validation F1 Micro : 0.6500
Epoch 13, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.2185e-02 | Validation Loss : 1.5217e-02
Training CC : 0.9553 | Validation CC : 0.9422
** Classification Losses **
Training Loss : 1.6836e-01 | Validation Loss : 8.3084e-01
Training F1 Macro: 0.9136 | Validation F1 Macro : 0.6637
Training F1 Micro: 0.9199 | Validation F1 Micro : 0.6680
Epoch 13, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.1708e-02 | Validation Loss : 1.4811e-02
Training CC : 0.9574 | Validation CC : 0.9439
** Classification Losses **
Training Loss : 1.5179e-01 | Validation Loss : 8.5556e-01
Training F1 Macro: 0.9303 | Validation F1 Macro : 0.6488
Training F1 Micro: 0.9267 | Validation F1 Micro : 0.6520
Epoch 13, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.0926e-02 | Validation Loss : 1.4488e-02
Training CC : 0.9596 | Validation CC : 0.9454
** Classification Losses **
Training Loss : 1.6101e-01 | Validation Loss : 8.4275e-01
Training F1 Macro: 0.9315 | Validation F1 Macro : 0.6548
Training F1 Micro: 0.9523 | Validation F1 Micro : 0.6600
Epoch 14, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0402e-02 | Validation Loss : 1.4740e-02
Training CC : 0.9614 | Validation CC : 0.9444
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.7058e-01 | Validation Loss : 8.3775e-01
Training F1 Macro: 0.9381 | Validation F1 Macro : 0.6698
Training F1 Micro: 0.9402 | Validation F1 Micro : 0.6740
Epoch 14, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.1162e-02 | Validation Loss : 1.5246e-02
Training CC : 0.9591 | Validation CC : 0.9424
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9570e-01 | Validation Loss : 8.7155e-01
Training F1 Macro: 0.8464 | Validation F1 Macro : 0.6625
Training F1 Micro: 0.9012 | Validation F1 Micro : 0.6660
Epoch 14, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.1356e-02 | Validation Loss : 1.5513e-02
Training CC : 0.9579 | Validation CC : 0.9413
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8357e-01 | Validation Loss : 8.6651e-01
Training F1 Macro: 0.9284 | Validation F1 Macro : 0.6500
Training F1 Micro: 0.9274 | Validation F1 Micro : 0.6540
Epoch 14, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.1680e-02 | Validation Loss : 1.5805e-02
Training CC : 0.9568 | Validation CC : 0.9401
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9267e-01 | Validation Loss : 8.7959e-01
Training F1 Macro: 0.9022 | Validation F1 Macro : 0.6232
Training F1 Micro: 0.9061 | Validation F1 Micro : 0.6280
Epoch 14, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.1952e-02 | Validation Loss : 1.6067e-02
Training CC : 0.9556 | Validation CC : 0.9391
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.1783e-01 | Validation Loss : 8.5367e-01
Training F1 Macro: 0.9383 | Validation F1 Macro : 0.6733
Training F1 Micro: 0.9424 | Validation F1 Micro : 0.6780
Epoch 15, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.1371e-02 | Validation Loss : 1.5021e-02
Training CC : 0.9578 | Validation CC : 0.9432
** Classification Losses **
Training Loss : 2.6352e-01 | Validation Loss : 8.3245e-01
Training F1 Macro: 0.8697 | Validation F1 Macro : 0.6602
Training F1 Micro: 0.8753 | Validation F1 Micro : 0.6640
Epoch 15, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.0914e-02 | Validation Loss : 1.4526e-02
Training CC : 0.9600 | Validation CC : 0.9452
** Classification Losses **
Training Loss : 1.6287e-01 | Validation Loss : 8.3996e-01
Training F1 Macro: 0.9177 | Validation F1 Macro : 0.6691
Training F1 Micro: 0.9348 | Validation F1 Micro : 0.6720
Epoch 15, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.0366e-02 | Validation Loss : 1.4301e-02
Training CC : 0.9622 | Validation CC : 0.9460
** Classification Losses **
Training Loss : 1.4815e-01 | Validation Loss : 8.3877e-01
Training F1 Macro: 0.9482 | Validation F1 Macro : 0.6721
Training F1 Micro: 0.9478 | Validation F1 Micro : 0.6740
Epoch 15, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.8331e-03 | Validation Loss : 1.3929e-02
Training CC : 0.9638 | Validation CC : 0.9474
** Classification Losses **
Training Loss : 9.2873e-02 | Validation Loss : 8.5900e-01
Training F1 Macro: 0.9573 | Validation F1 Macro : 0.6487
Training F1 Micro: 0.9590 | Validation F1 Micro : 0.6540
Epoch 15, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.3855e-03 | Validation Loss : 1.3776e-02
Training CC : 0.9654 | Validation CC : 0.9482
** Classification Losses **
Training Loss : 1.3484e-01 | Validation Loss : 8.4049e-01
Training F1 Macro: 0.9553 | Validation F1 Macro : 0.6748
Training F1 Micro: 0.9591 | Validation F1 Micro : 0.6760
Epoch 16, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 9.3653e-03 | Validation Loss : 1.3777e-02
Training CC : 0.9657 | Validation CC : 0.9482
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3203e-01 | Validation Loss : 8.4416e-01
Training F1 Macro: 0.9486 | Validation F1 Macro : 0.6665
Training F1 Micro: 0.9518 | Validation F1 Micro : 0.6680
Epoch 16, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 9.3063e-03 | Validation Loss : 1.3871e-02
Training CC : 0.9657 | Validation CC : 0.9479
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8428e-01 | Validation Loss : 8.5315e-01
Training F1 Macro: 0.9073 | Validation F1 Macro : 0.6563
Training F1 Micro: 0.9105 | Validation F1 Micro : 0.6580
Epoch 16, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 9.3482e-03 | Validation Loss : 1.4009e-02
Training CC : 0.9654 | Validation CC : 0.9473
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.9010e-02 | Validation Loss : 8.7678e-01
Training F1 Macro: 0.9536 | Validation F1 Macro : 0.6413
Training F1 Micro: 0.9542 | Validation F1 Micro : 0.6440
Epoch 16, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 9.6327e-03 | Validation Loss : 1.4144e-02
Training CC : 0.9645 | Validation CC : 0.9468
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2682e-01 | Validation Loss : 9.0067e-01
Training F1 Macro: 0.9485 | Validation F1 Macro : 0.6325
Training F1 Micro: 0.9587 | Validation F1 Micro : 0.6340
Epoch 16, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 9.7647e-03 | Validation Loss : 1.4251e-02
Training CC : 0.9640 | Validation CC : 0.9463
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0845e-01 | Validation Loss : 8.6177e-01
Training F1 Macro: 0.9578 | Validation F1 Macro : 0.6502
Training F1 Micro: 0.9454 | Validation F1 Micro : 0.6540
Epoch 17, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.4497e-03 | Validation Loss : 1.3772e-02
Training CC : 0.9650 | Validation CC : 0.9481
** Classification Losses **
Training Loss : 1.6202e-01 | Validation Loss : 8.5356e-01
Training F1 Macro: 0.8868 | Validation F1 Macro : 0.6650
Training F1 Micro: 0.8992 | Validation F1 Micro : 0.6680
Epoch 17, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.2654e-03 | Validation Loss : 1.3456e-02
Training CC : 0.9662 | Validation CC : 0.9493
** Classification Losses **
Training Loss : 2.0396e-01 | Validation Loss : 8.3408e-01
Training F1 Macro: 0.9091 | Validation F1 Macro : 0.6467
Training F1 Micro: 0.9120 | Validation F1 Micro : 0.6500
Epoch 17, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 8.9745e-03 | Validation Loss : 1.3355e-02
Training CC : 0.9672 | Validation CC : 0.9497
** Classification Losses **
Training Loss : 1.3781e-01 | Validation Loss : 8.6047e-01
Training F1 Macro: 0.9214 | Validation F1 Macro : 0.6386
Training F1 Micro: 0.9383 | Validation F1 Micro : 0.6440
Epoch 17, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 8.8195e-03 | Validation Loss : 1.3154e-02
Training CC : 0.9679 | Validation CC : 0.9506
** Classification Losses **
Training Loss : 1.9023e-01 | Validation Loss : 8.4928e-01
Training F1 Macro: 0.8888 | Validation F1 Macro : 0.6667
Training F1 Micro: 0.9071 | Validation F1 Micro : 0.6680
Epoch 17, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 8.6157e-03 | Validation Loss : 1.3201e-02
Training CC : 0.9687 | Validation CC : 0.9505
** Classification Losses **
Training Loss : 1.4719e-01 | Validation Loss : 8.4225e-01
Training F1 Macro: 0.9113 | Validation F1 Macro : 0.6757
Training F1 Micro: 0.9200 | Validation F1 Micro : 0.6780
Epoch 18, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.3774e-03 | Validation Loss : 1.3181e-02
Training CC : 0.9693 | Validation CC : 0.9505
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.5271e-01 | Validation Loss : 8.3087e-01
Training F1 Macro: 0.9156 | Validation F1 Macro : 0.6696
Training F1 Micro: 0.9216 | Validation F1 Micro : 0.6720
Epoch 18, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.4135e-03 | Validation Loss : 1.3251e-02
Training CC : 0.9692 | Validation CC : 0.9503
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.7403e-01 | Validation Loss : 8.7227e-01
Training F1 Macro: 0.9213 | Validation F1 Macro : 0.6358
Training F1 Micro: 0.9188 | Validation F1 Micro : 0.6420
Epoch 18, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.5600e-03 | Validation Loss : 1.3404e-02
Training CC : 0.9687 | Validation CC : 0.9496
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.7112e-01 | Validation Loss : 8.8280e-01
Training F1 Macro: 0.9297 | Validation F1 Macro : 0.6452
Training F1 Micro: 0.9209 | Validation F1 Micro : 0.6460
Epoch 18, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 9.1894e-03 | Validation Loss : 1.3514e-02
Training CC : 0.9672 | Validation CC : 0.9492
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0314e-01 | Validation Loss : 8.9198e-01
Training F1 Macro: 0.9431 | Validation F1 Macro : 0.6361
Training F1 Micro: 0.9462 | Validation F1 Micro : 0.6360
Epoch 18, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.7022e-03 | Validation Loss : 1.3546e-02
Training CC : 0.9680 | Validation CC : 0.9491
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.5245e-01 | Validation Loss : 8.7273e-01
Training F1 Macro: 0.9139 | Validation F1 Macro : 0.6438
Training F1 Micro: 0.9120 | Validation F1 Micro : 0.6460
Epoch 19, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.6351e-03 | Validation Loss : 1.3262e-02
Training CC : 0.9662 | Validation CC : 0.9502
** Classification Losses **
Training Loss : 1.4997e-01 | Validation Loss : 8.7745e-01
Training F1 Macro: 0.8983 | Validation F1 Macro : 0.6264
Training F1 Micro: 0.9022 | Validation F1 Micro : 0.6260
Epoch 19, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 8.8020e-03 | Validation Loss : 1.2972e-02
Training CC : 0.9688 | Validation CC : 0.9513
** Classification Losses **
Training Loss : 1.2232e-01 | Validation Loss : 8.5785e-01
Training F1 Macro: 0.9535 | Validation F1 Macro : 0.6637
Training F1 Micro: 0.9592 | Validation F1 Micro : 0.6680
Epoch 19, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.5769e-03 | Validation Loss : 1.2744e-02
Training CC : 0.9670 | Validation CC : 0.9524
** Classification Losses **
Training Loss : 1.3234e-01 | Validation Loss : 8.5989e-01
Training F1 Macro: 0.9310 | Validation F1 Macro : 0.6593
Training F1 Micro: 0.9431 | Validation F1 Micro : 0.6600
Epoch 19, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.9503e-03 | Validation Loss : 1.2637e-02
Training CC : 0.9710 | Validation CC : 0.9524
** Classification Losses **
Training Loss : 1.4379e-01 | Validation Loss : 8.5471e-01
Training F1 Macro: 0.9125 | Validation F1 Macro : 0.6611
Training F1 Micro: 0.9323 | Validation F1 Micro : 0.6620
Epoch 19, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.7567e-03 | Validation Loss : 1.2581e-02
Training CC : 0.9718 | Validation CC : 0.9532
** Classification Losses **
Training Loss : 1.3960e-01 | Validation Loss : 8.4192e-01
Training F1 Macro: 0.9126 | Validation F1 Macro : 0.6726
Training F1 Micro: 0.9318 | Validation F1 Micro : 0.6800
Epoch 20, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.4657e-03 | Validation Loss : 1.2658e-02
Training CC : 0.9726 | Validation CC : 0.9530
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.5172e-01 | Validation Loss : 8.6882e-01
Training F1 Macro: 0.9454 | Validation F1 Macro : 0.6556
Training F1 Micro: 0.9467 | Validation F1 Micro : 0.6560
Epoch 20, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.8149e-03 | Validation Loss : 1.2747e-02
Training CC : 0.9718 | Validation CC : 0.9526
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2112e-01 | Validation Loss : 8.6369e-01
Training F1 Macro: 0.9362 | Validation F1 Macro : 0.6560
Training F1 Micro: 0.9397 | Validation F1 Micro : 0.6580
Epoch 20, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.7463e-03 | Validation Loss : 1.2798e-02
Training CC : 0.9718 | Validation CC : 0.9524
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.5293e-01 | Validation Loss : 8.5455e-01
Training F1 Macro: 0.9039 | Validation F1 Macro : 0.6618
Training F1 Micro: 0.9158 | Validation F1 Micro : 0.6640
Epoch 20, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.7043e-03 | Validation Loss : 1.2875e-02
Training CC : 0.9717 | Validation CC : 0.9521
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.6206e-01 | Validation Loss : 8.4736e-01
Training F1 Macro: 0.9281 | Validation F1 Macro : 0.6759
Training F1 Micro: 0.9342 | Validation F1 Micro : 0.6760
Epoch 20, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.2563e-03 | Validation Loss : 1.2997e-02
Training CC : 0.9705 | Validation CC : 0.9516
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.7141e-01 | Validation Loss : 8.5383e-01
Training F1 Macro: 0.9108 | Validation F1 Macro : 0.6694
Training F1 Micro: 0.9116 | Validation F1 Micro : 0.6700
Epoch 21, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.7102e-03 | Validation Loss : 1.2534e-02
Training CC : 0.9718 | Validation CC : 0.9529
** Classification Losses **
Training Loss : 1.6567e-01 | Validation Loss : 8.6571e-01
Training F1 Macro: 0.8981 | Validation F1 Macro : 0.6701
Training F1 Micro: 0.9118 | Validation F1 Micro : 0.6700
Epoch 21, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.2278e-03 | Validation Loss : 1.2528e-02
Training CC : 0.9734 | Validation CC : 0.9532
** Classification Losses **
Training Loss : 9.3937e-02 | Validation Loss : 8.4098e-01
Training F1 Macro: 0.9457 | Validation F1 Macro : 0.6727
Training F1 Micro: 0.9493 | Validation F1 Micro : 0.6760
Epoch 21, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.1630e-03 | Validation Loss : 1.2346e-02
Training CC : 0.9739 | Validation CC : 0.9541
** Classification Losses **
Training Loss : 1.0077e-01 | Validation Loss : 8.4972e-01
Training F1 Macro: 0.8803 | Validation F1 Macro : 0.6569
Training F1 Micro: 0.9387 | Validation F1 Micro : 0.6620
Epoch 21, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.8948e-03 | Validation Loss : 1.2502e-02
Training CC : 0.9726 | Validation CC : 0.9537
** Classification Losses **
Training Loss : 1.9061e-01 | Validation Loss : 8.4001e-01
Training F1 Macro: 0.9014 | Validation F1 Macro : 0.6726
Training F1 Micro: 0.9081 | Validation F1 Micro : 0.6760
Epoch 21, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.2557e-03 | Validation Loss : 1.2409e-02
Training CC : 0.9739 | Validation CC : 0.9534
** Classification Losses **
Training Loss : 1.6901e-01 | Validation Loss : 8.6209e-01
Training F1 Macro: 0.9469 | Validation F1 Macro : 0.6626
Training F1 Micro: 0.9406 | Validation F1 Micro : 0.6640
Epoch 22, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.2670e-03 | Validation Loss : 1.2509e-02
Training CC : 0.9736 | Validation CC : 0.9530
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.3909e-01 | Validation Loss : 8.3192e-01
Training F1 Macro: 0.8593 | Validation F1 Macro : 0.6702
Training F1 Micro: 0.8630 | Validation F1 Micro : 0.6720
Epoch 22, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.9881e-03 | Validation Loss : 1.2611e-02
Training CC : 0.9720 | Validation CC : 0.9526
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.9960e-02 | Validation Loss : 8.4697e-01
Training F1 Macro: 0.9484 | Validation F1 Macro : 0.6510
Training F1 Micro: 0.9462 | Validation F1 Micro : 0.6560
Epoch 22, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.2603e-03 | Validation Loss : 1.2619e-02
Training CC : 0.9734 | Validation CC : 0.9526
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0235e-01 | Validation Loss : 8.7750e-01
Training F1 Macro: 0.9672 | Validation F1 Macro : 0.6185
Training F1 Micro: 0.9737 | Validation F1 Micro : 0.6160
Epoch 22, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.4607e-03 | Validation Loss : 1.2630e-02
Training CC : 0.9730 | Validation CC : 0.9525
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3184e-01 | Validation Loss : 8.5133e-01
Training F1 Macro: 0.9854 | Validation F1 Macro : 0.6330
Training F1 Micro: 0.9831 | Validation F1 Micro : 0.6360
Epoch 22, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.2900e-03 | Validation Loss : 1.2654e-02
Training CC : 0.9733 | Validation CC : 0.9524
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.0637e-02 | Validation Loss : 8.5801e-01
Training F1 Macro: 0.9771 | Validation F1 Macro : 0.6579
Training F1 Micro: 0.9715 | Validation F1 Micro : 0.6640
Epoch 23, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.4450e-03 | Validation Loss : 1.2751e-02
Training CC : 0.9733 | Validation CC : 0.9533
** Classification Losses **
Training Loss : 1.0397e-01 | Validation Loss : 8.5559e-01
Training F1 Macro: 0.9489 | Validation F1 Macro : 0.6641
Training F1 Micro: 0.9584 | Validation F1 Micro : 0.6660
Epoch 23, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.9086e-03 | Validation Loss : 1.2338e-02
Training CC : 0.9726 | Validation CC : 0.9536
** Classification Losses **
Training Loss : 1.5325e-01 | Validation Loss : 8.3072e-01
Training F1 Macro: 0.9345 | Validation F1 Macro : 0.6725
Training F1 Micro: 0.9393 | Validation F1 Micro : 0.6720
Epoch 23, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.9315e-03 | Validation Loss : 1.2355e-02
Training CC : 0.9745 | Validation CC : 0.9542
** Classification Losses **
Training Loss : 8.7520e-02 | Validation Loss : 8.5744e-01
Training F1 Macro: 0.9515 | Validation F1 Macro : 0.6702
Training F1 Micro: 0.9689 | Validation F1 Micro : 0.6700
Epoch 23, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.6386e-03 | Validation Loss : 1.1937e-02
Training CC : 0.9755 | Validation CC : 0.9556
** Classification Losses **
Training Loss : 2.3212e-01 | Validation Loss : 8.3361e-01
Training F1 Macro: 0.8921 | Validation F1 Macro : 0.6773
Training F1 Micro: 0.8836 | Validation F1 Micro : 0.6800
Epoch 23, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.8403e-03 | Validation Loss : 1.1840e-02
Training CC : 0.9755 | Validation CC : 0.9557
** Classification Losses **
Training Loss : 1.4800e-01 | Validation Loss : 8.3969e-01
Training F1 Macro: 0.9263 | Validation F1 Macro : 0.6680
Training F1 Micro: 0.9285 | Validation F1 Micro : 0.6720
Epoch 24, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.3760e-03 | Validation Loss : 1.1916e-02
Training CC : 0.9767 | Validation CC : 0.9554
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9337e-01 | Validation Loss : 8.5417e-01
Training F1 Macro: 0.8829 | Validation F1 Macro : 0.6304
Training F1 Micro: 0.8798 | Validation F1 Micro : 0.6320
Epoch 24, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.8312e-03 | Validation Loss : 1.2025e-02
Training CC : 0.9756 | Validation CC : 0.9549
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3159e-01 | Validation Loss : 8.4960e-01
Training F1 Macro: 0.9314 | Validation F1 Macro : 0.6426
Training F1 Micro: 0.9403 | Validation F1 Micro : 0.6440
Epoch 24, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.6599e-03 | Validation Loss : 1.2106e-02
Training CC : 0.9757 | Validation CC : 0.9546
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2548e-01 | Validation Loss : 8.6852e-01
Training F1 Macro: 0.9326 | Validation F1 Macro : 0.6410
Training F1 Micro: 0.9348 | Validation F1 Micro : 0.6400
Epoch 24, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.0657e-03 | Validation Loss : 1.2199e-02
Training CC : 0.9748 | Validation CC : 0.9543
** Classification Losses ** <---- Now Optimizing
Training Loss : 8.6306e-02 | Validation Loss : 8.7201e-01
Training F1 Macro: 0.9516 | Validation F1 Macro : 0.6488
Training F1 Micro: 0.9573 | Validation F1 Micro : 0.6500
Epoch 24, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.9443e-03 | Validation Loss : 1.2281e-02
Training CC : 0.9748 | Validation CC : 0.9539
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.5603e-01 | Validation Loss : 8.5981e-01
Training F1 Macro: 0.8968 | Validation F1 Macro : 0.6425
Training F1 Micro: 0.9048 | Validation F1 Micro : 0.6460
Epoch 25, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.7960e-03 | Validation Loss : 1.1807e-02
Training CC : 0.9755 | Validation CC : 0.9558
** Classification Losses **
Training Loss : 1.8760e-01 | Validation Loss : 8.3187e-01
Training F1 Macro: 0.9100 | Validation F1 Macro : 0.6752
Training F1 Micro: 0.9080 | Validation F1 Micro : 0.6780
Epoch 25, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.2898e-03 | Validation Loss : 1.1782e-02
Training CC : 0.9770 | Validation CC : 0.9563
** Classification Losses **
Training Loss : 1.7058e-01 | Validation Loss : 8.4101e-01
Training F1 Macro: 0.9096 | Validation F1 Macro : 0.6540
Training F1 Micro: 0.9099 | Validation F1 Micro : 0.6560
Epoch 25, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.2050e-03 | Validation Loss : 1.1777e-02
Training CC : 0.9775 | Validation CC : 0.9561
** Classification Losses **
Training Loss : 1.3243e-01 | Validation Loss : 8.4678e-01
Training F1 Macro: 0.9281 | Validation F1 Macro : 0.6581
Training F1 Micro: 0.9244 | Validation F1 Micro : 0.6620
Epoch 25, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.2643e-03 | Validation Loss : 1.1752e-02
Training CC : 0.9775 | Validation CC : 0.9563
** Classification Losses **
Training Loss : 1.6075e-01 | Validation Loss : 8.5280e-01
Training F1 Macro: 0.8845 | Validation F1 Macro : 0.6536
Training F1 Micro: 0.9310 | Validation F1 Micro : 0.6580
Epoch 25, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.6090e-03 | Validation Loss : 1.1833e-02
Training CC : 0.9741 | Validation CC : 0.9565
** Classification Losses **
Training Loss : 1.7908e-01 | Validation Loss : 8.3545e-01
Training F1 Macro: 0.9072 | Validation F1 Macro : 0.6727
Training F1 Micro: 0.9045 | Validation F1 Micro : 0.6740
Epoch 26, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.5355e-03 | Validation Loss : 1.1902e-02
Training CC : 0.9769 | Validation CC : 0.9562
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3773e-01 | Validation Loss : 8.6412e-01
Training F1 Macro: 0.9130 | Validation F1 Macro : 0.6296
Training F1 Micro: 0.9073 | Validation F1 Micro : 0.6260
Epoch 26, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.6034e-03 | Validation Loss : 1.2097e-02
Training CC : 0.9767 | Validation CC : 0.9555
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8693e-01 | Validation Loss : 8.2975e-01
Training F1 Macro: 0.8809 | Validation F1 Macro : 0.6717
Training F1 Micro: 0.9176 | Validation F1 Micro : 0.6760
Epoch 26, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.9234e-03 | Validation Loss : 1.2210e-02
Training CC : 0.9757 | Validation CC : 0.9551
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3756e-01 | Validation Loss : 8.4827e-01
Training F1 Macro: 0.9289 | Validation F1 Macro : 0.6630
Training F1 Micro: 0.9199 | Validation F1 Micro : 0.6680
Epoch 26, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.7050e-03 | Validation Loss : 1.2251e-02
Training CC : 0.9759 | Validation CC : 0.9549
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0720e-01 | Validation Loss : 8.4041e-01
Training F1 Macro: 0.9053 | Validation F1 Macro : 0.6773
Training F1 Micro: 0.9407 | Validation F1 Micro : 0.6780
Epoch 26, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.9454e-03 | Validation Loss : 1.2291e-02
Training CC : 0.9754 | Validation CC : 0.9548
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.0907e-01 | Validation Loss : 8.7245e-01
Training F1 Macro: 0.8698 | Validation F1 Macro : 0.6428
Training F1 Micro: 0.8692 | Validation F1 Micro : 0.6420
Epoch 27, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.5478e-03 | Validation Loss : 1.1961e-02
Training CC : 0.9762 | Validation CC : 0.9550
** Classification Losses **
Training Loss : 1.7095e-01 | Validation Loss : 8.2854e-01
Training F1 Macro: 0.8678 | Validation F1 Macro : 0.6808
Training F1 Micro: 0.9182 | Validation F1 Micro : 0.6840
Epoch 27, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.6915e-03 | Validation Loss : 1.2062e-02
Training CC : 0.9763 | Validation CC : 0.9557
** Classification Losses **
Training Loss : 1.8538e-01 | Validation Loss : 8.0424e-01
Training F1 Macro: 0.9061 | Validation F1 Macro : 0.6884
Training F1 Micro: 0.9211 | Validation F1 Micro : 0.6920
Epoch 27, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.1259e-03 | Validation Loss : 1.1760e-02
Training CC : 0.9776 | Validation CC : 0.9561
** Classification Losses **
Training Loss : 1.6386e-01 | Validation Loss : 8.1270e-01
Training F1 Macro: 0.9253 | Validation F1 Macro : 0.6897
Training F1 Micro: 0.9241 | Validation F1 Micro : 0.6900
Epoch 27, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.9430e-03 | Validation Loss : 1.1558e-02
Training CC : 0.9784 | Validation CC : 0.9572
** Classification Losses **
Training Loss : 1.1993e-01 | Validation Loss : 8.1793e-01
Training F1 Macro: 0.9353 | Validation F1 Macro : 0.6836
Training F1 Micro: 0.9362 | Validation F1 Micro : 0.6860
Epoch 27, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.6655e-03 | Validation Loss : 1.1323e-02
Training CC : 0.9791 | Validation CC : 0.9578
** Classification Losses **
Training Loss : 1.8439e-01 | Validation Loss : 8.0809e-01
Training F1 Macro: 0.8148 | Validation F1 Macro : 0.6776
Training F1 Micro: 0.8995 | Validation F1 Micro : 0.6780
Epoch 28, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.7710e-03 | Validation Loss : 1.1375e-02
Training CC : 0.9793 | Validation CC : 0.9576
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.6920e-01 | Validation Loss : 8.1023e-01
Training F1 Macro: 0.9095 | Validation F1 Macro : 0.6824
Training F1 Micro: 0.9059 | Validation F1 Micro : 0.6820
Epoch 28, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.6112e-03 | Validation Loss : 1.1456e-02
Training CC : 0.9795 | Validation CC : 0.9573
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.8149e-01 | Validation Loss : 8.0933e-01
Training F1 Macro: 0.9325 | Validation F1 Macro : 0.6672
Training F1 Micro: 0.9297 | Validation F1 Micro : 0.6660
Epoch 28, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.7544e-03 | Validation Loss : 1.1528e-02
Training CC : 0.9791 | Validation CC : 0.9570
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.6036e-01 | Validation Loss : 8.2073e-01
Training F1 Macro: 0.9014 | Validation F1 Macro : 0.6646
Training F1 Micro: 0.9183 | Validation F1 Micro : 0.6640
Epoch 28, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.7814e-03 | Validation Loss : 1.1598e-02
Training CC : 0.9789 | Validation CC : 0.9568
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.7690e-01 | Validation Loss : 8.3923e-01
Training F1 Macro: 0.9221 | Validation F1 Macro : 0.6722
Training F1 Micro: 0.9181 | Validation F1 Micro : 0.6720
Epoch 28, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.0055e-03 | Validation Loss : 1.1664e-02
Training CC : 0.9784 | Validation CC : 0.9565
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.1044e-01 | Validation Loss : 8.5828e-01
Training F1 Macro: 0.9339 | Validation F1 Macro : 0.6266
Training F1 Micro: 0.9365 | Validation F1 Micro : 0.6260
Epoch 29, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.0260e-03 | Validation Loss : 1.1496e-02
Training CC : 0.9786 | Validation CC : 0.9571
** Classification Losses **
Training Loss : 2.1454e-01 | Validation Loss : 8.4726e-01
Training F1 Macro: 0.8673 | Validation F1 Macro : 0.6535
Training F1 Micro: 0.8693 | Validation F1 Micro : 0.6520
Epoch 29, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.7346e-03 | Validation Loss : 1.1429e-02
Training CC : 0.9793 | Validation CC : 0.9577
** Classification Losses **
Training Loss : 1.9326e-01 | Validation Loss : 8.1522e-01
Training F1 Macro: 0.8950 | Validation F1 Macro : 0.6816
Training F1 Micro: 0.8892 | Validation F1 Micro : 0.6820
Epoch 29, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.6571e-03 | Validation Loss : 1.1586e-02
Training CC : 0.9776 | Validation CC : 0.9571
** Classification Losses **
Training Loss : 1.5835e-01 | Validation Loss : 8.4449e-01
Training F1 Macro: 0.9246 | Validation F1 Macro : 0.6684
Training F1 Micro: 0.9187 | Validation F1 Micro : 0.6680
Epoch 29, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.0160e-03 | Validation Loss : 1.1712e-02
Training CC : 0.9786 | Validation CC : 0.9570
** Classification Losses **
Training Loss : 1.3173e-01 | Validation Loss : 8.4895e-01
Training F1 Macro: 0.9461 | Validation F1 Macro : 0.6545
Training F1 Micro: 0.9376 | Validation F1 Micro : 0.6540
Epoch 29, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.0046e-03 | Validation Loss : 1.1446e-02
Training CC : 0.9786 | Validation CC : 0.9573
** Classification Losses **
Training Loss : 2.1249e-01 | Validation Loss : 8.2564e-01
Training F1 Macro: 0.8971 | Validation F1 Macro : 0.6800
Training F1 Micro: 0.8965 | Validation F1 Micro : 0.6800
Epoch 30, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.5131e-03 | Validation Loss : 1.1506e-02
Training CC : 0.9798 | Validation CC : 0.9571
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.1722e-01 | Validation Loss : 8.5023e-01
Training F1 Macro: 0.9470 | Validation F1 Macro : 0.6554
Training F1 Micro: 0.9469 | Validation F1 Micro : 0.6560
Epoch 30, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.1789e-03 | Validation Loss : 1.1620e-02
Training CC : 0.9784 | Validation CC : 0.9566
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.3728e-02 | Validation Loss : 8.6545e-01
Training F1 Macro: 0.9165 | Validation F1 Macro : 0.6456
Training F1 Micro: 0.9627 | Validation F1 Micro : 0.6460
Epoch 30, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.9624e-03 | Validation Loss : 1.1678e-02
Training CC : 0.9787 | Validation CC : 0.9564
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3745e-01 | Validation Loss : 8.4971e-01
Training F1 Macro: 0.9098 | Validation F1 Macro : 0.6606
Training F1 Micro: 0.9183 | Validation F1 Micro : 0.6580
Epoch 30, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.7588e-03 | Validation Loss : 1.1750e-02
Training CC : 0.9790 | Validation CC : 0.9561
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2439e-01 | Validation Loss : 8.3425e-01
Training F1 Macro: 0.9499 | Validation F1 Macro : 0.6723
Training F1 Micro: 0.9469 | Validation F1 Micro : 0.6720
Epoch 30, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.0904e-03 | Validation Loss : 1.1768e-02
Training CC : 0.9783 | Validation CC : 0.9560
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.5059e-01 | Validation Loss : 8.3959e-01
Training F1 Macro: 0.9306 | Validation F1 Macro : 0.6670
Training F1 Micro: 0.9508 | Validation F1 Micro : 0.6680
Epoch 31, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.8120e-03 | Validation Loss : 1.1515e-02
Training CC : 0.9789 | Validation CC : 0.9570
** Classification Losses **
Training Loss : 1.4424e-01 | Validation Loss : 8.4361e-01
Training F1 Macro: 0.9326 | Validation F1 Macro : 0.6708
Training F1 Micro: 0.9301 | Validation F1 Micro : 0.6700
Epoch 31, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.6793e-03 | Validation Loss : 1.1356e-02
Training CC : 0.9798 | Validation CC : 0.9579
** Classification Losses **
Training Loss : 1.1209e-01 | Validation Loss : 8.2991e-01
Training F1 Macro: 0.9419 | Validation F1 Macro : 0.6449
Training F1 Micro: 0.9523 | Validation F1 Micro : 0.6460
Epoch 31, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.6743e-03 | Validation Loss : 1.1059e-02
Training CC : 0.9799 | Validation CC : 0.9589
** Classification Losses **
Training Loss : 1.5746e-01 | Validation Loss : 8.3515e-01
Training F1 Macro: 0.9225 | Validation F1 Macro : 0.6828
Training F1 Micro: 0.9186 | Validation F1 Micro : 0.6840
Epoch 31, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.2405e-03 | Validation Loss : 1.0944e-02
Training CC : 0.9792 | Validation CC : 0.9588
** Classification Losses **
Training Loss : 1.5113e-01 | Validation Loss : 8.4694e-01
Training F1 Macro: 0.9277 | Validation F1 Macro : 0.6555
Training F1 Micro: 0.9297 | Validation F1 Micro : 0.6540
Epoch 31, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.0254e-03 | Validation Loss : 1.1388e-02
Training CC : 0.9794 | Validation CC : 0.9580
** Classification Losses **
Training Loss : 1.9788e-01 | Validation Loss : 8.5736e-01
Training F1 Macro: 0.9060 | Validation F1 Macro : 0.6525
Training F1 Micro: 0.9128 | Validation F1 Micro : 0.6520
Epoch 32, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.3601e-03 | Validation Loss : 1.1427e-02
Training CC : 0.9805 | Validation CC : 0.9579
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.0559e-01 | Validation Loss : 8.6597e-01
Training F1 Macro: 0.9198 | Validation F1 Macro : 0.6267
Training F1 Micro: 0.9187 | Validation F1 Micro : 0.6280
Epoch 32, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.3356e-03 | Validation Loss : 1.1508e-02
Training CC : 0.9805 | Validation CC : 0.9575
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9381e-01 | Validation Loss : 8.4151e-01
Training F1 Macro: 0.8839 | Validation F1 Macro : 0.6377
Training F1 Micro: 0.8894 | Validation F1 Micro : 0.6400
Epoch 32, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.9008e-03 | Validation Loss : 1.1651e-02
Training CC : 0.9792 | Validation CC : 0.9570
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.9346e-01 | Validation Loss : 8.5622e-01
Training F1 Macro: 0.8791 | Validation F1 Macro : 0.6252
Training F1 Micro: 0.8857 | Validation F1 Micro : 0.6220
Epoch 32, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.8426e-03 | Validation Loss : 1.1760e-02
Training CC : 0.9791 | Validation CC : 0.9566
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.2083e-01 | Validation Loss : 8.5743e-01
Training F1 Macro: 0.7958 | Validation F1 Macro : 0.6458
Training F1 Micro: 0.8593 | Validation F1 Micro : 0.6420

[Per-mini-epoch reports in the same format, for epochs 32 through 49, elided for brevity. The "<---- Now Optimizing" marker alternates each epoch: odd-numbered epochs take gradient steps on the autoencoding objective, even-numbered epochs on the classification objective, while both sets of losses are reported every mini epoch. Over this span the autoencoding validation loss improves from roughly 1.18e-02 to 1.01e-02 (validation CC 0.957 -> 0.963), whereas the classification validation loss stays near 0.80-0.93 and validation F1 plateaus around 0.60-0.69 despite training F1 scores typically above 0.9.]
Training F1 Micro: 0.8889 | Validation F1 Micro : 0.6640
Epoch 49, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.1349e-03 | Validation Loss : 1.0093e-02
Training CC : 0.9857 | Validation CC : 0.9631
** Classification Losses **
Training Loss : 1.1157e-01 | Validation Loss : 8.3514e-01
Training F1 Macro: 0.9467 | Validation F1 Macro : 0.6570
Training F1 Micro: 0.9421 | Validation F1 Micro : 0.6580
Epoch 50, of 50 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.7945e-03 | Validation Loss : 1.0133e-02
Training CC : 0.9863 | Validation CC : 0.9630
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2635e-01 | Validation Loss : 8.5252e-01
Training F1 Macro: 0.9090 | Validation F1 Macro : 0.6538
Training F1 Micro: 0.9208 | Validation F1 Micro : 0.6540
Epoch 50, of 50 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.8101e-03 | Validation Loss : 1.0200e-02
Training CC : 0.9862 | Validation CC : 0.9627
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.4358e-01 | Validation Loss : 8.3129e-01
Training F1 Macro: 0.8696 | Validation F1 Macro : 0.6547
Training F1 Micro: 0.8716 | Validation F1 Micro : 0.6580
Epoch 50, of 50 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9694e-03 | Validation Loss : 1.0274e-02
Training CC : 0.9858 | Validation CC : 0.9624
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3971e-01 | Validation Loss : 8.3678e-01
Training F1 Macro: 0.9404 | Validation F1 Macro : 0.6499
Training F1 Micro: 0.9438 | Validation F1 Micro : 0.6520
Epoch 50, of 50 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.0473e-03 | Validation Loss : 1.0330e-02
Training CC : 0.9855 | Validation CC : 0.9622
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3552e-01 | Validation Loss : 8.4466e-01
Training F1 Macro: 0.9542 | Validation F1 Macro : 0.6262
Training F1 Micro: 0.9540 | Validation F1 Micro : 0.6260
Epoch 50, of 50 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.3496e-03 | Validation Loss : 1.0414e-02
Training CC : 0.9848 | Validation CC : 0.9619
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.8723e-02 | Validation Loss : 8.0752e-01
Training F1 Macro: 0.9779 | Validation F1 Macro : 0.6700
Training F1 Micro: 0.9814 | Validation F1 Micro : 0.6680
Visualize the loss function progression
[7]:
plots.plot_training_results_segmentation(rv[2]).show()
plots.plot_training_results_regression(rv[1]).show()
We pass over the test data again and collect the outputs so we can inspect what is happening.
[8]:
results = []
pres = []
latent = []
true_lbl = []
inp_img = []
for batch in test_loader:
    true_lbl.append(batch[1])
    with torch.no_grad():
        inp_img.append(batch[0].cpu())
        res, ps = autoencoder(batch[0].to("cuda:0"))
        lt = autoencoder.encode(batch[0].to("cuda:0"))
        results.append(res.cpu())
        latent.append(lt.cpu())
        pres.append(ps.cpu())
results = torch.cat(results, dim=0)
latent = torch.cat(latent, dim=0)
pres = nn.Softmax(1)(torch.cat(pres, dim=0))
true_lbl = torch.cat(true_lbl, dim=0)
inp_img = torch.cat(inp_img, dim=0)
Let's have a look and see what we get, using a quick plotting utility.
[9]:
count = 0
for img, cc, tlbl, ori, ltl in zip(results, pres, true_lbl, inp_img, latent):
    paic.plot_autoencoder_and_label_results(input_img=ori.numpy()[0, ...],
                                            output_img=img.numpy()[0, ...],
                                            p_classification=cc.numpy(),
                                            class_names=["Rectangle", "Disc", "Triangle", "Annulus"])
    plt.show()
    count += 1
    if count > 15:
        break
As you can see, we are doing reasonably well, but let's inspect the latent space to see what's going on. One could run a PCA / SVD on the latent vectors, but this is not very informative because of the dimensionality of the system. Instead, we run a UMAP and see how things look in 2D.
It is interesting to re-run the notebook without using any labels during optimization. To make this happen, just set the number of epochs to 1 and the number of mini epochs to 100 (or so); that way, training never reaches the classification minimization. The resulting latent space will be less neat.
[10]:
latent = einops.rearrange(latent, "N C Y X -> N (C Y X)")
umapper = umap.UMAP(min_dist=0, n_neighbors=35)
X = umapper.fit_transform(latent.numpy())
Let's get the labels and see what we have.
We make two plots: first, the UMAP-embedded latent space colored by the given labels (including the unknown ones), and second, the inferred / guessed labels.
[11]:
infered_labels = torch.argmax(pres, dim=1).numpy() + 1
for lbl in [0, 1, 2, 3, 4]:
    sel = true_lbl.numpy()[:, 0] + 1 == lbl
    ms = 1
    if lbl == 0:
        ms = 0.5
    plt.plot(X[sel, 0], X[sel, 1], '.', markersize=ms)
plt.legend(["UNKNOWN", "Rectangles", "Discs", "Triangles", "Annuli"])
plt.title("Given labels in latent space")
plt.show()

for lbl in [0, 1, 2, 3, 4]:
    sel = infered_labels == lbl
    ms = 1
    plt.plot(X[sel, 0], X[sel, 1], '.', markersize=ms)
plt.legend(["UNKNOWN", "Rectangles", "Discs", "Triangles", "Annuli"])
plt.title("Predicted labels in latent space")
plt.show()
As you can see, we are not perfect; let's have a look at some specific cases.
[12]:
count = 0
for img, cc, tlbl, ori, pl in zip(results, pres, true_lbl, inp_img, infered_labels):
    if int(tlbl[0]) != pl - 1:
        paic.plot_autoencoder_and_label_results(input_img=ori.numpy()[0, ...],
                                                output_img=img.numpy()[0, ...],
                                                p_classification=cc.numpy(),
                                                class_names=["Rectangle", "Disc", "Triangle", "Annulus"])
        plt.show()
        count += 1
        if count > 5:
            break
Last but not least, we can view our latent space.
[13]:
fig = latent_space_viewer.build_latent_space_image_viewer(inp_img.numpy()[:, 0, ...],
                                                          X,
                                                          n_bins=50,
                                                          min_count=1,
                                                          max_count=1,
                                                          mode="nearest")

(Newer) Latent Space Exploration with Randomized Sparse Mixed Scale Autoencoders, regularized by the availability of image labels¶
Authors: Eric Roberts and Petrus Zwart
E-mail: PHZwart@lbl.gov, EJRoberts@lbl.gov
This notebook highlights some basic functionality of the pyMSDtorch package.
In this notebook we set up autoencoders, with the goal of exploring the latent space they generate. In this case, however, we guide the formation of the latent space by attaching labels to specific images.
The autoencoders we use are based on randomly constructed convolutional neural networks in which we can control the number of parameters. This type of control can be beneficial when the data available for training is not very voluminous, as it allows for better handles on overfitting.
The constructed latent space can be used for unsupervised and supervised exploration methods. In our limited experience, the classifiers that come out of this training are reasonable, but can be improved upon using classic classification methods, as shown further on.
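As a minimal sketch of what "classic classification methods" on the latent space could look like (a hypothetical plain-Python 1-nearest-neighbour rule, not part of pyMSDtorch; in practice one would use the full latent or UMAP-embedded vectors and a library classifier):

```python
# Hypothetical helper: classify a query point in a 2D embedding by the
# label of its nearest labeled neighbour (Euclidean distance).
def nn_classify(labeled_points, labels, query):
    best_label, best_d2 = None, float("inf")
    for (x, y), lbl in zip(labeled_points, labels):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 < best_d2:
            best_label, best_d2 = lbl, d2
    return best_label

points = [(0.0, 0.0), (10.0, 10.0)]
labels = ["Disc", "Triangle"]
print(nn_classify(points, labels, (1.0, 0.5)))   # Disc
print(nn_classify(points, labels, (9.0, 11.0)))  # Triangle
```

The same idea applies unchanged when the points are the latent vectors of the labeled training images.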
[1]:
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from pyMSDtorch.core import helpers
from pyMSDtorch.core import train_scripts
from pyMSDtorch.core.networks import baggins
from pyMSDtorch.core.networks import SparseNet
from pyMSDtorch.test_data.twoD import random_shapes
from pyMSDtorch.core.utils import latent_space_viewer
from pyMSDtorch.viz_tools import plots
from pyMSDtorch.viz_tools import plot_autoencoder_image_classification as paic
import matplotlib.pyplot as plt
import matplotlib
from torch.utils.data import DataLoader, TensorDataset
import einops
import umap
Create some data first
[2]:
N_train = 500
N_labeled = 200
N_test = 500
noise_level = 0.50
Nxy = 32

train_data = random_shapes.build_random_shape_set_numpy(n_imgs=N_train,
                                                        noise_level=noise_level,
                                                        n_xy=Nxy)
test_data = random_shapes.build_random_shape_set_numpy(n_imgs=N_test,
                                                       noise_level=noise_level,
                                                       n_xy=Nxy)
[3]:
plots.plot_shapes_data_numpy(train_data)
[4]:
which_one = "Noisy"  # "GroundTruth"
batch_size = 100

loader_params = {'batch_size': batch_size,
                 'shuffle': True}
train_imgs = torch.Tensor(train_data[which_one]).unsqueeze(1)
train_labels = torch.Tensor(train_data["Label"]).unsqueeze(1) - 1
train_labels[N_labeled:] = -1  # remove some labels to highlight 'mixed' training
Ttrain_data = TensorDataset(train_imgs, train_labels)
train_loader = DataLoader(Ttrain_data, **loader_params)

loader_params = {'batch_size': batch_size,
                 'shuffle': False}
test_images = torch.Tensor(test_data[which_one]).unsqueeze(1)
test_labels = torch.Tensor(test_data["Label"]).unsqueeze(1) - 1
Ttest_data = TensorDataset(test_images, test_labels)
test_loader = DataLoader(Ttest_data, **loader_params)
Let's build an autoencoder first.
There are a number of parameters to play with that impact the size of the network:
- latent_shape: the spatial footprint of the image in latent space.
We don't recommend going below 4x4, because it interferes with the
dilation choices. This is a bit of an annoying limitation; fixing it
is on our list.
- out_channels: the number of channels of the latent image. Determines the
dimension of latent space: (channels, latent_shape[-2], latent_shape[-1])
- depth: the depth of the random sparse convolutional encoder / decoder
- hidden_channels: the number of channels put out per convolution.
- max_degree / min_degree: determine how many connections you have per node.
Other parameters do not impact the size of the network dramatically / at all:
- in_shape: determined by the input shape of the image.
- dilations: the maximum dilation should not exceed the smallest image dimension.
- alpha_range: determines the type of graphs (wide vs skinny). When alpha is large,
the chance of generating skinny graphs increases.
We don't know which parameter choice is best, so we randomize its choice.
- gamma_range: no effect unless max_degree and min_degree are far apart.
We don't know which parameter choice is best, so we randomize its choice.
- pIL, pLO, IO: keep as is.
- stride_base: make sure your latent image size can be generated from the in_shape
by repeated division by this number.
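The stride_base constraint can be checked with a few lines of plain Python (a hypothetical helper for illustration, not part of pyMSDtorch):

```python
# Check that latent_size is reachable from in_size by repeated
# integer division by stride_base.
def latent_shape_reachable(in_size, latent_size, stride_base=2):
    size = in_size
    while size > latent_size:
        if size % stride_base != 0:
            return False
        size //= stride_base
    return size == latent_size

# 32 -> 16 -> 8 with stride_base=2, so a (32, 32) image can be
# reduced to an (8, 8) latent image:
print(latent_shape_reachable(32, 8, 2))   # True
print(latent_shape_reachable(32, 12, 2))  # False
```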
For the classification, specify the number of output classes. Here we work with 4 shapes, so we set it to 4. The dropout rate governs the dropout layers in the classifier part of the network and doesn't affect the autoencoder part.
[5]:
autoencoders = []
N_models = 7
for ii in range(N_models):
    torch.cuda.empty_cache()
    autoencoder = SparseNet.SparseAEC(in_shape=(32, 32),
                                      latent_shape=(8, 8),
                                      out_classes=4,
                                      depth=40,
                                      dilations=[1, 2, 3],
                                      hidden_channels=3,
                                      out_channels=2,
                                      alpha_range=(0.0, 0.25),
                                      gamma_range=(0.0, 0.5),
                                      max_degree=10, min_degree=10,
                                      pIL=0.15,
                                      pLO=0.15,
                                      IO=False,
                                      stride_base=2,
                                      dropout_rate=0.15)
    autoencoders.append(autoencoder)
    pytorch_total_params = helpers.count_parameters(autoencoder)
    print("Number of parameters:", pytorch_total_params)
Number of parameters: 238793
Number of parameters: 270307
Number of parameters: 258797
Number of parameters: 223225
Number of parameters: 237490
Number of parameters: 245600
Number of parameters: 238823
We define two optimizers, one for autoencoding and one for classification. They are minimized consecutively instead of combined into a single weighted sum of targets, which avoids having to choose the right weight. The mini epochs are the number of passes over the whole data set spent optimizing a single target function. The autoencoding target is done first.
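Schematically, the alternation can be sketched as follows (a hypothetical pseudo-schedule for illustration, not the actual pyMSDtorch implementation): within each macro epoch all mini epochs optimize one target, and the target flips at each macro epoch, starting with autoencoding.

```python
# Hypothetical sketch of the alternating optimization schedule.
def alternating_schedule(macro_epochs, mini_epochs):
    """Yield (epoch, mini_epoch, target) tuples; the optimized target
    alternates per macro epoch, autoencoding first."""
    targets = ("autoencode", "classify")
    for epoch in range(1, macro_epochs + 1):
        target = targets[(epoch - 1) % 2]
        for mini in range(1, mini_epochs + 1):
            yield epoch, mini, target

for step in alternating_schedule(2, 2):
    print(step)
# (1, 1, 'autoencode')
# (1, 2, 'autoencode')
# (2, 1, 'classify')
# (2, 2, 'classify')
```

This matches the "Now Optimizing" markers in the training log below, which point at the autoencoding losses in odd macro epochs and the classification losses in even ones.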
[6]:
for ii in range(N_models):
    autoencoder = autoencoders[ii]
    torch.cuda.empty_cache()
    learning_rate = 1e-3
    num_epochs = 25
    criterion_AE = nn.MSELoss()
    optimizer_AE = optim.Adam(autoencoder.parameters(), lr=learning_rate)
    criterion_label = nn.CrossEntropyLoss(ignore_index=-1)
    optimizer_label = optim.Adam(autoencoder.parameters(), lr=learning_rate)

    rv = train_scripts.autoencode_and_classify_training(net=autoencoder.to('cuda:0'),
                                                        trainloader=train_loader,
                                                        validationloader=test_loader,
                                                        macro_epochs=num_epochs,
                                                        mini_epochs=5,
                                                        criteria_autoencode=criterion_AE,
                                                        minimizer_autoencode=optimizer_AE,
                                                        criteria_classify=criterion_label,
                                                        minimizer_classify=optimizer_label,
                                                        device="cuda:0",
                                                        show=1,
                                                        clip_value=100.0)
    plots.plot_training_results_segmentation(rv[2]).show()
    plots.plot_training_results_regression(rv[1]).show()
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.3291e-01 | Validation Loss : 3.3059e-01
Training CC : 0.1441 | Validation CC : 0.3795
** Classification Losses **
Training Loss : 1.4778e+00 | Validation Loss : 1.4306e+00
Training F1 Macro: 0.1927 | Validation F1 Macro : 0.2313
Training F1 Micro: 0.2456 | Validation F1 Micro : 0.2640
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8571e-01 | Validation Loss : 2.2450e-01
Training CC : 0.4758 | Validation CC : 0.5568
** Classification Losses **
Training Loss : 1.4656e+00 | Validation Loss : 1.4262e+00
Training F1 Macro: 0.2161 | Validation F1 Macro : 0.2103
Training F1 Micro: 0.2470 | Validation F1 Micro : 0.2460
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.9252e-01 | Validation Loss : 1.5140e-01
Training CC : 0.6149 | Validation CC : 0.6500
** Classification Losses **
Training Loss : 1.4204e+00 | Validation Loss : 1.4326e+00
Training F1 Macro: 0.2511 | Validation F1 Macro : 0.2258
Training F1 Micro: 0.2833 | Validation F1 Micro : 0.2640
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.3008e-01 | Validation Loss : 1.0477e-01
Training CC : 0.6858 | Validation CC : 0.7037
** Classification Losses **
Training Loss : 1.4160e+00 | Validation Loss : 1.4166e+00
Training F1 Macro: 0.2035 | Validation F1 Macro : 0.2256
Training F1 Micro: 0.2544 | Validation F1 Micro : 0.2640
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.2090e-02 | Validation Loss : 7.7216e-02
Training CC : 0.7324 | Validation CC : 0.7439
** Classification Losses **
Training Loss : 1.4385e+00 | Validation Loss : 1.4156e+00
Training F1 Macro: 0.2179 | Validation F1 Macro : 0.2267
Training F1 Micro: 0.2657 | Validation F1 Micro : 0.2720
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.6554e-02 | Validation Loss : 7.9300e-02
Training CC : 0.7534 | Validation CC : 0.7396
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2471e+00 | Validation Loss : 1.0606e+00
Training F1 Macro: 0.3863 | Validation F1 Macro : 0.5573
Training F1 Micro: 0.4202 | Validation F1 Micro : 0.5640
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.8657e-02 | Validation Loss : 8.1138e-02
Training CC : 0.7475 | Validation CC : 0.7334
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.3420e-01 | Validation Loss : 9.1713e-01
Training F1 Macro: 0.6864 | Validation F1 Macro : 0.6663
Training F1 Micro: 0.6932 | Validation F1 Micro : 0.6600
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.0023e-02 | Validation Loss : 8.2387e-02
Training CC : 0.7422 | Validation CC : 0.7285
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.7384e-01 | Validation Loss : 8.1647e-01
Training F1 Macro: 0.7247 | Validation F1 Macro : 0.6938
Training F1 Micro: 0.7228 | Validation F1 Micro : 0.6900
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.1289e-02 | Validation Loss : 8.3476e-02
Training CC : 0.7374 | Validation CC : 0.7246
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.6634e-01 | Validation Loss : 7.0724e-01
Training F1 Macro: 0.7604 | Validation F1 Macro : 0.7382
Training F1 Micro: 0.7749 | Validation F1 Micro : 0.7360
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.2227e-02 | Validation Loss : 8.3983e-02
Training CC : 0.7341 | Validation CC : 0.7227
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.9215e-01 | Validation Loss : 7.2037e-01
Training F1 Macro: 0.7649 | Validation F1 Macro : 0.6774
Training F1 Micro: 0.7674 | Validation F1 Micro : 0.6740
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.3738e-02 | Validation Loss : 6.5154e-02
Training CC : 0.7497 | Validation CC : 0.7624
** Classification Losses **
Training Loss : 6.1459e-01 | Validation Loss : 6.8213e-01
Training F1 Macro: 0.7392 | Validation F1 Macro : 0.7191
Training F1 Micro: 0.7400 | Validation F1 Micro : 0.7220
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.0560e-02 | Validation Loss : 5.7849e-02
Training CC : 0.7813 | Validation CC : 0.7854
** Classification Losses **
Training Loss : 5.1520e-01 | Validation Loss : 7.0926e-01
Training F1 Macro: 0.8047 | Validation F1 Macro : 0.7097
Training F1 Micro: 0.8048 | Validation F1 Micro : 0.7080
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.4756e-02 | Validation Loss : 5.4303e-02
Training CC : 0.8018 | Validation CC : 0.8028
** Classification Losses **
Training Loss : 5.3589e-01 | Validation Loss : 7.0321e-01
Training F1 Macro: 0.8129 | Validation F1 Macro : 0.7097
Training F1 Micro: 0.8193 | Validation F1 Micro : 0.7060
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.1570e-02 | Validation Loss : 5.1390e-02
Training CC : 0.8160 | Validation CC : 0.8137
** Classification Losses **
Training Loss : 5.7781e-01 | Validation Loss : 6.8949e-01
Training F1 Macro: 0.7933 | Validation F1 Macro : 0.7378
Training F1 Micro: 0.7917 | Validation F1 Micro : 0.7380
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.8706e-02 | Validation Loss : 4.8585e-02
Training CC : 0.8261 | Validation CC : 0.8239
** Classification Losses **
Training Loss : 5.8905e-01 | Validation Loss : 6.9312e-01
Training F1 Macro: 0.7981 | Validation F1 Macro : 0.7640
Training F1 Micro: 0.7995 | Validation F1 Micro : 0.7620
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.7162e-02 | Validation Loss : 4.8852e-02
Training CC : 0.8319 | Validation CC : 0.8228
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.5470e-01 | Validation Loss : 6.4983e-01
Training F1 Macro: 0.7846 | Validation F1 Macro : 0.7449
Training F1 Micro: 0.7854 | Validation F1 Micro : 0.7460
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.7561e-02 | Validation Loss : 4.9256e-02
Training CC : 0.8302 | Validation CC : 0.8210
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.8117e-01 | Validation Loss : 5.9962e-01
Training F1 Macro: 0.7524 | Validation F1 Macro : 0.7495
Training F1 Micro: 0.7574 | Validation F1 Micro : 0.7500
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.7926e-02 | Validation Loss : 5.0149e-02
Training CC : 0.8281 | Validation CC : 0.8172
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0104e-01 | Validation Loss : 5.9876e-01
Training F1 Macro: 0.8472 | Validation F1 Macro : 0.7410
Training F1 Micro: 0.8424 | Validation F1 Micro : 0.7380
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.8940e-02 | Validation Loss : 5.1173e-02
Training CC : 0.8240 | Validation CC : 0.8130
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.0705e-01 | Validation Loss : 5.1841e-01
Training F1 Macro: 0.7356 | Validation F1 Macro : 0.8098
Training F1 Micro: 0.7326 | Validation F1 Micro : 0.8040
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.0025e-02 | Validation Loss : 5.1955e-02
Training CC : 0.8198 | Validation CC : 0.8097
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2570e-01 | Validation Loss : 5.2499e-01
Training F1 Macro: 0.8125 | Validation F1 Macro : 0.7824
Training F1 Micro: 0.8119 | Validation F1 Micro : 0.7780
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.8555e-02 | Validation Loss : 4.7966e-02
Training CC : 0.8260 | Validation CC : 0.8256
** Classification Losses **
Training Loss : 4.5043e-01 | Validation Loss : 5.4480e-01
Training F1 Macro: 0.8017 | Validation F1 Macro : 0.7936
Training F1 Micro: 0.7988 | Validation F1 Micro : 0.7920
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.5060e-02 | Validation Loss : 4.5284e-02
Training CC : 0.8387 | Validation CC : 0.8351
** Classification Losses **
Training Loss : 4.0834e-01 | Validation Loss : 4.9639e-01
Training F1 Macro: 0.8013 | Validation F1 Macro : 0.8173
Training F1 Micro: 0.7984 | Validation F1 Micro : 0.8120
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.3447e-02 | Validation Loss : 4.3533e-02
Training CC : 0.8448 | Validation CC : 0.8422
** Classification Losses **
Training Loss : 3.6569e-01 | Validation Loss : 5.3628e-01
Training F1 Macro: 0.8340 | Validation F1 Macro : 0.7848
Training F1 Micro: 0.8342 | Validation F1 Micro : 0.7780
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.1518e-02 | Validation Loss : 4.2261e-02
Training CC : 0.8522 | Validation CC : 0.8473
** Classification Losses **
Training Loss : 4.0060e-01 | Validation Loss : 5.3324e-01
Training F1 Macro: 0.8183 | Validation F1 Macro : 0.7933
Training F1 Micro: 0.8201 | Validation F1 Micro : 0.7900
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.0117e-02 | Validation Loss : 4.1035e-02
Training CC : 0.8573 | Validation CC : 0.8522
** Classification Losses **
Training Loss : 4.0776e-01 | Validation Loss : 5.3571e-01
Training F1 Macro: 0.7961 | Validation F1 Macro : 0.7888
Training F1 Micro: 0.8014 | Validation F1 Micro : 0.7880
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9818e-02 | Validation Loss : 4.1122e-02
Training CC : 0.8595 | Validation CC : 0.8517
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7492e-01 | Validation Loss : 5.0097e-01
Training F1 Macro: 0.7470 | Validation F1 Macro : 0.8052
Training F1 Micro: 0.7557 | Validation F1 Micro : 0.8040
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9503e-02 | Validation Loss : 4.1425e-02
Training CC : 0.8596 | Validation CC : 0.8504
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8426e-01 | Validation Loss : 5.1564e-01
Training F1 Macro: 0.7335 | Validation F1 Macro : 0.7851
Training F1 Micro: 0.7505 | Validation F1 Micro : 0.7860
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.0428e-02 | Validation Loss : 4.1916e-02
Training CC : 0.8571 | Validation CC : 0.8484
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6817e-01 | Validation Loss : 5.4150e-01
Training F1 Macro: 0.8120 | Validation F1 Macro : 0.7543
Training F1 Micro: 0.8114 | Validation F1 Micro : 0.7500
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.0598e-02 | Validation Loss : 4.2434e-02
Training CC : 0.8557 | Validation CC : 0.8463
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9005e-01 | Validation Loss : 4.5488e-01
Training F1 Macro: 0.7918 | Validation F1 Macro : 0.8299
Training F1 Micro: 0.7916 | Validation F1 Micro : 0.8300
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.1184e-02 | Validation Loss : 4.2671e-02
Training CC : 0.8536 | Validation CC : 0.8454
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6833e-01 | Validation Loss : 4.8190e-01
Training F1 Macro: 0.8000 | Validation F1 Macro : 0.7867
Training F1 Micro: 0.7983 | Validation F1 Micro : 0.7820
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.0104e-02 | Validation Loss : 4.0947e-02
Training CC : 0.8575 | Validation CC : 0.8527
** Classification Losses **
Training Loss : 4.1435e-01 | Validation Loss : 4.6529e-01
Training F1 Macro: 0.7817 | Validation F1 Macro : 0.8125
Training F1 Micro: 0.7909 | Validation F1 Micro : 0.8100
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.8693e-02 | Validation Loss : 3.9705e-02
Training CC : 0.8630 | Validation CC : 0.8572
** Classification Losses **
Training Loss : 3.4552e-01 | Validation Loss : 4.5888e-01
Training F1 Macro: 0.8386 | Validation F1 Macro : 0.8158
Training F1 Micro: 0.8414 | Validation F1 Micro : 0.8160
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.7703e-02 | Validation Loss : 3.8703e-02
Training CC : 0.8667 | Validation CC : 0.8611
** Classification Losses **
Training Loss : 4.9632e-01 | Validation Loss : 4.7334e-01
Training F1 Macro: 0.7385 | Validation F1 Macro : 0.7859
Training F1 Micro: 0.7313 | Validation F1 Micro : 0.7860
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.7040e-02 | Validation Loss : 3.7884e-02
Training CC : 0.8700 | Validation CC : 0.8643
** Classification Losses **
Training Loss : 4.0643e-01 | Validation Loss : 4.8397e-01
Training F1 Macro: 0.7774 | Validation F1 Macro : 0.7829
Training F1 Micro: 0.7803 | Validation F1 Micro : 0.7760
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6359e-02 | Validation Loss : 3.7141e-02
Training CC : 0.8727 | Validation CC : 0.8671
** Classification Losses **
Training Loss : 4.3432e-01 | Validation Loss : 4.8012e-01
Training F1 Macro: 0.7945 | Validation F1 Macro : 0.7837
Training F1 Micro: 0.7918 | Validation F1 Micro : 0.7840
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5705e-02 | Validation Loss : 3.7189e-02
Training CC : 0.8748 | Validation CC : 0.8669
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1798e-01 | Validation Loss : 5.2188e-01
Training F1 Macro: 0.8282 | Validation F1 Macro : 0.7602
Training F1 Micro: 0.8314 | Validation F1 Micro : 0.7520
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5517e-02 | Validation Loss : 3.7349e-02
Training CC : 0.8749 | Validation CC : 0.8662
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7565e-01 | Validation Loss : 5.2392e-01
Training F1 Macro: 0.8198 | Validation F1 Macro : 0.7744
Training F1 Micro: 0.8205 | Validation F1 Micro : 0.7740
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5712e-02 | Validation Loss : 3.7610e-02
Training CC : 0.8741 | Validation CC : 0.8652
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7113e-01 | Validation Loss : 4.9920e-01
Training F1 Macro: 0.8113 | Validation F1 Macro : 0.7757
Training F1 Micro: 0.8111 | Validation F1 Micro : 0.7740
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6207e-02 | Validation Loss : 3.8039e-02
Training CC : 0.8726 | Validation CC : 0.8635
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5126e-01 | Validation Loss : 4.6536e-01
Training F1 Macro: 0.7394 | Validation F1 Macro : 0.7788
Training F1 Micro: 0.7416 | Validation F1 Micro : 0.7780
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6613e-02 | Validation Loss : 3.8441e-02
Training CC : 0.8709 | Validation CC : 0.8619
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5643e-01 | Validation Loss : 4.5691e-01
Training F1 Macro: 0.7755 | Validation F1 Macro : 0.8070
Training F1 Micro: 0.7892 | Validation F1 Micro : 0.8040
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5970e-02 | Validation Loss : 3.7471e-02
Training CC : 0.8732 | Validation CC : 0.8662
** Classification Losses **
Training Loss : 3.7093e-01 | Validation Loss : 5.1996e-01
Training F1 Macro: 0.7847 | Validation F1 Macro : 0.7550
Training F1 Micro: 0.7877 | Validation F1 Micro : 0.7500
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5187e-02 | Validation Loss : 3.6510e-02
Training CC : 0.8765 | Validation CC : 0.8694
** Classification Losses **
Training Loss : 4.1121e-01 | Validation Loss : 4.9189e-01
Training F1 Macro: 0.8169 | Validation F1 Macro : 0.7783
Training F1 Micro: 0.8100 | Validation F1 Micro : 0.7760
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4385e-02 | Validation Loss : 3.5989e-02
Training CC : 0.8792 | Validation CC : 0.8723
** Classification Losses **
Training Loss : 3.3013e-01 | Validation Loss : 5.1617e-01
Training F1 Macro: 0.8102 | Validation F1 Macro : 0.7545
Training F1 Micro: 0.8193 | Validation F1 Micro : 0.7520
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3722e-02 | Validation Loss : 3.5365e-02
Training CC : 0.8819 | Validation CC : 0.8738
** Classification Losses **
Training Loss : 3.6229e-01 | Validation Loss : 5.3433e-01
Training F1 Macro: 0.7964 | Validation F1 Macro : 0.7440
Training F1 Micro: 0.8013 | Validation F1 Micro : 0.7380
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3240e-02 | Validation Loss : 3.4806e-02
Training CC : 0.8836 | Validation CC : 0.8765
** Classification Losses **
Training Loss : 3.3275e-01 | Validation Loss : 4.8706e-01
Training F1 Macro: 0.8403 | Validation F1 Macro : 0.7903
Training F1 Micro: 0.8434 | Validation F1 Micro : 0.7860
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2839e-02 | Validation Loss : 3.4869e-02
Training CC : 0.8851 | Validation CC : 0.8762
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7283e-01 | Validation Loss : 4.8104e-01
Training F1 Macro: 0.7557 | Validation F1 Macro : 0.7880
Training F1 Micro: 0.7479 | Validation F1 Micro : 0.7900
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2954e-02 | Validation Loss : 3.4982e-02
Training CC : 0.8848 | Validation CC : 0.8757
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6545e-01 | Validation Loss : 4.5744e-01
Training F1 Macro: 0.8106 | Validation F1 Macro : 0.7965
Training F1 Micro: 0.8041 | Validation F1 Micro : 0.7940
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3925e-02 | Validation Loss : 3.5100e-02
Training CC : 0.8826 | Validation CC : 0.8752
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3902e-01 | Validation Loss : 5.0492e-01
Training F1 Macro: 0.7711 | Validation F1 Macro : 0.7761
Training F1 Micro: 0.7692 | Validation F1 Micro : 0.7760
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3218e-02 | Validation Loss : 3.5211e-02
Training CC : 0.8837 | Validation CC : 0.8747
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2974e-01 | Validation Loss : 4.9186e-01
Training F1 Macro: 0.7832 | Validation F1 Macro : 0.7719
Training F1 Micro: 0.7931 | Validation F1 Micro : 0.7660
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3189e-02 | Validation Loss : 3.5346e-02
Training CC : 0.8835 | Validation CC : 0.8741
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2953e-01 | Validation Loss : 4.9525e-01
Training F1 Macro: 0.7670 | Validation F1 Macro : 0.7805
Training F1 Micro: 0.7700 | Validation F1 Micro : 0.7800
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3152e-02 | Validation Loss : 3.4590e-02
Training CC : 0.8840 | Validation CC : 0.8769
** Classification Losses **
Training Loss : 3.9486e-01 | Validation Loss : 4.8384e-01
Training F1 Macro: 0.7777 | Validation F1 Macro : 0.7728
Training F1 Micro: 0.7924 | Validation F1 Micro : 0.7700
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2496e-02 | Validation Loss : 3.4150e-02
Training CC : 0.8865 | Validation CC : 0.8789
** Classification Losses **
Training Loss : 4.3200e-01 | Validation Loss : 4.7584e-01
Training F1 Macro: 0.7559 | Validation F1 Macro : 0.8103
Training F1 Micro: 0.7612 | Validation F1 Micro : 0.8080
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2013e-02 | Validation Loss : 3.3698e-02
Training CC : 0.8883 | Validation CC : 0.8805
** Classification Losses **
Training Loss : 3.9481e-01 | Validation Loss : 5.0424e-01
Training F1 Macro: 0.7750 | Validation F1 Macro : 0.7660
Training F1 Micro: 0.7920 | Validation F1 Micro : 0.7580
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1643e-02 | Validation Loss : 3.3260e-02
Training CC : 0.8899 | Validation CC : 0.8819
** Classification Losses **
Training Loss : 3.4861e-01 | Validation Loss : 5.2187e-01
Training F1 Macro: 0.8180 | Validation F1 Macro : 0.7492
Training F1 Micro: 0.8230 | Validation F1 Micro : 0.7440
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1421e-02 | Validation Loss : 3.3014e-02
Training CC : 0.8911 | Validation CC : 0.8830
** Classification Losses **
Training Loss : 3.9112e-01 | Validation Loss : 5.3061e-01
Training F1 Macro: 0.7690 | Validation F1 Macro : 0.7542
Training F1 Micro: 0.7688 | Validation F1 Micro : 0.7500
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0885e-02 | Validation Loss : 3.3098e-02
Training CC : 0.8923 | Validation CC : 0.8826
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9689e-01 | Validation Loss : 5.2818e-01
Training F1 Macro: 0.7607 | Validation F1 Macro : 0.7608
Training F1 Micro: 0.7697 | Validation F1 Micro : 0.7560
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1017e-02 | Validation Loss : 3.3193e-02
Training CC : 0.8918 | Validation CC : 0.8822
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8250e-01 | Validation Loss : 4.6977e-01
Training F1 Macro: 0.7906 | Validation F1 Macro : 0.7883
Training F1 Micro: 0.7920 | Validation F1 Micro : 0.7820
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1221e-02 | Validation Loss : 3.3281e-02
Training CC : 0.8913 | Validation CC : 0.8818
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7743e-01 | Validation Loss : 4.9804e-01
Training F1 Macro: 0.8085 | Validation F1 Macro : 0.7825
Training F1 Micro: 0.8054 | Validation F1 Micro : 0.7720
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1340e-02 | Validation Loss : 3.3391e-02
Training CC : 0.8908 | Validation CC : 0.8814
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3297e-01 | Validation Loss : 5.0085e-01
Training F1 Macro: 0.7833 | Validation F1 Macro : 0.7659
Training F1 Micro: 0.7810 | Validation F1 Micro : 0.7580
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1321e-02 | Validation Loss : 3.3457e-02
Training CC : 0.8908 | Validation CC : 0.8812
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1234e-01 | Validation Loss : 4.6862e-01
Training F1 Macro: 0.7977 | Validation F1 Macro : 0.7759
Training F1 Micro: 0.7943 | Validation F1 Micro : 0.7760
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0935e-02 | Validation Loss : 3.2933e-02
Training CC : 0.8918 | Validation CC : 0.8834
** Classification Losses **
Training Loss : 3.4600e-01 | Validation Loss : 4.6911e-01
Training F1 Macro: 0.8409 | Validation F1 Macro : 0.7853
Training F1 Micro: 0.8420 | Validation F1 Micro : 0.7780
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0638e-02 | Validation Loss : 3.2596e-02
Training CC : 0.8934 | Validation CC : 0.8845
** Classification Losses **
Training Loss : 3.7538e-01 | Validation Loss : 5.0974e-01
Training F1 Macro: 0.7852 | Validation F1 Macro : 0.7603
Training F1 Micro: 0.7796 | Validation F1 Micro : 0.7600
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0095e-02 | Validation Loss : 3.2180e-02
Training CC : 0.8950 | Validation CC : 0.8862
** Classification Losses **
Training Loss : 3.6182e-01 | Validation Loss : 4.6430e-01
Training F1 Macro: 0.7888 | Validation F1 Macro : 0.7948
Training F1 Micro: 0.7979 | Validation F1 Micro : 0.7900
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9889e-02 | Validation Loss : 3.1844e-02
Training CC : 0.8961 | Validation CC : 0.8874
** Classification Losses **
Training Loss : 3.8944e-01 | Validation Loss : 4.9234e-01
Training F1 Macro: 0.7583 | Validation F1 Macro : 0.7671
Training F1 Micro: 0.7853 | Validation F1 Micro : 0.7620
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9351e-02 | Validation Loss : 3.1586e-02
Training CC : 0.8978 | Validation CC : 0.8885
** Classification Losses **
Training Loss : 4.6734e-01 | Validation Loss : 4.4817e-01
Training F1 Macro: 0.7581 | Validation F1 Macro : 0.8033
Training F1 Micro: 0.7529 | Validation F1 Micro : 0.8000
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9104e-02 | Validation Loss : 3.1630e-02
Training CC : 0.8986 | Validation CC : 0.8883
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7653e-01 | Validation Loss : 4.5028e-01
Training F1 Macro: 0.7856 | Validation F1 Macro : 0.7992
Training F1 Micro: 0.7896 | Validation F1 Micro : 0.7980
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9069e-02 | Validation Loss : 3.1683e-02
Training CC : 0.8987 | Validation CC : 0.8880
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9580e-01 | Validation Loss : 4.5434e-01
Training F1 Macro: 0.7977 | Validation F1 Macro : 0.7977
Training F1 Micro: 0.8029 | Validation F1 Micro : 0.7960
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9308e-02 | Validation Loss : 3.1760e-02
Training CC : 0.8981 | Validation CC : 0.8877
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.6115e-01 | Validation Loss : 5.0435e-01
Training F1 Macro: 0.7612 | Validation F1 Macro : 0.7501
Training F1 Micro: 0.7612 | Validation F1 Micro : 0.7500
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9382e-02 | Validation Loss : 3.1857e-02
Training CC : 0.8978 | Validation CC : 0.8873
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7741e-01 | Validation Loss : 4.4365e-01
Training F1 Macro: 0.7461 | Validation F1 Macro : 0.8060
Training F1 Micro: 0.7491 | Validation F1 Micro : 0.8080
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9412e-02 | Validation Loss : 3.1946e-02
Training CC : 0.8976 | Validation CC : 0.8869
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6677e-01 | Validation Loss : 4.9405e-01
Training F1 Macro: 0.8205 | Validation F1 Macro : 0.7608
Training F1 Micro: 0.8226 | Validation F1 Micro : 0.7580
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9371e-02 | Validation Loss : 3.1557e-02
Training CC : 0.8979 | Validation CC : 0.8888
** Classification Losses **
Training Loss : 4.0280e-01 | Validation Loss : 5.3094e-01
Training F1 Macro: 0.7856 | Validation F1 Macro : 0.7685
Training F1 Micro: 0.7823 | Validation F1 Micro : 0.7640
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9020e-02 | Validation Loss : 3.1215e-02
Training CC : 0.8993 | Validation CC : 0.8897
** Classification Losses **
Training Loss : 3.4093e-01 | Validation Loss : 5.0928e-01
Training F1 Macro: 0.8174 | Validation F1 Macro : 0.7528
Training F1 Micro: 0.8147 | Validation F1 Micro : 0.7480
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8917e-02 | Validation Loss : 3.0945e-02
Training CC : 0.9001 | Validation CC : 0.8908
** Classification Losses **
Training Loss : 3.7552e-01 | Validation Loss : 4.7753e-01
Training F1 Macro: 0.7893 | Validation F1 Macro : 0.7853
Training F1 Micro: 0.7924 | Validation F1 Micro : 0.7800
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8627e-02 | Validation Loss : 3.0752e-02
Training CC : 0.9012 | Validation CC : 0.8917
** Classification Losses **
Training Loss : 3.9733e-01 | Validation Loss : 5.0863e-01
Training F1 Macro: 0.7820 | Validation F1 Macro : 0.7657
Training F1 Micro: 0.7968 | Validation F1 Micro : 0.7660
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9048e-02 | Validation Loss : 3.0505e-02
Training CC : 0.9011 | Validation CC : 0.8923
** Classification Losses **
Training Loss : 3.9624e-01 | Validation Loss : 5.0153e-01
Training F1 Macro: 0.7720 | Validation F1 Macro : 0.7692
Training F1 Micro: 0.7704 | Validation F1 Micro : 0.7680
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8011e-02 | Validation Loss : 3.0555e-02
Training CC : 0.9030 | Validation CC : 0.8921
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5623e-01 | Validation Loss : 5.1162e-01
Training F1 Macro: 0.8093 | Validation F1 Macro : 0.7582
Training F1 Micro: 0.8016 | Validation F1 Micro : 0.7580
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7988e-02 | Validation Loss : 3.0637e-02
Training CC : 0.9030 | Validation CC : 0.8918
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.8663e-01 | Validation Loss : 5.0287e-01
Training F1 Macro: 0.8246 | Validation F1 Macro : 0.7692
Training F1 Micro: 0.8233 | Validation F1 Micro : 0.7680
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8283e-02 | Validation Loss : 3.0750e-02
Training CC : 0.9022 | Validation CC : 0.8913
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8239e-01 | Validation Loss : 4.6863e-01
Training F1 Macro: 0.7883 | Validation F1 Macro : 0.7911
Training F1 Micro: 0.7896 | Validation F1 Micro : 0.7900
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8408e-02 | Validation Loss : 3.0904e-02
Training CC : 0.9018 | Validation CC : 0.8908
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9966e-01 | Validation Loss : 5.5393e-01
Training F1 Macro: 0.7666 | Validation F1 Macro : 0.7317
Training F1 Micro: 0.7655 | Validation F1 Micro : 0.7280
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8288e-02 | Validation Loss : 3.1051e-02
Training CC : 0.9018 | Validation CC : 0.8902
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2710e-01 | Validation Loss : 4.4294e-01
Training F1 Macro: 0.8375 | Validation F1 Macro : 0.8005
Training F1 Micro: 0.8408 | Validation F1 Micro : 0.7980
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8387e-02 | Validation Loss : 3.0717e-02
Training CC : 0.9018 | Validation CC : 0.8924
** Classification Losses **
Training Loss : 3.4569e-01 | Validation Loss : 5.2168e-01
Training F1 Macro: 0.7730 | Validation F1 Macro : 0.7350
Training F1 Micro: 0.8001 | Validation F1 Micro : 0.7320
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8533e-02 | Validation Loss : 3.0209e-02
Training CC : 0.9027 | Validation CC : 0.8935
** Classification Losses **
Training Loss : 3.6081e-01 | Validation Loss : 5.3385e-01
Training F1 Macro: 0.7791 | Validation F1 Macro : 0.7307
Training F1 Micro: 0.7792 | Validation F1 Micro : 0.7300
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7652e-02 | Validation Loss : 3.0069e-02
Training CC : 0.9045 | Validation CC : 0.8945
** Classification Losses **
Training Loss : 4.1609e-01 | Validation Loss : 4.8899e-01
Training F1 Macro: 0.7677 | Validation F1 Macro : 0.7652
Training F1 Micro: 0.7726 | Validation F1 Micro : 0.7640
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7985e-02 | Validation Loss : 2.9846e-02
Training CC : 0.9046 | Validation CC : 0.8952
** Classification Losses **
Training Loss : 3.9889e-01 | Validation Loss : 4.6881e-01
Training F1 Macro: 0.8206 | Validation F1 Macro : 0.7963
Training F1 Micro: 0.8246 | Validation F1 Micro : 0.7980
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6982e-02 | Validation Loss : 2.9557e-02
Training CC : 0.9067 | Validation CC : 0.8959
** Classification Losses **
Training Loss : 3.0181e-01 | Validation Loss : 4.8571e-01
Training F1 Macro: 0.8351 | Validation F1 Macro : 0.7661
Training F1 Micro: 0.8359 | Validation F1 Micro : 0.7660
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6800e-02 | Validation Loss : 2.9608e-02
Training CC : 0.9072 | Validation CC : 0.8957
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8673e-01 | Validation Loss : 5.2110e-01
Training F1 Macro: 0.7821 | Validation F1 Macro : 0.7635
Training F1 Micro: 0.7867 | Validation F1 Micro : 0.7620
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6776e-02 | Validation Loss : 2.9695e-02
Training CC : 0.9072 | Validation CC : 0.8953
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6516e-01 | Validation Loss : 5.1110e-01
Training F1 Macro: 0.8000 | Validation F1 Macro : 0.7565
Training F1 Micro: 0.8068 | Validation F1 Micro : 0.7540
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7005e-02 | Validation Loss : 2.9821e-02
Training CC : 0.9066 | Validation CC : 0.8948
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4320e-01 | Validation Loss : 4.8077e-01
Training F1 Macro: 0.8234 | Validation F1 Macro : 0.7907
Training F1 Micro: 0.8253 | Validation F1 Micro : 0.7880
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7406e-02 | Validation Loss : 2.9954e-02
Training CC : 0.9056 | Validation CC : 0.8943
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4701e-01 | Validation Loss : 5.4485e-01
Training F1 Macro: 0.7564 | Validation F1 Macro : 0.7594
Training F1 Micro: 0.7707 | Validation F1 Micro : 0.7560
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7147e-02 | Validation Loss : 3.0055e-02
Training CC : 0.9059 | Validation CC : 0.8940
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8204e-01 | Validation Loss : 5.2446e-01
Training F1 Macro: 0.7798 | Validation F1 Macro : 0.7596
Training F1 Micro: 0.7806 | Validation F1 Micro : 0.7580
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7455e-02 | Validation Loss : 2.9704e-02
Training CC : 0.9058 | Validation CC : 0.8956
** Classification Losses **
Training Loss : 3.7615e-01 | Validation Loss : 5.0130e-01
Training F1 Macro: 0.7872 | Validation F1 Macro : 0.7695
Training F1 Micro: 0.7900 | Validation F1 Micro : 0.7660
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7181e-02 | Validation Loss : 2.9445e-02
Training CC : 0.9068 | Validation CC : 0.8963
** Classification Losses **
Training Loss : 3.0066e-01 | Validation Loss : 4.7235e-01
Training F1 Macro: 0.8207 | Validation F1 Macro : 0.7976
Training F1 Micro: 0.8276 | Validation F1 Micro : 0.7940
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6452e-02 | Validation Loss : 2.9261e-02
Training CC : 0.9085 | Validation CC : 0.8971
** Classification Losses **
Training Loss : 3.2060e-01 | Validation Loss : 4.8217e-01
Training F1 Macro: 0.8111 | Validation F1 Macro : 0.7676
Training F1 Micro: 0.8118 | Validation F1 Micro : 0.7620
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6244e-02 | Validation Loss : 2.8989e-02
Training CC : 0.9093 | Validation CC : 0.8981
** Classification Losses **
Training Loss : 3.8329e-01 | Validation Loss : 4.7881e-01
Training F1 Macro: 0.7807 | Validation F1 Macro : 0.7903
Training F1 Micro: 0.8016 | Validation F1 Micro : 0.7880
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5961e-02 | Validation Loss : 2.8857e-02
Training CC : 0.9103 | Validation CC : 0.8987
** Classification Losses **
Training Loss : 3.5615e-01 | Validation Loss : 5.3346e-01
Training F1 Macro: 0.7937 | Validation F1 Macro : 0.7501
Training F1 Micro: 0.7920 | Validation F1 Micro : 0.7480
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5748e-02 | Validation Loss : 2.8890e-02
Training CC : 0.9109 | Validation CC : 0.8985
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.6092e-01 | Validation Loss : 5.1605e-01
Training F1 Macro: 0.7485 | Validation F1 Macro : 0.7515
Training F1 Micro: 0.7510 | Validation F1 Micro : 0.7540
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6108e-02 | Validation Loss : 2.8959e-02
Training CC : 0.9101 | Validation CC : 0.8982
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9065e-01 | Validation Loss : 5.4126e-01
Training F1 Macro: 0.8048 | Validation F1 Macro : 0.7505
Training F1 Micro: 0.8048 | Validation F1 Micro : 0.7480
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6449e-02 | Validation Loss : 2.9048e-02
Training CC : 0.9095 | Validation CC : 0.8979
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5963e-01 | Validation Loss : 5.3983e-01
Training F1 Macro: 0.7236 | Validation F1 Macro : 0.7492
Training F1 Micro: 0.7261 | Validation F1 Micro : 0.7420
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6097e-02 | Validation Loss : 2.9164e-02
Training CC : 0.9099 | Validation CC : 0.8974
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.0268e-01 | Validation Loss : 5.0704e-01
Training F1 Macro: 0.8324 | Validation F1 Macro : 0.7579
Training F1 Micro: 0.8447 | Validation F1 Micro : 0.7580
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6083e-02 | Validation Loss : 2.9301e-02
Training CC : 0.9098 | Validation CC : 0.8969
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1621e-01 | Validation Loss : 5.6059e-01
Training F1 Macro: 0.7865 | Validation F1 Macro : 0.7374
Training F1 Micro: 0.7946 | Validation F1 Micro : 0.7340
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6073e-02 | Validation Loss : 2.8913e-02
Training CC : 0.9099 | Validation CC : 0.8985
** Classification Losses **
Training Loss : 4.3218e-01 | Validation Loss : 5.3801e-01
Training F1 Macro: 0.7547 | Validation F1 Macro : 0.7443
Training F1 Micro: 0.7598 | Validation F1 Micro : 0.7400
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6458e-02 | Validation Loss : 2.8830e-02
Training CC : 0.9100 | Validation CC : 0.8989
** Classification Losses **
Training Loss : 3.2016e-01 | Validation Loss : 4.5355e-01
Training F1 Macro: 0.8140 | Validation F1 Macro : 0.7886
Training F1 Micro: 0.8307 | Validation F1 Micro : 0.7860
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5541e-02 | Validation Loss : 2.8582e-02
Training CC : 0.9119 | Validation CC : 0.8995
** Classification Losses **
Training Loss : 4.2739e-01 | Validation Loss : 5.3399e-01
Training F1 Macro: 0.7395 | Validation F1 Macro : 0.7459
Training F1 Micro: 0.7479 | Validation F1 Micro : 0.7440
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5826e-02 | Validation Loss : 2.8549e-02
Training CC : 0.9118 | Validation CC : 0.9002
** Classification Losses **
Training Loss : 3.8138e-01 | Validation Loss : 5.5683e-01
Training F1 Macro: 0.8015 | Validation F1 Macro : 0.7454
Training F1 Micro: 0.8048 | Validation F1 Micro : 0.7400
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5512e-02 | Validation Loss : 2.8287e-02
Training CC : 0.9126 | Validation CC : 0.9008
** Classification Losses **
Training Loss : 3.4386e-01 | Validation Loss : 5.4566e-01
Training F1 Macro: 0.8205 | Validation F1 Macro : 0.7415
Training F1 Micro: 0.8208 | Validation F1 Micro : 0.7400
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5443e-02 | Validation Loss : 2.8354e-02
Training CC : 0.9127 | Validation CC : 0.9005
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3610e-01 | Validation Loss : 5.1176e-01
Training F1 Macro: 0.8043 | Validation F1 Macro : 0.7552
Training F1 Micro: 0.7988 | Validation F1 Micro : 0.7520
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5059e-02 | Validation Loss : 2.8417e-02
Training CC : 0.9134 | Validation CC : 0.9003
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9074e-01 | Validation Loss : 5.1099e-01
Training F1 Macro: 0.7833 | Validation F1 Macro : 0.7632
Training F1 Micro: 0.7941 | Validation F1 Micro : 0.7580
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5147e-02 | Validation Loss : 2.8518e-02
Training CC : 0.9131 | Validation CC : 0.8999
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1938e-01 | Validation Loss : 5.1818e-01
Training F1 Macro: 0.7644 | Validation F1 Macro : 0.7705
Training F1 Micro: 0.7602 | Validation F1 Micro : 0.7700
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5199e-02 | Validation Loss : 2.8626e-02
Training CC : 0.9128 | Validation CC : 0.8994
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5925e-01 | Validation Loss : 5.0057e-01
Training F1 Macro: 0.7944 | Validation F1 Macro : 0.7792
Training F1 Micro: 0.8082 | Validation F1 Micro : 0.7760
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5400e-02 | Validation Loss : 2.8719e-02
Training CC : 0.9123 | Validation CC : 0.8991
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5009e-01 | Validation Loss : 5.0703e-01
Training F1 Macro: 0.7851 | Validation F1 Macro : 0.7667
Training F1 Micro: 0.7828 | Validation F1 Micro : 0.7620
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5126e-02 | Validation Loss : 2.8312e-02
Training CC : 0.9131 | Validation CC : 0.9008
** Classification Losses **
Training Loss : 4.1306e-01 | Validation Loss : 5.0534e-01
Training F1 Macro: 0.7735 | Validation F1 Macro : 0.7730
Training F1 Micro: 0.7640 | Validation F1 Micro : 0.7700
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5544e-02 | Validation Loss : 2.8078e-02
Training CC : 0.9130 | Validation CC : 0.9015
** Classification Losses **
Training Loss : 3.8431e-01 | Validation Loss : 4.9718e-01
Training F1 Macro: 0.7915 | Validation F1 Macro : 0.7796
Training F1 Micro: 0.7930 | Validation F1 Micro : 0.7700
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4675e-02 | Validation Loss : 2.8047e-02
Training CC : 0.9148 | Validation CC : 0.9017
** Classification Losses **
Training Loss : 3.7704e-01 | Validation Loss : 4.9967e-01
Training F1 Macro: 0.8110 | Validation F1 Macro : 0.7722
Training F1 Micro: 0.8103 | Validation F1 Micro : 0.7660
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4606e-02 | Validation Loss : 2.7854e-02
Training CC : 0.9153 | Validation CC : 0.9023
** Classification Losses **
Training Loss : 3.6805e-01 | Validation Loss : 5.9546e-01
Training F1 Macro: 0.8066 | Validation F1 Macro : 0.7074
Training F1 Micro: 0.8010 | Validation F1 Micro : 0.7000
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4775e-02 | Validation Loss : 2.7779e-02
Training CC : 0.9153 | Validation CC : 0.9025
** Classification Losses **
Training Loss : 4.3561e-01 | Validation Loss : 5.3773e-01
Training F1 Macro: 0.7719 | Validation F1 Macro : 0.7658
Training F1 Micro: 0.7708 | Validation F1 Micro : 0.7600
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4401e-02 | Validation Loss : 2.7843e-02
Training CC : 0.9161 | Validation CC : 0.9022
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2192e-01 | Validation Loss : 5.4809e-01
Training F1 Macro: 0.8259 | Validation F1 Macro : 0.7431
Training F1 Micro: 0.8376 | Validation F1 Micro : 0.7380
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4454e-02 | Validation Loss : 2.7929e-02
Training CC : 0.9159 | Validation CC : 0.9019
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4403e-01 | Validation Loss : 5.3428e-01
Training F1 Macro: 0.7793 | Validation F1 Macro : 0.7772
Training F1 Micro: 0.7732 | Validation F1 Micro : 0.7740
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4742e-02 | Validation Loss : 2.8067e-02
Training CC : 0.9152 | Validation CC : 0.9014
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9136e-01 | Validation Loss : 5.0610e-01
Training F1 Macro: 0.7871 | Validation F1 Macro : 0.7691
Training F1 Micro: 0.7865 | Validation F1 Micro : 0.7620
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4886e-02 | Validation Loss : 2.8194e-02
Training CC : 0.9146 | Validation CC : 0.9009
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5612e-01 | Validation Loss : 5.1931e-01
Training F1 Macro: 0.7761 | Validation F1 Macro : 0.7658
Training F1 Micro: 0.7747 | Validation F1 Micro : 0.7600
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5033e-02 | Validation Loss : 2.8307e-02
Training CC : 0.9141 | Validation CC : 0.9005
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.8672e-01 | Validation Loss : 4.8604e-01
Training F1 Macro: 0.8366 | Validation F1 Macro : 0.7609
Training F1 Micro: 0.8506 | Validation F1 Micro : 0.7500
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4740e-02 | Validation Loss : 2.8057e-02
Training CC : 0.9150 | Validation CC : 0.9024
** Classification Losses **
Training Loss : 3.6654e-01 | Validation Loss : 5.1846e-01
Training F1 Macro: 0.8176 | Validation F1 Macro : 0.7459
Training F1 Micro: 0.8113 | Validation F1 Micro : 0.7340
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4602e-02 | Validation Loss : 2.7742e-02
Training CC : 0.9156 | Validation CC : 0.9026
** Classification Losses **
Training Loss : 3.4383e-01 | Validation Loss : 5.3204e-01
Training F1 Macro: 0.8370 | Validation F1 Macro : 0.7348
Training F1 Micro: 0.8309 | Validation F1 Micro : 0.7280
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4253e-02 | Validation Loss : 2.7607e-02
Training CC : 0.9165 | Validation CC : 0.9034
** Classification Losses **
Training Loss : 4.7168e-01 | Validation Loss : 5.6823e-01
Training F1 Macro: 0.7500 | Validation F1 Macro : 0.7383
Training F1 Micro: 0.7404 | Validation F1 Micro : 0.7260
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4558e-02 | Validation Loss : 2.7400e-02
Training CC : 0.9164 | Validation CC : 0.9038
** Classification Losses **
Training Loss : 3.3976e-01 | Validation Loss : 5.6106e-01
Training F1 Macro: 0.8339 | Validation F1 Macro : 0.7432
Training F1 Micro: 0.8197 | Validation F1 Micro : 0.7340
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3952e-02 | Validation Loss : 2.7424e-02
Training CC : 0.9178 | Validation CC : 0.9044
** Classification Losses **
Training Loss : 3.2210e-01 | Validation Loss : 5.0968e-01
Training F1 Macro: 0.8423 | Validation F1 Macro : 0.7620
Training F1 Micro: 0.8343 | Validation F1 Micro : 0.7540
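The run above (and the fresh run that follows) alternates its optimization target epoch by epoch: the `<---- Now Optimizing` marker sits under **Autoencoding Losses** on odd-numbered epochs and under **Classification Losses** on even-numbered ones, while both sets of metrics are reported every mini epoch. A minimal, framework-free sketch of that alternating schedule is shown below; the function names and tuple layout are illustrative assumptions for this log format, not pyMSDtorch's actual API.

```python
def objective_for_epoch(epoch):
    # Matches the "Now Optimizing" marker in the log: odd epochs update the
    # autoencoding (reconstruction) loss, even epochs the classification loss.
    return "autoencoding" if epoch % 2 == 1 else "classification"


def training_schedule(n_epochs=25, mini_epochs=5):
    # Yield (epoch, mini_epoch, objective) triples for the whole run,
    # mirroring the "Epoch E of N >-*-< Mini Epoch m of M" header lines.
    for epoch in range(1, n_epochs + 1):
        for mini in range(1, mini_epochs + 1):
            yield epoch, mini, objective_for_epoch(epoch)


# Example: reproduce the header structure for the first two epochs.
for epoch, mini, obj in training_schedule(n_epochs=2):
    print(f"Epoch {epoch} of 2 >-*-< Mini Epoch {mini} of 5 -> optimizing {obj}")
```

Alternating whole epochs between the two losses (rather than summing them into one objective) lets each head settle against a temporarily fixed partner, which is consistent with the log: classification F1 drifts slightly during autoencoding epochs and recovers when its loss is optimized again.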
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.9868e-01 | Validation Loss : 3.5887e-01
Training CC : 0.2341 | Validation CC : 0.5114
** Classification Losses **
Training Loss : 1.4959e+00 | Validation Loss : 1.5120e+00
Training F1 Macro: 0.2154 | Validation F1 Macro : 0.1523
Training F1 Micro: 0.2387 | Validation F1 Micro : 0.1720
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9894e-01 | Validation Loss : 2.1681e-01
Training CC : 0.5923 | Validation CC : 0.6760
** Classification Losses **
Training Loss : 1.5420e+00 | Validation Loss : 1.4904e+00
Training F1 Macro: 0.1430 | Validation F1 Macro : 0.1566
Training F1 Micro: 0.1549 | Validation F1 Micro : 0.1720
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.7912e-01 | Validation Loss : 1.2794e-01
Training CC : 0.7241 | Validation CC : 0.7642
** Classification Losses **
Training Loss : 1.5262e+00 | Validation Loss : 1.4808e+00
Training F1 Macro: 0.1778 | Validation F1 Macro : 0.1743
Training F1 Micro: 0.1783 | Validation F1 Micro : 0.1920
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.0543e-01 | Validation Loss : 7.7546e-02
Training CC : 0.7856 | Validation CC : 0.7990
** Classification Losses **
Training Loss : 1.5020e+00 | Validation Loss : 1.4967e+00
Training F1 Macro: 0.1304 | Validation F1 Macro : 0.1842
Training F1 Micro: 0.1478 | Validation F1 Micro : 0.1980
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.6392e-02 | Validation Loss : 5.3870e-02
Training CC : 0.8123 | Validation CC : 0.8174
** Classification Losses **
Training Loss : 1.4912e+00 | Validation Loss : 1.4790e+00
Training F1 Macro: 0.1783 | Validation F1 Macro : 0.1976
Training F1 Micro: 0.1963 | Validation F1 Micro : 0.2100
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.7524e-02 | Validation Loss : 6.7557e-02
Training CC : 0.8072 | Validation CC : 0.7616
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3672e+00 | Validation Loss : 1.1986e+00
Training F1 Macro: 0.2846 | Validation F1 Macro : 0.4786
Training F1 Micro: 0.3063 | Validation F1 Micro : 0.4860
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.6205e-02 | Validation Loss : 8.5809e-02
Training CC : 0.7305 | Validation CC : 0.6858
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0008e+00 | Validation Loss : 1.0349e+00
Training F1 Macro: 0.7001 | Validation F1 Macro : 0.6226
Training F1 Micro: 0.6974 | Validation F1 Micro : 0.6300
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 8.5313e-02 | Validation Loss : 7.9742e-02
Training CC : 0.6935 | Validation CC : 0.7114
** Classification Losses ** <---- Now Optimizing
Training Loss : 8.4318e-01 | Validation Loss : 9.2819e-01
Training F1 Macro: 0.7171 | Validation F1 Macro : 0.6649
Training F1 Micro: 0.7267 | Validation F1 Micro : 0.6760
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.6074e-02 | Validation Loss : 7.4245e-02
Training CC : 0.7335 | Validation CC : 0.7381
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.6416e-01 | Validation Loss : 8.4691e-01
Training F1 Macro: 0.7943 | Validation F1 Macro : 0.6920
Training F1 Micro: 0.8039 | Validation F1 Micro : 0.6860
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.7031e-02 | Validation Loss : 8.4320e-02
Training CC : 0.7326 | Validation CC : 0.6946
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.8200e-01 | Validation Loss : 7.7217e-01
Training F1 Macro: 0.7788 | Validation F1 Macro : 0.7074
Training F1 Micro: 0.7856 | Validation F1 Micro : 0.7100
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.2718e-02 | Validation Loss : 5.5049e-02
Training CC : 0.7832 | Validation CC : 0.8115
** Classification Losses **
Training Loss : 6.1116e-01 | Validation Loss : 8.3978e-01
Training F1 Macro: 0.7811 | Validation F1 Macro : 0.6925
Training F1 Micro: 0.7712 | Validation F1 Micro : 0.6940
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.9501e-02 | Validation Loss : 4.6884e-02
Training CC : 0.8267 | Validation CC : 0.8299
** Classification Losses **
Training Loss : 6.3505e-01 | Validation Loss : 8.2340e-01
Training F1 Macro: 0.7810 | Validation F1 Macro : 0.7192
Training F1 Micro: 0.7786 | Validation F1 Micro : 0.7180
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.5087e-02 | Validation Loss : 4.3537e-02
Training CC : 0.8395 | Validation CC : 0.8460
** Classification Losses **
Training Loss : 6.1292e-01 | Validation Loss : 8.7064e-01
Training F1 Macro: 0.8223 | Validation F1 Macro : 0.6760
Training F1 Micro: 0.8272 | Validation F1 Micro : 0.6840
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.2009e-02 | Validation Loss : 4.1736e-02
Training CC : 0.8532 | Validation CC : 0.8528
** Classification Losses **
Training Loss : 6.7075e-01 | Validation Loss : 8.3070e-01
Training F1 Macro: 0.7555 | Validation F1 Macro : 0.7007
Training F1 Micro: 0.7645 | Validation F1 Micro : 0.7000
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9915e-02 | Validation Loss : 3.9228e-02
Training CC : 0.8598 | Validation CC : 0.8595
** Classification Losses **
Training Loss : 6.6677e-01 | Validation Loss : 8.6318e-01
Training F1 Macro: 0.7830 | Validation F1 Macro : 0.6562
Training F1 Micro: 0.7799 | Validation F1 Micro : 0.6560
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9108e-02 | Validation Loss : 4.2105e-02
Training CC : 0.8618 | Validation CC : 0.8503
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.2480e-01 | Validation Loss : 7.5823e-01
Training F1 Macro: 0.7761 | Validation F1 Macro : 0.7091
Training F1 Micro: 0.7890 | Validation F1 Micro : 0.7120
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.5343e-02 | Validation Loss : 5.1984e-02
Training CC : 0.8429 | Validation CC : 0.8168
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8951e-01 | Validation Loss : 6.8570e-01
Training F1 Macro: 0.8321 | Validation F1 Macro : 0.7330
Training F1 Micro: 0.8384 | Validation F1 Micro : 0.7360
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.6140e-02 | Validation Loss : 6.1337e-02
Training CC : 0.8052 | Validation CC : 0.7810
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2696e-01 | Validation Loss : 6.3935e-01
Training F1 Macro: 0.8316 | Validation F1 Macro : 0.7501
Training F1 Micro: 0.8233 | Validation F1 Micro : 0.7520
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.3185e-02 | Validation Loss : 6.5445e-02
Training CC : 0.7778 | Validation CC : 0.7651
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8988e-01 | Validation Loss : 6.4374e-01
Training F1 Macro: 0.8460 | Validation F1 Macro : 0.7107
Training F1 Micro: 0.8530 | Validation F1 Micro : 0.7160
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 6.5909e-02 | Validation Loss : 6.6261e-02
Training CC : 0.7682 | Validation CC : 0.7636
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5947e-01 | Validation Loss : 6.3093e-01
Training F1 Macro: 0.7687 | Validation F1 Macro : 0.7291
Training F1 Micro: 0.7694 | Validation F1 Micro : 0.7300
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.9984e-02 | Validation Loss : 4.2499e-02
Training CC : 0.8219 | Validation CC : 0.8537
** Classification Losses **
Training Loss : 4.0114e-01 | Validation Loss : 6.5621e-01
Training F1 Macro: 0.8202 | Validation F1 Macro : 0.7205
Training F1 Micro: 0.8288 | Validation F1 Micro : 0.7180
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.2735e-02 | Validation Loss : 4.1410e-02
Training CC : 0.8569 | Validation CC : 0.8545
** Classification Losses **
Training Loss : 5.0873e-01 | Validation Loss : 6.5917e-01
Training F1 Macro: 0.7505 | Validation F1 Macro : 0.7221
Training F1 Micro: 0.7652 | Validation F1 Micro : 0.7200
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9162e-02 | Validation Loss : 3.8922e-02
Training CC : 0.8627 | Validation CC : 0.8607
** Classification Losses **
Training Loss : 4.6297e-01 | Validation Loss : 6.4654e-01
Training F1 Macro: 0.7912 | Validation F1 Macro : 0.7389
Training F1 Micro: 0.7895 | Validation F1 Micro : 0.7400
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.7997e-02 | Validation Loss : 3.6831e-02
Training CC : 0.8677 | Validation CC : 0.8684
** Classification Losses **
Training Loss : 4.6415e-01 | Validation Loss : 6.9991e-01
Training F1 Macro: 0.7792 | Validation F1 Macro : 0.6991
Training F1 Micro: 0.7968 | Validation F1 Micro : 0.7020
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5807e-02 | Validation Loss : 3.5887e-02
Training CC : 0.8743 | Validation CC : 0.8722
** Classification Losses **
Training Loss : 4.2755e-01 | Validation Loss : 6.7651e-01
Training F1 Macro: 0.8212 | Validation F1 Macro : 0.7310
Training F1 Micro: 0.8231 | Validation F1 Micro : 0.7300
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5257e-02 | Validation Loss : 3.5864e-02
Training CC : 0.8762 | Validation CC : 0.8718
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8743e-01 | Validation Loss : 6.2164e-01
Training F1 Macro: 0.7657 | Validation F1 Macro : 0.7424
Training F1 Micro: 0.7802 | Validation F1 Micro : 0.7400
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5719e-02 | Validation Loss : 3.6366e-02
Training CC : 0.8746 | Validation CC : 0.8699
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4860e-01 | Validation Loss : 6.0388e-01
Training F1 Macro: 0.8592 | Validation F1 Macro : 0.7506
Training F1 Micro: 0.8557 | Validation F1 Micro : 0.7500
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6532e-02 | Validation Loss : 3.7169e-02
Training CC : 0.8721 | Validation CC : 0.8671
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.2055e-01 | Validation Loss : 5.9445e-01
Training F1 Macro: 0.7539 | Validation F1 Macro : 0.7273
Training F1 Micro: 0.7464 | Validation F1 Micro : 0.7340
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6998e-02 | Validation Loss : 3.8012e-02
Training CC : 0.8699 | Validation CC : 0.8641
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5879e-01 | Validation Loss : 5.9490e-01
Training F1 Macro: 0.7653 | Validation F1 Macro : 0.7406
Training F1 Micro: 0.7656 | Validation F1 Micro : 0.7400
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.8020e-02 | Validation Loss : 3.8724e-02
Training CC : 0.8667 | Validation CC : 0.8618
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7517e-01 | Validation Loss : 5.6518e-01
Training F1 Macro: 0.8356 | Validation F1 Macro : 0.7430
Training F1 Micro: 0.8350 | Validation F1 Micro : 0.7420
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6804e-02 | Validation Loss : 3.5862e-02
Training CC : 0.8708 | Validation CC : 0.8725
** Classification Losses **
Training Loss : 3.9547e-01 | Validation Loss : 5.7111e-01
Training F1 Macro: 0.8014 | Validation F1 Macro : 0.7367
Training F1 Micro: 0.8050 | Validation F1 Micro : 0.7420
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4710e-02 | Validation Loss : 3.5063e-02
Training CC : 0.8784 | Validation CC : 0.8760
** Classification Losses **
Training Loss : 3.0799e-01 | Validation Loss : 5.7857e-01
Training F1 Macro: 0.8529 | Validation F1 Macro : 0.7426
Training F1 Micro: 0.8580 | Validation F1 Micro : 0.7440
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3640e-02 | Validation Loss : 3.4017e-02
Training CC : 0.8822 | Validation CC : 0.8793
** Classification Losses **
Training Loss : 4.4996e-01 | Validation Loss : 5.7966e-01
Training F1 Macro: 0.7333 | Validation F1 Macro : 0.7425
Training F1 Micro: 0.7511 | Validation F1 Micro : 0.7420
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2802e-02 | Validation Loss : 3.3093e-02
Training CC : 0.8850 | Validation CC : 0.8824
** Classification Losses **
Training Loss : 3.8231e-01 | Validation Loss : 6.1486e-01
Training F1 Macro: 0.8022 | Validation F1 Macro : 0.7124
Training F1 Micro: 0.8067 | Validation F1 Micro : 0.7140
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2528e-02 | Validation Loss : 3.2509e-02
Training CC : 0.8871 | Validation CC : 0.8846
** Classification Losses **
Training Loss : 3.6199e-01 | Validation Loss : 6.1090e-01
Training F1 Macro: 0.8441 | Validation F1 Macro : 0.7329
Training F1 Micro: 0.8395 | Validation F1 Micro : 0.7340
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1731e-02 | Validation Loss : 3.2591e-02
Training CC : 0.8891 | Validation CC : 0.8842
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1988e-01 | Validation Loss : 5.7798e-01
Training F1 Macro: 0.8403 | Validation F1 Macro : 0.7386
Training F1 Micro: 0.8425 | Validation F1 Micro : 0.7360
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2021e-02 | Validation Loss : 3.2921e-02
Training CC : 0.8883 | Validation CC : 0.8832
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2939e-01 | Validation Loss : 5.5297e-01
Training F1 Macro: 0.7682 | Validation F1 Macro : 0.7520
Training F1 Micro: 0.7672 | Validation F1 Micro : 0.7500
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2534e-02 | Validation Loss : 3.3285e-02
Training CC : 0.8870 | Validation CC : 0.8821
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4085e-01 | Validation Loss : 5.4931e-01
Training F1 Macro: 0.7781 | Validation F1 Macro : 0.7349
Training F1 Micro: 0.7799 | Validation F1 Micro : 0.7340
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2680e-02 | Validation Loss : 3.3525e-02
Training CC : 0.8864 | Validation CC : 0.8814
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5246e-01 | Validation Loss : 5.5338e-01
Training F1 Macro: 0.8065 | Validation F1 Macro : 0.7403
Training F1 Micro: 0.8026 | Validation F1 Micro : 0.7380
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2905e-02 | Validation Loss : 3.3663e-02
Training CC : 0.8857 | Validation CC : 0.8809
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.1217e-01 | Validation Loss : 5.7205e-01
Training F1 Macro: 0.6954 | Validation F1 Macro : 0.7394
Training F1 Micro: 0.6889 | Validation F1 Micro : 0.7320
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2292e-02 | Validation Loss : 3.2444e-02
Training CC : 0.8874 | Validation CC : 0.8849
** Classification Losses **
Training Loss : 4.5104e-01 | Validation Loss : 5.7489e-01
Training F1 Macro: 0.7818 | Validation F1 Macro : 0.7520
Training F1 Micro: 0.7745 | Validation F1 Micro : 0.7420
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1402e-02 | Validation Loss : 3.2005e-02
Training CC : 0.8904 | Validation CC : 0.8869
** Classification Losses **
Training Loss : 4.7377e-01 | Validation Loss : 6.0845e-01
Training F1 Macro: 0.7893 | Validation F1 Macro : 0.7290
Training F1 Micro: 0.7807 | Validation F1 Micro : 0.7280
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0853e-02 | Validation Loss : 3.1329e-02
Training CC : 0.8924 | Validation CC : 0.8890
** Classification Losses **
Training Loss : 4.2474e-01 | Validation Loss : 5.8508e-01
Training F1 Macro: 0.7916 | Validation F1 Macro : 0.7346
Training F1 Micro: 0.7944 | Validation F1 Micro : 0.7320
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0168e-02 | Validation Loss : 3.0800e-02
Training CC : 0.8947 | Validation CC : 0.8910
** Classification Losses **
Training Loss : 4.5395e-01 | Validation Loss : 6.1173e-01
Training F1 Macro: 0.7866 | Validation F1 Macro : 0.7230
Training F1 Micro: 0.7772 | Validation F1 Micro : 0.7180
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9666e-02 | Validation Loss : 3.0416e-02
Training CC : 0.8965 | Validation CC : 0.8925
** Classification Losses **
Training Loss : 3.1989e-01 | Validation Loss : 6.3724e-01
Training F1 Macro: 0.8191 | Validation F1 Macro : 0.7139
Training F1 Micro: 0.8566 | Validation F1 Micro : 0.7080
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9468e-02 | Validation Loss : 3.0454e-02
Training CC : 0.8973 | Validation CC : 0.8923
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9924e-01 | Validation Loss : 6.1870e-01
Training F1 Macro: 0.8160 | Validation F1 Macro : 0.7138
Training F1 Micro: 0.8212 | Validation F1 Micro : 0.7080
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9873e-02 | Validation Loss : 3.0533e-02
Training CC : 0.8965 | Validation CC : 0.8920
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9818e-01 | Validation Loss : 5.7746e-01
Training F1 Macro: 0.8172 | Validation F1 Macro : 0.7504
Training F1 Micro: 0.8250 | Validation F1 Micro : 0.7420
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9703e-02 | Validation Loss : 3.0694e-02
Training CC : 0.8966 | Validation CC : 0.8914
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9220e-01 | Validation Loss : 5.7774e-01
Training F1 Macro: 0.8002 | Validation F1 Macro : 0.7335
Training F1 Micro: 0.8072 | Validation F1 Micro : 0.7280
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9876e-02 | Validation Loss : 3.0862e-02
Training CC : 0.8960 | Validation CC : 0.8908
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8278e-01 | Validation Loss : 6.0605e-01
Training F1 Macro: 0.8269 | Validation F1 Macro : 0.7229
Training F1 Micro: 0.8282 | Validation F1 Micro : 0.7140
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0345e-02 | Validation Loss : 3.1008e-02
Training CC : 0.8949 | Validation CC : 0.8903
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9609e-01 | Validation Loss : 5.9640e-01
Training F1 Macro: 0.8165 | Validation F1 Macro : 0.7227
Training F1 Micro: 0.8105 | Validation F1 Micro : 0.7160
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9808e-02 | Validation Loss : 3.0377e-02
Training CC : 0.8964 | Validation CC : 0.8927
** Classification Losses **
Training Loss : 3.0554e-01 | Validation Loss : 5.7922e-01
Training F1 Macro: 0.8548 | Validation F1 Macro : 0.7340
Training F1 Micro: 0.8500 | Validation F1 Micro : 0.7240
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9369e-02 | Validation Loss : 2.9996e-02
Training CC : 0.8982 | Validation CC : 0.8941
** Classification Losses **
Training Loss : 4.2997e-01 | Validation Loss : 5.9309e-01
Training F1 Macro: 0.8005 | Validation F1 Macro : 0.7159
Training F1 Micro: 0.7903 | Validation F1 Micro : 0.7100
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8731e-02 | Validation Loss : 2.9536e-02
Training CC : 0.9000 | Validation CC : 0.8958
** Classification Losses **
Training Loss : 3.9716e-01 | Validation Loss : 6.1202e-01
Training F1 Macro: 0.8138 | Validation F1 Macro : 0.7350
Training F1 Micro: 0.8132 | Validation F1 Micro : 0.7300
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8349e-02 | Validation Loss : 2.9123e-02
Training CC : 0.9016 | Validation CC : 0.8973
** Classification Losses **
Training Loss : 4.1883e-01 | Validation Loss : 6.1556e-01
Training F1 Macro: 0.7854 | Validation F1 Macro : 0.7442
Training F1 Micro: 0.7831 | Validation F1 Micro : 0.7360
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8070e-02 | Validation Loss : 2.8861e-02
Training CC : 0.9028 | Validation CC : 0.8984
** Classification Losses **
Training Loss : 4.5925e-01 | Validation Loss : 5.7806e-01
Training F1 Macro: 0.7894 | Validation F1 Macro : 0.7415
Training F1 Micro: 0.7886 | Validation F1 Micro : 0.7400
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7748e-02 | Validation Loss : 2.8881e-02
Training CC : 0.9037 | Validation CC : 0.8982
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4974e-01 | Validation Loss : 5.9102e-01
Training F1 Macro: 0.8305 | Validation F1 Macro : 0.7206
Training F1 Micro: 0.8291 | Validation F1 Micro : 0.7240
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7972e-02 | Validation Loss : 2.8964e-02
Training CC : 0.9032 | Validation CC : 0.8979
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7413e-01 | Validation Loss : 6.1996e-01
Training F1 Macro: 0.8261 | Validation F1 Macro : 0.7375
Training F1 Micro: 0.8127 | Validation F1 Micro : 0.7300
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7918e-02 | Validation Loss : 2.9090e-02
Training CC : 0.9031 | Validation CC : 0.8974
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8635e-01 | Validation Loss : 5.4232e-01
Training F1 Macro: 0.8161 | Validation F1 Macro : 0.7701
Training F1 Micro: 0.8171 | Validation F1 Micro : 0.7680
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8259e-02 | Validation Loss : 2.9237e-02
Training CC : 0.9022 | Validation CC : 0.8969
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2958e-01 | Validation Loss : 5.9202e-01
Training F1 Macro: 0.8497 | Validation F1 Macro : 0.7324
Training F1 Micro: 0.8476 | Validation F1 Micro : 0.7280
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8223e-02 | Validation Loss : 2.9382e-02
Training CC : 0.9021 | Validation CC : 0.8964
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6315e-01 | Validation Loss : 5.8186e-01
Training F1 Macro: 0.8234 | Validation F1 Macro : 0.7367
Training F1 Micro: 0.8231 | Validation F1 Micro : 0.7320
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8134e-02 | Validation Loss : 2.8963e-02
Training CC : 0.9027 | Validation CC : 0.8982
** Classification Losses **
Training Loss : 3.8270e-01 | Validation Loss : 5.8828e-01
Training F1 Macro: 0.7979 | Validation F1 Macro : 0.7441
Training F1 Micro: 0.7924 | Validation F1 Micro : 0.7420
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7525e-02 | Validation Loss : 2.8481e-02
Training CC : 0.9044 | Validation CC : 0.8998
** Classification Losses **
Training Loss : 3.2745e-01 | Validation Loss : 5.9597e-01
Training F1 Macro: 0.8151 | Validation F1 Macro : 0.7432
Training F1 Micro: 0.8204 | Validation F1 Micro : 0.7400
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7157e-02 | Validation Loss : 2.8134e-02
Training CC : 0.9058 | Validation CC : 0.9010
** Classification Losses **
Training Loss : 3.3307e-01 | Validation Loss : 5.7624e-01
Training F1 Macro: 0.8549 | Validation F1 Macro : 0.7504
Training F1 Micro: 0.8509 | Validation F1 Micro : 0.7460
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7042e-02 | Validation Loss : 2.7922e-02
Training CC : 0.9067 | Validation CC : 0.9019
** Classification Losses **
Training Loss : 3.6568e-01 | Validation Loss : 6.0325e-01
Training F1 Macro: 0.8200 | Validation F1 Macro : 0.7256
Training F1 Micro: 0.8160 | Validation F1 Micro : 0.7200
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6641e-02 | Validation Loss : 2.7683e-02
Training CC : 0.9078 | Validation CC : 0.9027
** Classification Losses **
Training Loss : 3.4737e-01 | Validation Loss : 6.1237e-01
Training F1 Macro: 0.8691 | Validation F1 Macro : 0.7248
Training F1 Micro: 0.8648 | Validation F1 Micro : 0.7260
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6457e-02 | Validation Loss : 2.7735e-02
Training CC : 0.9085 | Validation CC : 0.9025
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6110e-01 | Validation Loss : 5.7326e-01
Training F1 Macro: 0.8376 | Validation F1 Macro : 0.7431
Training F1 Micro: 0.8312 | Validation F1 Micro : 0.7400
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6560e-02 | Validation Loss : 2.7842e-02
Training CC : 0.9081 | Validation CC : 0.9021
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8576e-01 | Validation Loss : 6.0416e-01
Training F1 Macro: 0.8283 | Validation F1 Macro : 0.7324
Training F1 Micro: 0.8185 | Validation F1 Micro : 0.7280
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6702e-02 | Validation Loss : 2.7966e-02
Training CC : 0.9078 | Validation CC : 0.9017
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1421e-01 | Validation Loss : 5.9153e-01
Training F1 Macro: 0.7855 | Validation F1 Macro : 0.7291
Training F1 Micro: 0.7785 | Validation F1 Micro : 0.7240
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6775e-02 | Validation Loss : 2.8038e-02
Training CC : 0.9075 | Validation CC : 0.9014
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2271e-01 | Validation Loss : 6.3040e-01
Training F1 Macro: 0.7825 | Validation F1 Macro : 0.7266
Training F1 Micro: 0.7767 | Validation F1 Micro : 0.7200
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6883e-02 | Validation Loss : 2.8090e-02
Training CC : 0.9072 | Validation CC : 0.9012
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.0265e-01 | Validation Loss : 5.4581e-01
Training F1 Macro: 0.8296 | Validation F1 Macro : 0.7564
Training F1 Micro: 0.8238 | Validation F1 Micro : 0.7500
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7095e-02 | Validation Loss : 2.7683e-02
Training CC : 0.9072 | Validation CC : 0.9028
** Classification Losses **
Training Loss : 4.2116e-01 | Validation Loss : 6.0582e-01
Training F1 Macro: 0.7980 | Validation F1 Macro : 0.7243
Training F1 Micro: 0.7944 | Validation F1 Micro : 0.7180
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6216e-02 | Validation Loss : 2.7391e-02
Training CC : 0.9093 | Validation CC : 0.9039
** Classification Losses **
Training Loss : 4.6899e-01 | Validation Loss : 6.3304e-01
Training F1 Macro: 0.7785 | Validation F1 Macro : 0.6945
Training F1 Micro: 0.7825 | Validation F1 Micro : 0.6920
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6014e-02 | Validation Loss : 2.7109e-02
Training CC : 0.9102 | Validation CC : 0.9048
** Classification Losses **
Training Loss : 3.8121e-01 | Validation Loss : 5.9040e-01
Training F1 Macro: 0.8145 | Validation F1 Macro : 0.7365
Training F1 Micro: 0.8025 | Validation F1 Micro : 0.7320
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5632e-02 | Validation Loss : 2.6951e-02
Training CC : 0.9113 | Validation CC : 0.9055
** Classification Losses **
Training Loss : 4.0933e-01 | Validation Loss : 6.5076e-01
Training F1 Macro: 0.8071 | Validation F1 Macro : 0.7079
Training F1 Micro: 0.8157 | Validation F1 Micro : 0.7040
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5521e-02 | Validation Loss : 2.6710e-02
Training CC : 0.9119 | Validation CC : 0.9063
** Classification Losses **
Training Loss : 3.4909e-01 | Validation Loss : 6.3893e-01
Training F1 Macro: 0.8238 | Validation F1 Macro : 0.7127
Training F1 Micro: 0.8147 | Validation F1 Micro : 0.7100
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5418e-02 | Validation Loss : 2.6754e-02
Training CC : 0.9124 | Validation CC : 0.9061
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.8644e-01 | Validation Loss : 6.7240e-01
Training F1 Macro: 0.8745 | Validation F1 Macro : 0.6953
Training F1 Micro: 0.8670 | Validation F1 Micro : 0.7000
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5637e-02 | Validation Loss : 2.6845e-02
Training CC : 0.9119 | Validation CC : 0.9058
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5535e-01 | Validation Loss : 5.7341e-01
Training F1 Macro: 0.7957 | Validation F1 Macro : 0.7283
Training F1 Micro: 0.7828 | Validation F1 Micro : 0.7240
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5411e-02 | Validation Loss : 2.6972e-02
Training CC : 0.9122 | Validation CC : 0.9054
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2847e-01 | Validation Loss : 6.2173e-01
Training F1 Macro: 0.8364 | Validation F1 Macro : 0.7026
Training F1 Micro: 0.8312 | Validation F1 Micro : 0.6960
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5793e-02 | Validation Loss : 2.7089e-02
Training CC : 0.9114 | Validation CC : 0.9050
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8775e-01 | Validation Loss : 5.9186e-01
Training F1 Macro: 0.8210 | Validation F1 Macro : 0.7234
Training F1 Micro: 0.8104 | Validation F1 Micro : 0.7200
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5783e-02 | Validation Loss : 2.7190e-02
Training CC : 0.9113 | Validation CC : 0.9047
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2023e-01 | Validation Loss : 6.0627e-01
Training F1 Macro: 0.8513 | Validation F1 Macro : 0.7140
Training F1 Micro: 0.8456 | Validation F1 Micro : 0.7080
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5654e-02 | Validation Loss : 2.6900e-02
Training CC : 0.9115 | Validation CC : 0.9060
** Classification Losses **
Training Loss : 4.4896e-01 | Validation Loss : 5.9745e-01
Training F1 Macro: 0.7801 | Validation F1 Macro : 0.7064
Training F1 Micro: 0.7805 | Validation F1 Micro : 0.7020
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5561e-02 | Validation Loss : 2.6615e-02
Training CC : 0.9123 | Validation CC : 0.9068
** Classification Losses **
Training Loss : 3.8235e-01 | Validation Loss : 6.4174e-01
Training F1 Macro: 0.8059 | Validation F1 Macro : 0.7100
Training F1 Micro: 0.8032 | Validation F1 Micro : 0.7060
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5270e-02 | Validation Loss : 2.6406e-02
Training CC : 0.9132 | Validation CC : 0.9076
** Classification Losses **
Training Loss : 3.2552e-01 | Validation Loss : 5.8625e-01
Training F1 Macro: 0.8176 | Validation F1 Macro : 0.7502
Training F1 Micro: 0.8166 | Validation F1 Micro : 0.7440
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5012e-02 | Validation Loss : 2.6217e-02
Training CC : 0.9141 | Validation CC : 0.9082
** Classification Losses **
Training Loss : 3.8341e-01 | Validation Loss : 6.7951e-01
Training F1 Macro: 0.8214 | Validation F1 Macro : 0.6797
Training F1 Micro: 0.8205 | Validation F1 Micro : 0.6800
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4931e-02 | Validation Loss : 2.6142e-02
Training CC : 0.9145 | Validation CC : 0.9084
** Classification Losses **
Training Loss : 3.3366e-01 | Validation Loss : 6.8486e-01
Training F1 Macro: 0.8168 | Validation F1 Macro : 0.6861
Training F1 Micro: 0.8096 | Validation F1 Micro : 0.6820
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4637e-02 | Validation Loss : 2.6184e-02
Training CC : 0.9152 | Validation CC : 0.9083
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0799e-01 | Validation Loss : 6.3986e-01
Training F1 Macro: 0.8172 | Validation F1 Macro : 0.7065
Training F1 Micro: 0.8096 | Validation F1 Micro : 0.7040
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4948e-02 | Validation Loss : 2.6264e-02
Training CC : 0.9145 | Validation CC : 0.9080
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9592e-01 | Validation Loss : 5.7108e-01
Training F1 Macro: 0.8010 | Validation F1 Macro : 0.7551
Training F1 Micro: 0.8010 | Validation F1 Micro : 0.7540
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4814e-02 | Validation Loss : 2.6370e-02
Training CC : 0.9147 | Validation CC : 0.9077
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0328e-01 | Validation Loss : 6.0726e-01
Training F1 Macro: 0.8123 | Validation F1 Macro : 0.7450
Training F1 Micro: 0.8053 | Validation F1 Micro : 0.7440
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4866e-02 | Validation Loss : 2.6456e-02
Training CC : 0.9145 | Validation CC : 0.9074
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3993e-01 | Validation Loss : 5.8318e-01
Training F1 Macro: 0.8169 | Validation F1 Macro : 0.7283
Training F1 Micro: 0.8097 | Validation F1 Micro : 0.7240
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4968e-02 | Validation Loss : 2.6555e-02
Training CC : 0.9142 | Validation CC : 0.9071
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9412e-01 | Validation Loss : 5.9819e-01
Training F1 Macro: 0.8057 | Validation F1 Macro : 0.7429
Training F1 Micro: 0.8102 | Validation F1 Micro : 0.7360
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5135e-02 | Validation Loss : 2.6330e-02
Training CC : 0.9140 | Validation CC : 0.9082
** Classification Losses **
Training Loss : 3.8532e-01 | Validation Loss : 6.4709e-01
Training F1 Macro: 0.8100 | Validation F1 Macro : 0.7178
Training F1 Micro: 0.7963 | Validation F1 Micro : 0.7140
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4815e-02 | Validation Loss : 2.5968e-02
Training CC : 0.9149 | Validation CC : 0.9092
** Classification Losses **
Training Loss : 4.0444e-01 | Validation Loss : 6.4331e-01
Training F1 Macro: 0.7866 | Validation F1 Macro : 0.7005
Training F1 Micro: 0.7942 | Validation F1 Micro : 0.7000
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4463e-02 | Validation Loss : 2.5855e-02
Training CC : 0.9159 | Validation CC : 0.9094
** Classification Losses **
Training Loss : 3.5836e-01 | Validation Loss : 6.5113e-01
Training F1 Macro: 0.8281 | Validation F1 Macro : 0.6919
Training F1 Micro: 0.8166 | Validation F1 Micro : 0.6880
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4252e-02 | Validation Loss : 2.5652e-02
Training CC : 0.9167 | Validation CC : 0.9103
** Classification Losses **
Training Loss : 3.8699e-01 | Validation Loss : 6.5505e-01
Training F1 Macro: 0.8033 | Validation F1 Macro : 0.7220
Training F1 Micro: 0.8003 | Validation F1 Micro : 0.7180
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3948e-02 | Validation Loss : 2.5464e-02
Training CC : 0.9176 | Validation CC : 0.9109
** Classification Losses **
Training Loss : 4.2383e-01 | Validation Loss : 6.3304e-01
Training F1 Macro: 0.7831 | Validation F1 Macro : 0.7032
Training F1 Micro: 0.7809 | Validation F1 Micro : 0.6980
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3777e-02 | Validation Loss : 2.5514e-02
Training CC : 0.9181 | Validation CC : 0.9107
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5963e-01 | Validation Loss : 6.3910e-01
Training F1 Macro: 0.8332 | Validation F1 Macro : 0.7240
Training F1 Micro: 0.8292 | Validation F1 Micro : 0.7220
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3883e-02 | Validation Loss : 2.5596e-02
Training CC : 0.9178 | Validation CC : 0.9104
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2357e-01 | Validation Loss : 6.3196e-01
Training F1 Macro: 0.8255 | Validation F1 Macro : 0.7027
Training F1 Micro: 0.8328 | Validation F1 Micro : 0.7000
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3912e-02 | Validation Loss : 2.5711e-02
Training CC : 0.9176 | Validation CC : 0.9099
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6318e-01 | Validation Loss : 5.9072e-01
Training F1 Macro: 0.8463 | Validation F1 Macro : 0.7382
Training F1 Micro: 0.8444 | Validation F1 Micro : 0.7400
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4448e-02 | Validation Loss : 2.5826e-02
Training CC : 0.9164 | Validation CC : 0.9095
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3081e-01 | Validation Loss : 6.4249e-01
Training F1 Macro: 0.7806 | Validation F1 Macro : 0.7023
Training F1 Micro: 0.7865 | Validation F1 Micro : 0.7020
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4183e-02 | Validation Loss : 2.5894e-02
Training CC : 0.9168 | Validation CC : 0.9093
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2051e-01 | Validation Loss : 6.2725e-01
Training F1 Macro: 0.7680 | Validation F1 Macro : 0.7261
Training F1 Micro: 0.7679 | Validation F1 Micro : 0.7240
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3999e-02 | Validation Loss : 2.5732e-02
Training CC : 0.9172 | Validation CC : 0.9102
** Classification Losses **
Training Loss : 3.8906e-01 | Validation Loss : 5.7184e-01
Training F1 Macro: 0.8116 | Validation F1 Macro : 0.7554
Training F1 Micro: 0.8181 | Validation F1 Micro : 0.7560
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3810e-02 | Validation Loss : 2.5449e-02
Training CC : 0.9181 | Validation CC : 0.9111
** Classification Losses **
Training Loss : 3.5699e-01 | Validation Loss : 5.8052e-01
Training F1 Macro: 0.8363 | Validation F1 Macro : 0.7347
Training F1 Micro: 0.8276 | Validation F1 Micro : 0.7360
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3661e-02 | Validation Loss : 2.5303e-02
Training CC : 0.9187 | Validation CC : 0.9116
** Classification Losses **
Training Loss : 3.3813e-01 | Validation Loss : 6.2909e-01
Training F1 Macro: 0.8302 | Validation F1 Macro : 0.7090
Training F1 Micro: 0.8358 | Validation F1 Micro : 0.7120
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3629e-02 | Validation Loss : 2.5147e-02
Training CC : 0.9190 | Validation CC : 0.9121
** Classification Losses **
Training Loss : 3.9604e-01 | Validation Loss : 6.1856e-01
Training F1 Macro: 0.8242 | Validation F1 Macro : 0.7416
Training F1 Micro: 0.8260 | Validation F1 Micro : 0.7400
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3805e-02 | Validation Loss : 2.5144e-02
Training CC : 0.9190 | Validation CC : 0.9122
** Classification Losses **
Training Loss : 4.0560e-01 | Validation Loss : 6.0625e-01
Training F1 Macro: 0.8317 | Validation F1 Macro : 0.7216
Training F1 Micro: 0.8296 | Validation F1 Micro : 0.7180
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3386e-02 | Validation Loss : 2.5161e-02
Training CC : 0.9198 | Validation CC : 0.9121
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4908e-01 | Validation Loss : 6.2831e-01
Training F1 Macro: 0.8028 | Validation F1 Macro : 0.6992
Training F1 Micro: 0.8104 | Validation F1 Micro : 0.6920
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3831e-02 | Validation Loss : 2.5254e-02
Training CC : 0.9188 | Validation CC : 0.9117
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4084e-01 | Validation Loss : 6.0076e-01
Training F1 Macro: 0.8521 | Validation F1 Macro : 0.7097
Training F1 Micro: 0.8547 | Validation F1 Micro : 0.7060
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3483e-02 | Validation Loss : 2.5343e-02
Training CC : 0.9192 | Validation CC : 0.9113
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6182e-01 | Validation Loss : 5.5996e-01
Training F1 Macro: 0.8119 | Validation F1 Macro : 0.7594
Training F1 Micro: 0.8033 | Validation F1 Micro : 0.7540
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3507e-02 | Validation Loss : 2.5435e-02
Training CC : 0.9191 | Validation CC : 0.9110
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0367e-01 | Validation Loss : 5.8848e-01
Training F1 Macro: 0.7892 | Validation F1 Macro : 0.7306
Training F1 Micro: 0.7793 | Validation F1 Micro : 0.7280
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3707e-02 | Validation Loss : 2.5606e-02
Training CC : 0.9184 | Validation CC : 0.9103
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4115e-01 | Validation Loss : 6.6638e-01
Training F1 Macro: 0.8263 | Validation F1 Macro : 0.6798
Training F1 Micro: 0.8214 | Validation F1 Micro : 0.6800
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4172e-02 | Validation Loss : 2.5574e-02
Training CC : 0.9178 | Validation CC : 0.9108
** Classification Losses **
Training Loss : 3.4503e-01 | Validation Loss : 6.7844e-01
Training F1 Macro: 0.8288 | Validation F1 Macro : 0.6904
Training F1 Micro: 0.8246 | Validation F1 Micro : 0.6860
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3621e-02 | Validation Loss : 2.5282e-02
Training CC : 0.9191 | Validation CC : 0.9115
** Classification Losses **
Training Loss : 3.8141e-01 | Validation Loss : 6.8443e-01
Training F1 Macro: 0.8245 | Validation F1 Macro : 0.6990
Training F1 Micro: 0.8246 | Validation F1 Micro : 0.6960
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3532e-02 | Validation Loss : 2.5294e-02
Training CC : 0.9195 | Validation CC : 0.9114
** Classification Losses **
Training Loss : 3.2230e-01 | Validation Loss : 6.6958e-01
Training F1 Macro: 0.8342 | Validation F1 Macro : 0.6729
Training F1 Micro: 0.8403 | Validation F1 Micro : 0.6700
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3327e-02 | Validation Loss : 2.4932e-02
Training CC : 0.9201 | Validation CC : 0.9128
** Classification Losses **
Training Loss : 3.4403e-01 | Validation Loss : 6.7061e-01
Training F1 Macro: 0.8664 | Validation F1 Macro : 0.6951
Training F1 Micro: 0.8643 | Validation F1 Micro : 0.6940
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3368e-02 | Validation Loss : 2.4757e-02
Training CC : 0.9204 | Validation CC : 0.9135
** Classification Losses **
Training Loss : 4.0655e-01 | Validation Loss : 6.7934e-01
Training F1 Macro: 0.7825 | Validation F1 Macro : 0.7013
Training F1 Micro: 0.7857 | Validation F1 Micro : 0.6980
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2957e-02 | Validation Loss : 2.4792e-02
Training CC : 0.9214 | Validation CC : 0.9133
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1334e-01 | Validation Loss : 6.6190e-01
Training F1 Macro: 0.8001 | Validation F1 Macro : 0.6937
Training F1 Micro: 0.8065 | Validation F1 Micro : 0.6920
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2906e-02 | Validation Loss : 2.4879e-02
Training CC : 0.9214 | Validation CC : 0.9130
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0890e-01 | Validation Loss : 6.7813e-01
Training F1 Macro: 0.7984 | Validation F1 Macro : 0.6826
Training F1 Micro: 0.7924 | Validation F1 Micro : 0.6840
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2951e-02 | Validation Loss : 2.4984e-02
Training CC : 0.9212 | Validation CC : 0.9126
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1233e-01 | Validation Loss : 6.1681e-01
Training F1 Macro: 0.7839 | Validation F1 Macro : 0.7408
Training F1 Micro: 0.7821 | Validation F1 Micro : 0.7380
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3024e-02 | Validation Loss : 2.5076e-02
Training CC : 0.9210 | Validation CC : 0.9123
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1372e-01 | Validation Loss : 5.8999e-01
Training F1 Macro: 0.7401 | Validation F1 Macro : 0.7539
Training F1 Micro: 0.7633 | Validation F1 Micro : 0.7540
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3047e-02 | Validation Loss : 2.5138e-02
Training CC : 0.9209 | Validation CC : 0.9121
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9155e-01 | Validation Loss : 5.7479e-01
Training F1 Macro: 0.7836 | Validation F1 Macro : 0.7484
Training F1 Micro: 0.7922 | Validation F1 Micro : 0.7440
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3143e-02 | Validation Loss : 2.4991e-02
Training CC : 0.9206 | Validation CC : 0.9130
** Classification Losses **
Training Loss : 4.3899e-01 | Validation Loss : 6.0927e-01
Training F1 Macro: 0.8040 | Validation F1 Macro : 0.7393
Training F1 Micro: 0.7980 | Validation F1 Micro : 0.7320
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2835e-02 | Validation Loss : 2.4785e-02
Training CC : 0.9216 | Validation CC : 0.9134
** Classification Losses **
Training Loss : 4.2080e-01 | Validation Loss : 6.0625e-01
Training F1 Macro: 0.7781 | Validation F1 Macro : 0.7231
Training F1 Micro: 0.7855 | Validation F1 Micro : 0.7240
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2813e-02 | Validation Loss : 2.4708e-02
Training CC : 0.9220 | Validation CC : 0.9138
** Classification Losses **
Training Loss : 3.1488e-01 | Validation Loss : 6.7825e-01
Training F1 Macro: 0.8489 | Validation F1 Macro : 0.7128
Training F1 Micro: 0.8367 | Validation F1 Micro : 0.7040
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3053e-02 | Validation Loss : 2.4618e-02
Training CC : 0.9218 | Validation CC : 0.9142
** Classification Losses **
Training Loss : 3.7094e-01 | Validation Loss : 6.8310e-01
Training F1 Macro: 0.8405 | Validation F1 Macro : 0.6952
Training F1 Micro: 0.8412 | Validation F1 Micro : 0.6940
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2541e-02 | Validation Loss : 2.4700e-02
Training CC : 0.9228 | Validation CC : 0.9138
** Classification Losses **
Training Loss : 3.8761e-01 | Validation Loss : 6.4848e-01
Training F1 Macro: 0.8053 | Validation F1 Macro : 0.6973
Training F1 Micro: 0.8060 | Validation F1 Micro : 0.6900
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4319e-01 | Validation Loss : 2.7237e-01
Training CC : 0.2067 | Validation CC : 0.3921
** Classification Losses **
Training Loss : 1.3693e+00 | Validation Loss : 1.3519e+00
Training F1 Macro: 0.2617 | Validation F1 Macro : 0.2446
Training F1 Micro: 0.3244 | Validation F1 Micro : 0.3400
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4416e-01 | Validation Loss : 2.0416e-01
Training CC : 0.4643 | Validation CC : 0.5566
** Classification Losses **
Training Loss : 1.3419e+00 | Validation Loss : 1.3186e+00
Training F1 Macro: 0.2415 | Validation F1 Macro : 0.2679
Training F1 Micro: 0.3309 | Validation F1 Micro : 0.3700
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.8554e-01 | Validation Loss : 1.5818e-01
Training CC : 0.5988 | Validation CC : 0.6271
** Classification Losses **
Training Loss : 1.2650e+00 | Validation Loss : 1.3112e+00
Training F1 Macro: 0.2714 | Validation F1 Macro : 0.2812
Training F1 Micro: 0.3893 | Validation F1 Micro : 0.3860
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.4537e-01 | Validation Loss : 1.2625e-01
Training CC : 0.6563 | Validation CC : 0.6828
** Classification Losses **
Training Loss : 1.2729e+00 | Validation Loss : 1.3114e+00
Training F1 Macro: 0.2713 | Validation F1 Macro : 0.2576
Training F1 Micro: 0.3737 | Validation F1 Micro : 0.3660
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.1662e-01 | Validation Loss : 1.0183e-01
Training CC : 0.7088 | Validation CC : 0.7323
** Classification Losses **
Training Loss : 1.2934e+00 | Validation Loss : 1.3240e+00
Training F1 Macro: 0.2778 | Validation F1 Macro : 0.2733
Training F1 Micro: 0.3707 | Validation F1 Micro : 0.3760
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0447e-01 | Validation Loss : 1.0769e-01
Training CC : 0.7259 | Validation CC : 0.7014
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.1623e+00 | Validation Loss : 1.0760e+00
Training F1 Macro: 0.4719 | Validation F1 Macro : 0.5255
Training F1 Micro: 0.5318 | Validation F1 Micro : 0.5520
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0841e-01 | Validation Loss : 1.0686e-01
Training CC : 0.7080 | Validation CC : 0.7095
** Classification Losses ** <---- Now Optimizing
Training Loss : 8.8801e-01 | Validation Loss : 9.4371e-01
Training F1 Macro: 0.6713 | Validation F1 Macro : 0.6376
Training F1 Micro: 0.6976 | Validation F1 Micro : 0.6440
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0745e-01 | Validation Loss : 1.0720e-01
Training CC : 0.7152 | Validation CC : 0.7087
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.3358e-01 | Validation Loss : 8.6719e-01
Training F1 Macro: 0.7889 | Validation F1 Macro : 0.6717
Training F1 Micro: 0.7942 | Validation F1 Micro : 0.6760
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0908e-01 | Validation Loss : 1.0965e-01
Training CC : 0.7062 | Validation CC : 0.6930
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.2341e-01 | Validation Loss : 8.1722e-01
Training F1 Macro: 0.7581 | Validation F1 Macro : 0.7013
Training F1 Micro: 0.7898 | Validation F1 Micro : 0.6980
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.1073e-01 | Validation Loss : 1.0976e-01
Training CC : 0.6955 | Validation CC : 0.6907
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.8341e-01 | Validation Loss : 7.2555e-01
Training F1 Macro: 0.7972 | Validation F1 Macro : 0.7300
Training F1 Micro: 0.7905 | Validation F1 Micro : 0.7320
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.9848e-02 | Validation Loss : 8.5997e-02
Training CC : 0.7345 | Validation CC : 0.7692
** Classification Losses **
Training Loss : 6.7762e-01 | Validation Loss : 7.4240e-01
Training F1 Macro: 0.7325 | Validation F1 Macro : 0.7253
Training F1 Micro: 0.7137 | Validation F1 Micro : 0.7240
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 7.8518e-02 | Validation Loss : 6.9528e-02
Training CC : 0.7918 | Validation CC : 0.8015
** Classification Losses **
Training Loss : 6.2644e-01 | Validation Loss : 7.6529e-01
Training F1 Macro: 0.7705 | Validation F1 Macro : 0.7077
Training F1 Micro: 0.7602 | Validation F1 Micro : 0.7120
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.4626e-02 | Validation Loss : 5.7441e-02
Training CC : 0.8153 | Validation CC : 0.8266
** Classification Losses **
Training Loss : 5.7774e-01 | Validation Loss : 8.0036e-01
Training F1 Macro: 0.7930 | Validation F1 Macro : 0.6928
Training F1 Micro: 0.7981 | Validation F1 Micro : 0.6980
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.4037e-02 | Validation Loss : 4.9130e-02
Training CC : 0.8365 | Validation CC : 0.8387
** Classification Losses **
Training Loss : 6.1979e-01 | Validation Loss : 7.7575e-01
Training F1 Macro: 0.7651 | Validation F1 Macro : 0.7106
Training F1 Micro: 0.7648 | Validation F1 Micro : 0.7120
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.6103e-02 | Validation Loss : 4.3542e-02
Training CC : 0.8477 | Validation CC : 0.8482
** Classification Losses **
Training Loss : 6.1804e-01 | Validation Loss : 8.1084e-01
Training F1 Macro: 0.7774 | Validation F1 Macro : 0.6801
Training F1 Micro: 0.7797 | Validation F1 Micro : 0.6820
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.3638e-02 | Validation Loss : 4.4851e-02
Training CC : 0.8510 | Validation CC : 0.8422
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.9661e-01 | Validation Loss : 7.3973e-01
Training F1 Macro: 0.7921 | Validation F1 Macro : 0.7034
Training F1 Micro: 0.7744 | Validation F1 Micro : 0.7040
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.6112e-02 | Validation Loss : 4.7793e-02
Training CC : 0.8402 | Validation CC : 0.8299
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.3102e-01 | Validation Loss : 6.4787e-01
Training F1 Macro: 0.7897 | Validation F1 Macro : 0.7679
Training F1 Micro: 0.8029 | Validation F1 Micro : 0.7700
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.8361e-02 | Validation Loss : 4.8435e-02
Training CC : 0.8313 | Validation CC : 0.8272
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.9740e-01 | Validation Loss : 5.9958e-01
Training F1 Macro: 0.7805 | Validation F1 Macro : 0.7617
Training F1 Micro: 0.7759 | Validation F1 Micro : 0.7620
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.8754e-02 | Validation Loss : 4.8849e-02
Training CC : 0.8296 | Validation CC : 0.8254
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3181e-01 | Validation Loss : 6.2168e-01
Training F1 Macro: 0.8230 | Validation F1 Macro : 0.7388
Training F1 Micro: 0.8208 | Validation F1 Micro : 0.7400
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.8708e-02 | Validation Loss : 4.9046e-02
Training CC : 0.8293 | Validation CC : 0.8246
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9616e-01 | Validation Loss : 6.0446e-01
Training F1 Macro: 0.7990 | Validation F1 Macro : 0.7330
Training F1 Micro: 0.8060 | Validation F1 Micro : 0.7360
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.5001e-02 | Validation Loss : 4.2244e-02
Training CC : 0.8426 | Validation CC : 0.8501
** Classification Losses **
Training Loss : 4.5728e-01 | Validation Loss : 5.8587e-01
Training F1 Macro: 0.7718 | Validation F1 Macro : 0.7523
Training F1 Micro: 0.7813 | Validation F1 Micro : 0.7520
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.0152e-02 | Validation Loss : 3.9056e-02
Training CC : 0.8595 | Validation CC : 0.8602
** Classification Losses **
Training Loss : 3.9341e-01 | Validation Loss : 5.9964e-01
Training F1 Macro: 0.8125 | Validation F1 Macro : 0.7616
Training F1 Micro: 0.8148 | Validation F1 Micro : 0.7620
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.8256e-02 | Validation Loss : 3.7600e-02
Training CC : 0.8661 | Validation CC : 0.8653
** Classification Losses **
Training Loss : 4.4561e-01 | Validation Loss : 5.7021e-01
Training F1 Macro: 0.7849 | Validation F1 Macro : 0.7608
Training F1 Micro: 0.7857 | Validation F1 Micro : 0.7640
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6869e-02 | Validation Loss : 3.6300e-02
Training CC : 0.8711 | Validation CC : 0.8704
** Classification Losses **
Training Loss : 4.1690e-01 | Validation Loss : 6.1780e-01
Training F1 Macro: 0.8445 | Validation F1 Macro : 0.7392
Training F1 Micro: 0.8479 | Validation F1 Micro : 0.7400
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4999e-02 | Validation Loss : 3.5349e-02
Training CC : 0.8769 | Validation CC : 0.8746
** Classification Losses **
Training Loss : 4.6575e-01 | Validation Loss : 6.3090e-01
Training F1 Macro: 0.7956 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.8033 | Validation F1 Micro : 0.7260
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4498e-02 | Validation Loss : 3.5722e-02
Training CC : 0.8792 | Validation CC : 0.8731
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0582e-01 | Validation Loss : 5.7424e-01
Training F1 Macro: 0.8100 | Validation F1 Macro : 0.7544
Training F1 Micro: 0.8236 | Validation F1 Micro : 0.7540
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5376e-02 | Validation Loss : 3.7404e-02
Training CC : 0.8758 | Validation CC : 0.8666
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1675e-01 | Validation Loss : 6.1231e-01
Training F1 Macro: 0.8081 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.8030 | Validation F1 Micro : 0.7220
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.7303e-02 | Validation Loss : 3.9214e-02
Training CC : 0.8686 | Validation CC : 0.8598
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3818e-01 | Validation Loss : 5.8021e-01
Training F1 Macro: 0.7671 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.7641 | Validation F1 Micro : 0.7260
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9055e-02 | Validation Loss : 4.0173e-02
Training CC : 0.8625 | Validation CC : 0.8563
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1923e-01 | Validation Loss : 5.7177e-01
Training F1 Macro: 0.8090 | Validation F1 Macro : 0.7289
Training F1 Micro: 0.8112 | Validation F1 Micro : 0.7260
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9735e-02 | Validation Loss : 4.0570e-02
Training CC : 0.8603 | Validation CC : 0.8550
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4773e-01 | Validation Loss : 5.5785e-01
Training F1 Macro: 0.7528 | Validation F1 Macro : 0.7514
Training F1 Micro: 0.7577 | Validation F1 Micro : 0.7520
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.7191e-02 | Validation Loss : 3.6425e-02
Training CC : 0.8703 | Validation CC : 0.8718
** Classification Losses **
Training Loss : 3.4927e-01 | Validation Loss : 5.3279e-01
Training F1 Macro: 0.8301 | Validation F1 Macro : 0.7671
Training F1 Micro: 0.8272 | Validation F1 Micro : 0.7660
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4727e-02 | Validation Loss : 3.4655e-02
Training CC : 0.8787 | Validation CC : 0.8767
** Classification Losses **
Training Loss : 3.2308e-01 | Validation Loss : 5.5849e-01
Training F1 Macro: 0.8355 | Validation F1 Macro : 0.7406
Training F1 Micro: 0.8463 | Validation F1 Micro : 0.7400
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3306e-02 | Validation Loss : 3.3829e-02
Training CC : 0.8830 | Validation CC : 0.8797
** Classification Losses **
Training Loss : 4.3147e-01 | Validation Loss : 5.6773e-01
Training F1 Macro: 0.7407 | Validation F1 Macro : 0.7748
Training F1 Micro: 0.7526 | Validation F1 Micro : 0.7760
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2902e-02 | Validation Loss : 3.2974e-02
Training CC : 0.8857 | Validation CC : 0.8830
** Classification Losses **
Training Loss : 4.0616e-01 | Validation Loss : 6.2277e-01
Training F1 Macro: 0.8038 | Validation F1 Macro : 0.7282
Training F1 Micro: 0.8121 | Validation F1 Micro : 0.7260
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2151e-02 | Validation Loss : 3.2377e-02
Training CC : 0.8887 | Validation CC : 0.8853
** Classification Losses **
Training Loss : 3.6599e-01 | Validation Loss : 5.9538e-01
Training F1 Macro: 0.8099 | Validation F1 Macro : 0.7403
Training F1 Micro: 0.8174 | Validation F1 Micro : 0.7420
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1322e-02 | Validation Loss : 3.2505e-02
Training CC : 0.8906 | Validation CC : 0.8848
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2123e-01 | Validation Loss : 5.8534e-01
Training F1 Macro: 0.8674 | Validation F1 Macro : 0.7330
Training F1 Micro: 0.8692 | Validation F1 Micro : 0.7340
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2075e-02 | Validation Loss : 3.2815e-02
Training CC : 0.8891 | Validation CC : 0.8835
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5315e-01 | Validation Loss : 5.3318e-01
Training F1 Macro: 0.8259 | Validation F1 Macro : 0.7560
Training F1 Micro: 0.8310 | Validation F1 Micro : 0.7580
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1994e-02 | Validation Loss : 3.3414e-02
Training CC : 0.8884 | Validation CC : 0.8813
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8063e-01 | Validation Loss : 5.2900e-01
Training F1 Macro: 0.7500 | Validation F1 Macro : 0.7658
Training F1 Micro: 0.7408 | Validation F1 Micro : 0.7640
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2494e-02 | Validation Loss : 3.4041e-02
Training CC : 0.8863 | Validation CC : 0.8789
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4206e-01 | Validation Loss : 5.0688e-01
Training F1 Macro: 0.8287 | Validation F1 Macro : 0.7717
Training F1 Micro: 0.8345 | Validation F1 Micro : 0.7720
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3152e-02 | Validation Loss : 3.4650e-02
Training CC : 0.8841 | Validation CC : 0.8767
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3490e-01 | Validation Loss : 5.4066e-01
Training F1 Macro: 0.7546 | Validation F1 Macro : 0.7663
Training F1 Micro: 0.7536 | Validation F1 Micro : 0.7640
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2539e-02 | Validation Loss : 3.2781e-02
Training CC : 0.8861 | Validation CC : 0.8843
** Classification Losses **
Training Loss : 3.6295e-01 | Validation Loss : 5.3258e-01
Training F1 Macro: 0.8121 | Validation F1 Macro : 0.7603
Training F1 Micro: 0.8123 | Validation F1 Micro : 0.7580
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1381e-02 | Validation Loss : 3.2108e-02
Training CC : 0.8910 | Validation CC : 0.8865
** Classification Losses **
Training Loss : 3.9501e-01 | Validation Loss : 5.3942e-01
Training F1 Macro: 0.8360 | Validation F1 Macro : 0.7778
Training F1 Micro: 0.8327 | Validation F1 Micro : 0.7780
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0693e-02 | Validation Loss : 3.1463e-02
Training CC : 0.8933 | Validation CC : 0.8887
** Classification Losses **
Training Loss : 4.2422e-01 | Validation Loss : 5.8430e-01
Training F1 Macro: 0.7876 | Validation F1 Macro : 0.7294
Training F1 Micro: 0.7809 | Validation F1 Micro : 0.7280
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9897e-02 | Validation Loss : 3.0923e-02
Training CC : 0.8958 | Validation CC : 0.8907
** Classification Losses **
Training Loss : 4.2439e-01 | Validation Loss : 5.4454e-01
Training F1 Macro: 0.7935 | Validation F1 Macro : 0.7676
Training F1 Micro: 0.8096 | Validation F1 Micro : 0.7680
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9733e-02 | Validation Loss : 3.0519e-02
Training CC : 0.8971 | Validation CC : 0.8923
** Classification Losses **
Training Loss : 3.6857e-01 | Validation Loss : 5.5548e-01
Training F1 Macro: 0.8033 | Validation F1 Macro : 0.7657
Training F1 Micro: 0.8085 | Validation F1 Micro : 0.7640
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9480e-02 | Validation Loss : 3.0610e-02
Training CC : 0.8980 | Validation CC : 0.8919
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0776e-01 | Validation Loss : 5.3247e-01
Training F1 Macro: 0.8045 | Validation F1 Macro : 0.7700
Training F1 Micro: 0.8083 | Validation F1 Micro : 0.7620
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9476e-02 | Validation Loss : 3.0834e-02
Training CC : 0.8978 | Validation CC : 0.8911
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4073e-01 | Validation Loss : 5.4553e-01
Training F1 Macro: 0.8701 | Validation F1 Macro : 0.7421
Training F1 Micro: 0.8648 | Validation F1 Micro : 0.7420
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9886e-02 | Validation Loss : 3.1222e-02
Training CC : 0.8965 | Validation CC : 0.8896
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2539e-01 | Validation Loss : 5.7698e-01
Training F1 Macro: 0.8368 | Validation F1 Macro : 0.7252
Training F1 Micro: 0.8269 | Validation F1 Micro : 0.7220
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0336e-02 | Validation Loss : 3.1567e-02
Training CC : 0.8950 | Validation CC : 0.8883
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0816e-01 | Validation Loss : 5.7733e-01
Training F1 Macro: 0.7957 | Validation F1 Macro : 0.7274
Training F1 Micro: 0.7946 | Validation F1 Micro : 0.7260
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0672e-02 | Validation Loss : 3.1949e-02
Training CC : 0.8939 | Validation CC : 0.8869
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8959e-01 | Validation Loss : 5.9796e-01
Training F1 Macro: 0.7661 | Validation F1 Macro : 0.7156
Training F1 Micro: 0.7810 | Validation F1 Micro : 0.7120
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9864e-02 | Validation Loss : 3.0897e-02
Training CC : 0.8960 | Validation CC : 0.8915
** Classification Losses **
Training Loss : 3.6639e-01 | Validation Loss : 5.3497e-01
Training F1 Macro: 0.7724 | Validation F1 Macro : 0.7551
Training F1 Micro: 0.7844 | Validation F1 Micro : 0.7540
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9661e-02 | Validation Loss : 3.0313e-02
Training CC : 0.8983 | Validation CC : 0.8930
** Classification Losses **
Training Loss : 4.4387e-01 | Validation Loss : 5.6478e-01
Training F1 Macro: 0.7502 | Validation F1 Macro : 0.7161
Training F1 Micro: 0.7686 | Validation F1 Micro : 0.7140
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8763e-02 | Validation Loss : 2.9850e-02
Training CC : 0.9004 | Validation CC : 0.8950
** Classification Losses **
Training Loss : 3.5521e-01 | Validation Loss : 5.2981e-01
Training F1 Macro: 0.8184 | Validation F1 Macro : 0.7486
Training F1 Micro: 0.8195 | Validation F1 Micro : 0.7480
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8257e-02 | Validation Loss : 2.9525e-02
Training CC : 0.9021 | Validation CC : 0.8960
** Classification Losses **
Training Loss : 3.1635e-01 | Validation Loss : 5.5993e-01
Training F1 Macro: 0.8139 | Validation F1 Macro : 0.7406
Training F1 Micro: 0.8167 | Validation F1 Micro : 0.7400
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7977e-02 | Validation Loss : 2.9308e-02
Training CC : 0.9033 | Validation CC : 0.8971
** Classification Losses **
Training Loss : 3.4621e-01 | Validation Loss : 5.5908e-01
Training F1 Macro: 0.8199 | Validation F1 Macro : 0.7361
Training F1 Micro: 0.8284 | Validation F1 Micro : 0.7340
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7886e-02 | Validation Loss : 2.9374e-02
Training CC : 0.9038 | Validation CC : 0.8968
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8185e-01 | Validation Loss : 5.6433e-01
Training F1 Macro: 0.7939 | Validation F1 Macro : 0.7432
Training F1 Micro: 0.7952 | Validation F1 Micro : 0.7420
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7868e-02 | Validation Loss : 2.9491e-02
Training CC : 0.9036 | Validation CC : 0.8963
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1505e-01 | Validation Loss : 5.8620e-01
Training F1 Macro: 0.7847 | Validation F1 Macro : 0.7104
Training F1 Micro: 0.7906 | Validation F1 Micro : 0.7060
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8187e-02 | Validation Loss : 2.9665e-02
Training CC : 0.9027 | Validation CC : 0.8956
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3706e-01 | Validation Loss : 5.6509e-01
Training F1 Macro: 0.8159 | Validation F1 Macro : 0.7379
Training F1 Micro: 0.8167 | Validation F1 Micro : 0.7360
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8176e-02 | Validation Loss : 2.9870e-02
Training CC : 0.9024 | Validation CC : 0.8948
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3758e-01 | Validation Loss : 5.8234e-01
Training F1 Macro: 0.8206 | Validation F1 Macro : 0.7355
Training F1 Micro: 0.8137 | Validation F1 Micro : 0.7300
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8685e-02 | Validation Loss : 3.0098e-02
Training CC : 0.9010 | Validation CC : 0.8939
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0966e-01 | Validation Loss : 5.7138e-01
Training F1 Macro: 0.7837 | Validation F1 Macro : 0.7617
Training F1 Micro: 0.7767 | Validation F1 Micro : 0.7560
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8133e-02 | Validation Loss : 2.9473e-02
Training CC : 0.9026 | Validation CC : 0.8964
** Classification Losses **
Training Loss : 3.6566e-01 | Validation Loss : 5.7259e-01
Training F1 Macro: 0.8262 | Validation F1 Macro : 0.7437
Training F1 Micro: 0.8286 | Validation F1 Micro : 0.7340
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7955e-02 | Validation Loss : 2.8989e-02
Training CC : 0.9040 | Validation CC : 0.8980
** Classification Losses **
Training Loss : 3.4647e-01 | Validation Loss : 5.3361e-01
Training F1 Macro: 0.8319 | Validation F1 Macro : 0.7737
Training F1 Micro: 0.8295 | Validation F1 Micro : 0.7700
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7835e-02 | Validation Loss : 2.8595e-02
Training CC : 0.9050 | Validation CC : 0.8994
** Classification Losses **
Training Loss : 4.0851e-01 | Validation Loss : 5.6007e-01
Training F1 Macro: 0.7563 | Validation F1 Macro : 0.7410
Training F1 Micro: 0.7780 | Validation F1 Micro : 0.7360
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7275e-02 | Validation Loss : 2.8319e-02
Training CC : 0.9066 | Validation CC : 0.9005
** Classification Losses **
Training Loss : 4.0125e-01 | Validation Loss : 6.0968e-01
Training F1 Macro: 0.8153 | Validation F1 Macro : 0.7144
Training F1 Micro: 0.8192 | Validation F1 Micro : 0.7040
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6399e-02 | Validation Loss : 2.8126e-02
Training CC : 0.9086 | Validation CC : 0.9013
** Classification Losses **
Training Loss : 3.3738e-01 | Validation Loss : 5.4721e-01
Training F1 Macro: 0.8238 | Validation F1 Macro : 0.7395
Training F1 Micro: 0.8162 | Validation F1 Micro : 0.7320
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6321e-02 | Validation Loss : 2.8192e-02
Training CC : 0.9092 | Validation CC : 0.9010
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9111e-01 | Validation Loss : 5.7770e-01
Training F1 Macro: 0.8233 | Validation F1 Macro : 0.7215
Training F1 Micro: 0.8141 | Validation F1 Micro : 0.7180
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6711e-02 | Validation Loss : 2.8385e-02
Training CC : 0.9083 | Validation CC : 0.9003
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9576e-01 | Validation Loss : 5.6606e-01
Training F1 Macro: 0.8119 | Validation F1 Macro : 0.7444
Training F1 Micro: 0.8107 | Validation F1 Micro : 0.7420
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6616e-02 | Validation Loss : 2.8649e-02
Training CC : 0.9080 | Validation CC : 0.8993
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.8082e-01 | Validation Loss : 5.5558e-01
Training F1 Macro: 0.8744 | Validation F1 Macro : 0.7378
Training F1 Micro: 0.8670 | Validation F1 Micro : 0.7360
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6830e-02 | Validation Loss : 2.8926e-02
Training CC : 0.9071 | Validation CC : 0.8982
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5800e-01 | Validation Loss : 5.7917e-01
Training F1 Macro: 0.8171 | Validation F1 Macro : 0.7231
Training F1 Micro: 0.8257 | Validation F1 Micro : 0.7220
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7226e-02 | Validation Loss : 2.9069e-02
Training CC : 0.9061 | Validation CC : 0.8977
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4479e-01 | Validation Loss : 5.8163e-01
Training F1 Macro: 0.7994 | Validation F1 Macro : 0.7230
Training F1 Micro: 0.7998 | Validation F1 Micro : 0.7220
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7198e-02 | Validation Loss : 2.8561e-02
Training CC : 0.9067 | Validation CC : 0.9004
** Classification Losses **
Training Loss : 3.7811e-01 | Validation Loss : 5.8767e-01
Training F1 Macro: 0.7964 | Validation F1 Macro : 0.7193
Training F1 Micro: 0.7980 | Validation F1 Micro : 0.7140
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6516e-02 | Validation Loss : 2.8020e-02
Training CC : 0.9088 | Validation CC : 0.9016
** Classification Losses **
Training Loss : 4.1914e-01 | Validation Loss : 5.8116e-01
Training F1 Macro: 0.7700 | Validation F1 Macro : 0.7127
Training F1 Micro: 0.7737 | Validation F1 Micro : 0.7080
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6021e-02 | Validation Loss : 2.7814e-02
Training CC : 0.9102 | Validation CC : 0.9025
** Classification Losses **
Training Loss : 4.1175e-01 | Validation Loss : 5.6561e-01
Training F1 Macro: 0.7598 | Validation F1 Macro : 0.7360
Training F1 Micro: 0.7609 | Validation F1 Micro : 0.7340
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6548e-02 | Validation Loss : 2.7742e-02
Training CC : 0.9099 | Validation CC : 0.9030
** Classification Losses **
Training Loss : 3.9061e-01 | Validation Loss : 5.6782e-01
Training F1 Macro: 0.7977 | Validation F1 Macro : 0.7262
Training F1 Micro: 0.7885 | Validation F1 Micro : 0.7260
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5753e-02 | Validation Loss : 2.7541e-02
Training CC : 0.9114 | Validation CC : 0.9035
** Classification Losses **
Training Loss : 4.8085e-01 | Validation Loss : 5.9064e-01
Training F1 Macro: 0.7411 | Validation F1 Macro : 0.7206
Training F1 Micro: 0.7444 | Validation F1 Micro : 0.7200
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5622e-02 | Validation Loss : 2.7652e-02
Training CC : 0.9118 | Validation CC : 0.9030
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3577e-01 | Validation Loss : 5.8968e-01
Training F1 Macro: 0.8387 | Validation F1 Macro : 0.7438
Training F1 Micro: 0.8344 | Validation F1 Micro : 0.7440
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5742e-02 | Validation Loss : 2.7759e-02
Training CC : 0.9115 | Validation CC : 0.9026
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1239e-01 | Validation Loss : 5.5796e-01
Training F1 Macro: 0.8029 | Validation F1 Macro : 0.7428
Training F1 Micro: 0.8018 | Validation F1 Micro : 0.7420
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5942e-02 | Validation Loss : 2.7967e-02
Training CC : 0.9109 | Validation CC : 0.9018
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4829e-01 | Validation Loss : 5.4162e-01
Training F1 Macro: 0.7822 | Validation F1 Macro : 0.7636
Training F1 Micro: 0.7751 | Validation F1 Micro : 0.7600
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5947e-02 | Validation Loss : 2.8261e-02
Training CC : 0.9104 | Validation CC : 0.9008
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1830e-01 | Validation Loss : 5.9310e-01
Training F1 Macro: 0.7559 | Validation F1 Macro : 0.7148
Training F1 Micro: 0.7586 | Validation F1 Micro : 0.7120
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6543e-02 | Validation Loss : 2.8862e-02
Training CC : 0.9086 | Validation CC : 0.8986
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5438e-01 | Validation Loss : 6.1306e-01
Training F1 Macro: 0.7639 | Validation F1 Macro : 0.7018
Training F1 Micro: 0.7511 | Validation F1 Micro : 0.6880
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6982e-02 | Validation Loss : 2.7870e-02
Training CC : 0.9085 | Validation CC : 0.9027
** Classification Losses **
Training Loss : 3.4269e-01 | Validation Loss : 5.8857e-01
Training F1 Macro: 0.8384 | Validation F1 Macro : 0.7358
Training F1 Micro: 0.8338 | Validation F1 Micro : 0.7280
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5377e-02 | Validation Loss : 2.7387e-02
Training CC : 0.9122 | Validation CC : 0.9039
** Classification Losses **
Training Loss : 4.3430e-01 | Validation Loss : 6.0428e-01
Training F1 Macro: 0.7817 | Validation F1 Macro : 0.7235
Training F1 Micro: 0.7860 | Validation F1 Micro : 0.7160
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5256e-02 | Validation Loss : 2.7169e-02
Training CC : 0.9131 | Validation CC : 0.9050
** Classification Losses **
Training Loss : 4.2280e-01 | Validation Loss : 6.3178e-01
Training F1 Macro: 0.7818 | Validation F1 Macro : 0.6894
Training F1 Micro: 0.7859 | Validation F1 Micro : 0.6820
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5307e-02 | Validation Loss : 2.7022e-02
Training CC : 0.9135 | Validation CC : 0.9054
** Classification Losses **
Training Loss : 3.9982e-01 | Validation Loss : 6.1503e-01
Training F1 Macro: 0.7908 | Validation F1 Macro : 0.7105
Training F1 Micro: 0.7981 | Validation F1 Micro : 0.7020
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4781e-02 | Validation Loss : 2.6784e-02
Training CC : 0.9148 | Validation CC : 0.9062
** Classification Losses **
Training Loss : 3.5785e-01 | Validation Loss : 6.0577e-01
Training F1 Macro: 0.8074 | Validation F1 Macro : 0.7166
Training F1 Micro: 0.8216 | Validation F1 Micro : 0.7100
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4514e-02 | Validation Loss : 2.6902e-02
Training CC : 0.9155 | Validation CC : 0.9057
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8122e-01 | Validation Loss : 5.6625e-01
Training F1 Macro: 0.8043 | Validation F1 Macro : 0.7525
Training F1 Micro: 0.8127 | Validation F1 Micro : 0.7460
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4698e-02 | Validation Loss : 2.7128e-02
Training CC : 0.9148 | Validation CC : 0.9049
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1326e-01 | Validation Loss : 5.8101e-01
Training F1 Macro: 0.8411 | Validation F1 Macro : 0.7479
Training F1 Micro: 0.8385 | Validation F1 Micro : 0.7440
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5287e-02 | Validation Loss : 2.7312e-02
Training CC : 0.9137 | Validation CC : 0.9043
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2945e-01 | Validation Loss : 6.0456e-01
Training F1 Macro: 0.8147 | Validation F1 Macro : 0.7219
Training F1 Micro: 0.8203 | Validation F1 Micro : 0.7220
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5205e-02 | Validation Loss : 2.7750e-02
Training CC : 0.9132 | Validation CC : 0.9026
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0264e-01 | Validation Loss : 5.6453e-01
Training F1 Macro: 0.7756 | Validation F1 Macro : 0.7487
Training F1 Micro: 0.7757 | Validation F1 Micro : 0.7460
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5776e-02 | Validation Loss : 2.8218e-02
Training CC : 0.9113 | Validation CC : 0.9009
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4699e-01 | Validation Loss : 6.2755e-01
Training F1 Macro: 0.8289 | Validation F1 Macro : 0.7175
Training F1 Micro: 0.8260 | Validation F1 Micro : 0.7140
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5652e-02 | Validation Loss : 2.7535e-02
Training CC : 0.9120 | Validation CC : 0.9040
** Classification Losses **
Training Loss : 3.8139e-01 | Validation Loss : 5.7442e-01
Training F1 Macro: 0.7911 | Validation F1 Macro : 0.7403
Training F1 Micro: 0.7894 | Validation F1 Micro : 0.7400
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4928e-02 | Validation Loss : 2.6852e-02
Training CC : 0.9143 | Validation CC : 0.9060
** Classification Losses **
Training Loss : 3.7641e-01 | Validation Loss : 5.9360e-01
Training F1 Macro: 0.7929 | Validation F1 Macro : 0.7289
Training F1 Micro: 0.7933 | Validation F1 Micro : 0.7260
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4995e-02 | Validation Loss : 2.6704e-02
Training CC : 0.9147 | Validation CC : 0.9068
** Classification Losses **
Training Loss : 3.1997e-01 | Validation Loss : 5.7383e-01
Training F1 Macro: 0.8382 | Validation F1 Macro : 0.7228
Training F1 Micro: 0.8457 | Validation F1 Micro : 0.7220
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4406e-02 | Validation Loss : 2.6619e-02
Training CC : 0.9162 | Validation CC : 0.9068
** Classification Losses **
Training Loss : 4.0349e-01 | Validation Loss : 6.2407e-01
Training F1 Macro: 0.7741 | Validation F1 Macro : 0.7266
Training F1 Micro: 0.7744 | Validation F1 Micro : 0.7220
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4416e-02 | Validation Loss : 2.6438e-02
Training CC : 0.9165 | Validation CC : 0.9076
** Classification Losses **
Training Loss : 4.3873e-01 | Validation Loss : 5.6721e-01
Training F1 Macro: 0.7804 | Validation F1 Macro : 0.7388
Training F1 Micro: 0.7840 | Validation F1 Micro : 0.7360
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4064e-02 | Validation Loss : 2.6489e-02
Training CC : 0.9173 | Validation CC : 0.9074
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6412e-01 | Validation Loss : 6.0130e-01
Training F1 Macro: 0.8156 | Validation F1 Macro : 0.7164
Training F1 Micro: 0.8159 | Validation F1 Micro : 0.7140
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4205e-02 | Validation Loss : 2.6595e-02
Training CC : 0.9169 | Validation CC : 0.9070
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7951e-01 | Validation Loss : 5.8434e-01
Training F1 Macro: 0.8102 | Validation F1 Macro : 0.7205
Training F1 Micro: 0.8149 | Validation F1 Micro : 0.7220
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4155e-02 | Validation Loss : 2.6746e-02
Training CC : 0.9168 | Validation CC : 0.9064
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3628e-01 | Validation Loss : 6.1818e-01
Training F1 Macro: 0.7849 | Validation F1 Macro : 0.6990
Training F1 Micro: 0.8151 | Validation F1 Micro : 0.6960
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4510e-02 | Validation Loss : 2.6973e-02
Training CC : 0.9159 | Validation CC : 0.9056
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5576e-01 | Validation Loss : 5.5676e-01
Training F1 Macro: 0.8165 | Validation F1 Macro : 0.7420
Training F1 Micro: 0.8146 | Validation F1 Micro : 0.7400
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4632e-02 | Validation Loss : 2.7321e-02
Training CC : 0.9151 | Validation CC : 0.9043
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9127e-01 | Validation Loss : 5.8666e-01
Training F1 Macro: 0.7952 | Validation F1 Macro : 0.7169
Training F1 Micro: 0.7999 | Validation F1 Micro : 0.7140
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4467e-02 | Validation Loss : 2.6708e-02
Training CC : 0.9157 | Validation CC : 0.9067
** Classification Losses **
Training Loss : 3.5679e-01 | Validation Loss : 5.7074e-01
Training F1 Macro: 0.8365 | Validation F1 Macro : 0.7229
Training F1 Micro: 0.8293 | Validation F1 Micro : 0.7220
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4435e-02 | Validation Loss : 2.6504e-02
Training CC : 0.9165 | Validation CC : 0.9072
** Classification Losses **
Training Loss : 3.9320e-01 | Validation Loss : 5.4624e-01
Training F1 Macro: 0.7667 | Validation F1 Macro : 0.7362
Training F1 Micro: 0.7671 | Validation F1 Micro : 0.7340
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4058e-02 | Validation Loss : 2.6240e-02
Training CC : 0.9177 | Validation CC : 0.9084
** Classification Losses **
Training Loss : 3.7983e-01 | Validation Loss : 5.8770e-01
Training F1 Macro: 0.8017 | Validation F1 Macro : 0.7330
Training F1 Micro: 0.8127 | Validation F1 Micro : 0.7320
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4293e-02 | Validation Loss : 2.6080e-02
Training CC : 0.9178 | Validation CC : 0.9088
** Classification Losses **
Training Loss : 3.2941e-01 | Validation Loss : 5.4766e-01
Training F1 Macro: 0.8308 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.8350 | Validation F1 Micro : 0.7240
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3697e-02 | Validation Loss : 2.6027e-02
Training CC : 0.9187 | Validation CC : 0.9093
** Classification Losses **
Training Loss : 3.7806e-01 | Validation Loss : 5.9362e-01
Training F1 Macro: 0.8027 | Validation F1 Macro : 0.7129
Training F1 Micro: 0.8003 | Validation F1 Micro : 0.7080
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3453e-02 | Validation Loss : 2.6097e-02
Training CC : 0.9194 | Validation CC : 0.9090
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7476e-01 | Validation Loss : 6.2428e-01
Training F1 Macro: 0.8231 | Validation F1 Macro : 0.7009
Training F1 Micro: 0.8139 | Validation F1 Micro : 0.7000
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3477e-02 | Validation Loss : 2.6374e-02
Training CC : 0.9191 | Validation CC : 0.9080
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8624e-01 | Validation Loss : 5.8494e-01
Training F1 Macro: 0.8041 | Validation F1 Macro : 0.7385
Training F1 Micro: 0.8099 | Validation F1 Micro : 0.7380
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3803e-02 | Validation Loss : 2.6619e-02
Training CC : 0.9180 | Validation CC : 0.9070
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2768e-01 | Validation Loss : 5.9936e-01
Training F1 Macro: 0.7638 | Validation F1 Macro : 0.7127
Training F1 Micro: 0.7632 | Validation F1 Micro : 0.7060
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4035e-02 | Validation Loss : 2.6644e-02
Training CC : 0.9173 | Validation CC : 0.9069
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3958e-01 | Validation Loss : 5.3175e-01
Training F1 Macro: 0.7599 | Validation F1 Macro : 0.7611
Training F1 Micro: 0.7734 | Validation F1 Micro : 0.7600
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3959e-02 | Validation Loss : 2.6698e-02
Training CC : 0.9174 | Validation CC : 0.9067
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1188e-01 | Validation Loss : 5.3878e-01
Training F1 Macro: 0.7632 | Validation F1 Macro : 0.7382
Training F1 Micro: 0.7644 | Validation F1 Micro : 0.7360
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3953e-02 | Validation Loss : 2.6254e-02
Training CC : 0.9180 | Validation CC : 0.9085
** Classification Losses **
Training Loss : 3.8669e-01 | Validation Loss : 5.8408e-01
Training F1 Macro: 0.7682 | Validation F1 Macro : 0.7204
Training F1 Micro: 0.7742 | Validation F1 Micro : 0.7180
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3426e-02 | Validation Loss : 2.5961e-02
Training CC : 0.9194 | Validation CC : 0.9092
** Classification Losses **
Training Loss : 4.0640e-01 | Validation Loss : 5.6016e-01
Training F1 Macro: 0.8292 | Validation F1 Macro : 0.7247
Training F1 Micro: 0.8256 | Validation F1 Micro : 0.7200
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3661e-02 | Validation Loss : 2.5726e-02
Training CC : 0.9195 | Validation CC : 0.9101
** Classification Losses **
Training Loss : 3.5713e-01 | Validation Loss : 5.4621e-01
Training F1 Macro: 0.7937 | Validation F1 Macro : 0.7582
Training F1 Micro: 0.8130 | Validation F1 Micro : 0.7580
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3001e-02 | Validation Loss : 2.5678e-02
Training CC : 0.9209 | Validation CC : 0.9105
** Classification Losses **
Training Loss : 3.6447e-01 | Validation Loss : 5.5953e-01
Training F1 Macro: 0.7977 | Validation F1 Macro : 0.7349
Training F1 Micro: 0.8102 | Validation F1 Micro : 0.7340
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3248e-02 | Validation Loss : 2.5624e-02
Training CC : 0.9207 | Validation CC : 0.9106
** Classification Losses **
Training Loss : 3.7288e-01 | Validation Loss : 5.9653e-01
Training F1 Macro: 0.7802 | Validation F1 Macro : 0.7289
Training F1 Micro: 0.7860 | Validation F1 Micro : 0.7240
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2823e-02 | Validation Loss : 2.5754e-02
Training CC : 0.9216 | Validation CC : 0.9101
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4092e-01 | Validation Loss : 6.2177e-01
Training F1 Macro: 0.7982 | Validation F1 Macro : 0.7142
Training F1 Micro: 0.7965 | Validation F1 Micro : 0.7120
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3671e-02 | Validation Loss : 2.5884e-02
Training CC : 0.9199 | Validation CC : 0.9096
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2840e-01 | Validation Loss : 6.0474e-01
Training F1 Macro: 0.7446 | Validation F1 Macro : 0.7123
Training F1 Micro: 0.7439 | Validation F1 Micro : 0.7100
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3524e-02 | Validation Loss : 2.6077e-02
Training CC : 0.9199 | Validation CC : 0.9089
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.0897e-01 | Validation Loss : 6.1187e-01
Training F1 Macro: 0.8483 | Validation F1 Macro : 0.7093
Training F1 Micro: 0.8461 | Validation F1 Micro : 0.7060
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3653e-02 | Validation Loss : 2.6573e-02
Training CC : 0.9190 | Validation CC : 0.9071
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0184e-01 | Validation Loss : 6.0153e-01
Training F1 Macro: 0.7723 | Validation F1 Macro : 0.7158
Training F1 Micro: 0.7681 | Validation F1 Micro : 0.7120
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4039e-02 | Validation Loss : 2.7049e-02
Training CC : 0.9173 | Validation CC : 0.9053
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2186e-01 | Validation Loss : 5.9195e-01
Training F1 Macro: 0.7836 | Validation F1 Macro : 0.7104
Training F1 Micro: 0.7822 | Validation F1 Micro : 0.7080
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3883e-02 | Validation Loss : 2.6176e-02
Training CC : 0.9181 | Validation CC : 0.9087
** Classification Losses **
Training Loss : 3.3897e-01 | Validation Loss : 5.7113e-01
Training F1 Macro: 0.8108 | Validation F1 Macro : 0.7178
Training F1 Micro: 0.8088 | Validation F1 Micro : 0.7160
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3457e-02 | Validation Loss : 2.5850e-02
Training CC : 0.9197 | Validation CC : 0.9097
** Classification Losses **
Training Loss : 4.0861e-01 | Validation Loss : 5.7446e-01
Training F1 Macro: 0.7692 | Validation F1 Macro : 0.7148
Training F1 Micro: 0.7621 | Validation F1 Micro : 0.7100
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3120e-02 | Validation Loss : 2.5724e-02
Training CC : 0.9207 | Validation CC : 0.9102
** Classification Losses **
Training Loss : 3.8243e-01 | Validation Loss : 5.9760e-01
Training F1 Macro: 0.8027 | Validation F1 Macro : 0.7116
Training F1 Micro: 0.8103 | Validation F1 Micro : 0.7080
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3150e-02 | Validation Loss : 2.5571e-02
Training CC : 0.9212 | Validation CC : 0.9109
** Classification Losses **
Training Loss : 3.9996e-01 | Validation Loss : 6.2988e-01
Training F1 Macro: 0.7901 | Validation F1 Macro : 0.7154
Training F1 Micro: 0.7951 | Validation F1 Micro : 0.7120
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3119e-02 | Validation Loss : 2.5315e-02
Training CC : 0.9217 | Validation CC : 0.9116
** Classification Losses **
Training Loss : 4.0290e-01 | Validation Loss : 6.0827e-01
Training F1 Macro: 0.7799 | Validation F1 Macro : 0.7042
Training F1 Micro: 0.7800 | Validation F1 Micro : 0.7000
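The log above shows an alternating schedule: in odd epochs the "Now Optimizing" marker sits on the autoencoding (reconstruction) loss, in even epochs on the classification loss, with five mini-epochs per turn, while the idle objective is still evaluated for monitoring. The sketch below illustrates that scheduling pattern on a toy linear autoencoder and logistic classifier; it is a minimal illustration of the idea, not the pyMSDtorch implementation, and all names (`W_ae`, `w_clf`, `ae_loss`, `clf_loss`) are made up for this example.

```python
# Hedged sketch of alternating-objective training (NOT the pyMSDtorch API):
# odd epochs take gradient steps on the reconstruction loss, even epochs on
# the classification loss; the non-optimized loss is only monitored.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))               # toy inputs
y = (X[:, 0] > 0).astype(float)            # toy binary labels

W_ae = rng.normal(scale=0.1, size=(8, 8))  # "autoencoder" weights (linear)
w_clf = rng.normal(scale=0.1, size=8)      # "classifier" weights (logistic)

def ae_loss(W):
    """Mean squared reconstruction error."""
    return np.mean((X @ W - X) ** 2)

def clf_loss(w):
    """Binary cross-entropy of a logistic model."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

ae0, clf0 = ae_loss(W_ae), clf_loss(w_clf)  # initial losses, for reference
lr, mini_epochs = 1e-1, 5
for epoch in range(1, 7):                   # 6 toy epochs instead of 25
    optimize_ae = (epoch % 2 == 1)          # odd epochs: autoencoding turn
    for mini in range(1, mini_epochs + 1):
        if optimize_ae:
            grad = 2.0 / len(X) * X.T @ (X @ W_ae - X)  # d(ae_loss)/dW
            W_ae -= lr * grad
        else:
            p = 1.0 / (1.0 + np.exp(-(X @ w_clf)))
            grad = X.T @ (p - y) / len(X)               # d(clf_loss)/dw
            w_clf -= lr * grad
    tag = "Autoencoding" if optimize_ae else "Classification"
    print(f"Epoch {epoch}: optimizing {tag} | "
          f"AE loss {ae_loss(W_ae):.4e} | CLF loss {clf_loss(w_clf):.4e}")
```

Because each objective only moves its own parameters, both losses decrease over their respective turns, mirroring the gradual drop in reconstruction loss and the noisier F1 trajectory seen in the log.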
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2705e-01 | Validation Loss : 1.7110e-01
Training CC : 0.2895 | Validation CC : 0.4784
** Classification Losses **
Training Loss : 1.4296e+00 | Validation Loss : 1.4379e+00
Training F1 Macro: 0.2159 | Validation F1 Macro : 0.2300
Training F1 Micro: 0.2618 | Validation F1 Micro : 0.2780
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.4663e-01 | Validation Loss : 1.1113e-01
Training CC : 0.5668 | Validation CC : 0.6580
** Classification Losses **
Training Loss : 1.4465e+00 | Validation Loss : 1.4491e+00
Training F1 Macro: 0.2175 | Validation F1 Macro : 0.2241
Training F1 Micro: 0.2676 | Validation F1 Micro : 0.2700
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.6528e-02 | Validation Loss : 7.8370e-02
Training CC : 0.7043 | Validation CC : 0.7430
** Classification Losses **
Training Loss : 1.4374e+00 | Validation Loss : 1.4724e+00
Training F1 Macro: 0.1878 | Validation F1 Macro : 0.1974
Training F1 Micro: 0.2274 | Validation F1 Micro : 0.2380
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.9934e-02 | Validation Loss : 6.0811e-02
Training CC : 0.7679 | Validation CC : 0.7858
** Classification Losses **
Training Loss : 1.4591e+00 | Validation Loss : 1.4892e+00
Training F1 Macro: 0.2048 | Validation F1 Macro : 0.1822
Training F1 Micro: 0.2412 | Validation F1 Micro : 0.2200
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.6493e-02 | Validation Loss : 5.2235e-02
Training CC : 0.8054 | Validation CC : 0.8131
** Classification Losses **
Training Loss : 1.5226e+00 | Validation Loss : 1.4661e+00
Training F1 Macro: 0.1947 | Validation F1 Macro : 0.2177
Training F1 Micro: 0.2311 | Validation F1 Micro : 0.2680
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.1552e-02 | Validation Loss : 5.3801e-02
Training CC : 0.8180 | Validation CC : 0.8078
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3123e+00 | Validation Loss : 1.1652e+00
Training F1 Macro: 0.3458 | Validation F1 Macro : 0.4451
Training F1 Micro: 0.3794 | Validation F1 Micro : 0.4580
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.3683e-02 | Validation Loss : 5.5586e-02
Training CC : 0.8110 | Validation CC : 0.8012
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.5226e-01 | Validation Loss : 9.9723e-01
Training F1 Macro: 0.6358 | Validation F1 Macro : 0.5841
Training F1 Micro: 0.6444 | Validation F1 Micro : 0.5880
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.4815e-02 | Validation Loss : 5.5791e-02
Training CC : 0.8070 | Validation CC : 0.8005
** Classification Losses ** <---- Now Optimizing
Training Loss : 8.0536e-01 | Validation Loss : 8.8785e-01
Training F1 Macro: 0.7244 | Validation F1 Macro : 0.6715
Training F1 Micro: 0.7296 | Validation F1 Micro : 0.6800
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.4530e-02 | Validation Loss : 5.5986e-02
Training CC : 0.8073 | Validation CC : 0.7991
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.9603e-01 | Validation Loss : 7.9380e-01
Training F1 Macro: 0.7668 | Validation F1 Macro : 0.6725
Training F1 Micro: 0.7691 | Validation F1 Micro : 0.6780
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.5198e-02 | Validation Loss : 5.7264e-02
Training CC : 0.8043 | Validation CC : 0.7932
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.4089e-01 | Validation Loss : 7.5036e-01
Training F1 Macro: 0.7730 | Validation F1 Macro : 0.7074
Training F1 Micro: 0.7819 | Validation F1 Micro : 0.7140
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.2028e-02 | Validation Loss : 4.9477e-02
Training CC : 0.8142 | Validation CC : 0.8220
** Classification Losses **
Training Loss : 6.3270e-01 | Validation Loss : 7.5662e-01
Training F1 Macro: 0.7476 | Validation F1 Macro : 0.7020
Training F1 Micro: 0.7433 | Validation F1 Micro : 0.7060
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.6612e-02 | Validation Loss : 4.5484e-02
Training CC : 0.8340 | Validation CC : 0.8357
** Classification Losses **
Training Loss : 6.0504e-01 | Validation Loss : 7.8528e-01
Training F1 Macro: 0.7529 | Validation F1 Macro : 0.6780
Training F1 Micro: 0.7679 | Validation F1 Micro : 0.6840
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.4231e-02 | Validation Loss : 4.2851e-02
Training CC : 0.8439 | Validation CC : 0.8453
** Classification Losses **
Training Loss : 6.7146e-01 | Validation Loss : 7.7638e-01
Training F1 Macro: 0.7386 | Validation F1 Macro : 0.6776
Training F1 Micro: 0.7292 | Validation F1 Micro : 0.6840
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.1373e-02 | Validation Loss : 4.0788e-02
Training CC : 0.8536 | Validation CC : 0.8531
** Classification Losses **
Training Loss : 6.1769e-01 | Validation Loss : 7.7043e-01
Training F1 Macro: 0.8110 | Validation F1 Macro : 0.7007
Training F1 Micro: 0.8130 | Validation F1 Micro : 0.7020
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9143e-02 | Validation Loss : 3.9150e-02
Training CC : 0.8611 | Validation CC : 0.8592
** Classification Losses **
Training Loss : 5.2986e-01 | Validation Loss : 7.8733e-01
Training F1 Macro: 0.8503 | Validation F1 Macro : 0.6689
Training F1 Micro: 0.8555 | Validation F1 Micro : 0.6780
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.8251e-02 | Validation Loss : 3.9440e-02
Training CC : 0.8644 | Validation CC : 0.8581
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.9256e-01 | Validation Loss : 7.0662e-01
Training F1 Macro: 0.7883 | Validation F1 Macro : 0.7175
Training F1 Micro: 0.7907 | Validation F1 Micro : 0.7180
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.8762e-02 | Validation Loss : 4.0373e-02
Training CC : 0.8626 | Validation CC : 0.8545
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.7322e-01 | Validation Loss : 7.1628e-01
Training F1 Macro: 0.7533 | Validation F1 Macro : 0.7167
Training F1 Micro: 0.7537 | Validation F1 Micro : 0.7160
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9751e-02 | Validation Loss : 4.1342e-02
Training CC : 0.8588 | Validation CC : 0.8507
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.2178e-01 | Validation Loss : 6.1471e-01
Training F1 Macro: 0.7793 | Validation F1 Macro : 0.7631
Training F1 Micro: 0.7832 | Validation F1 Micro : 0.7640
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.0705e-02 | Validation Loss : 4.1987e-02
Training CC : 0.8555 | Validation CC : 0.8483
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8844e-01 | Validation Loss : 5.9272e-01
Training F1 Macro: 0.7581 | Validation F1 Macro : 0.7708
Training F1 Micro: 0.7588 | Validation F1 Micro : 0.7680
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.1218e-02 | Validation Loss : 4.2263e-02
Training CC : 0.8537 | Validation CC : 0.8473
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4921e-01 | Validation Loss : 6.1386e-01
Training F1 Macro: 0.8533 | Validation F1 Macro : 0.7625
Training F1 Micro: 0.8547 | Validation F1 Micro : 0.7600
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9674e-02 | Validation Loss : 3.9346e-02
Training CC : 0.8595 | Validation CC : 0.8585
** Classification Losses **
Training Loss : 4.6761e-01 | Validation Loss : 6.2608e-01
Training F1 Macro: 0.7802 | Validation F1 Macro : 0.7434
Training F1 Micro: 0.7753 | Validation F1 Micro : 0.7460
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.7703e-02 | Validation Loss : 3.7543e-02
Training CC : 0.8669 | Validation CC : 0.8655
** Classification Losses **
Training Loss : 4.5910e-01 | Validation Loss : 6.1408e-01
Training F1 Macro: 0.7426 | Validation F1 Macro : 0.7599
Training F1 Micro: 0.7620 | Validation F1 Micro : 0.7580
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6339e-02 | Validation Loss : 3.6288e-02
Training CC : 0.8721 | Validation CC : 0.8702
** Classification Losses **
Training Loss : 4.3843e-01 | Validation Loss : 5.7713e-01
Training F1 Macro: 0.7980 | Validation F1 Macro : 0.7745
Training F1 Micro: 0.8009 | Validation F1 Micro : 0.7760
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5775e-02 | Validation Loss : 3.5375e-02
Training CC : 0.8756 | Validation CC : 0.8737
** Classification Losses **
Training Loss : 5.2118e-01 | Validation Loss : 6.3939e-01
Training F1 Macro: 0.7738 | Validation F1 Macro : 0.7187
Training F1 Micro: 0.7645 | Validation F1 Micro : 0.7140
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4111e-02 | Validation Loss : 3.4497e-02
Training CC : 0.8804 | Validation CC : 0.8771
** Classification Losses **
Training Loss : 4.6413e-01 | Validation Loss : 6.1164e-01
Training F1 Macro: 0.7902 | Validation F1 Macro : 0.7518
Training F1 Micro: 0.7905 | Validation F1 Micro : 0.7580
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3665e-02 | Validation Loss : 3.4593e-02
Training CC : 0.8820 | Validation CC : 0.8767
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8471e-01 | Validation Loss : 5.9280e-01
Training F1 Macro: 0.7814 | Validation F1 Macro : 0.7406
Training F1 Micro: 0.7816 | Validation F1 Micro : 0.7460
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3812e-02 | Validation Loss : 3.4848e-02
Training CC : 0.8815 | Validation CC : 0.8757
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.6772e-01 | Validation Loss : 5.8430e-01
Training F1 Macro: 0.7472 | Validation F1 Macro : 0.7385
Training F1 Micro: 0.7548 | Validation F1 Micro : 0.7380
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4300e-02 | Validation Loss : 3.5231e-02
Training CC : 0.8801 | Validation CC : 0.8743
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.6173e-01 | Validation Loss : 5.9315e-01
Training F1 Macro: 0.7749 | Validation F1 Macro : 0.7516
Training F1 Micro: 0.7728 | Validation F1 Micro : 0.7540
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4432e-02 | Validation Loss : 3.5666e-02
Training CC : 0.8791 | Validation CC : 0.8727
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5071e-01 | Validation Loss : 5.7186e-01
Training F1 Macro: 0.7450 | Validation F1 Macro : 0.7857
Training F1 Micro: 0.7578 | Validation F1 Micro : 0.7840
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4861e-02 | Validation Loss : 3.6007e-02
Training CC : 0.8776 | Validation CC : 0.8714
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4317e-01 | Validation Loss : 6.0420e-01
Training F1 Macro: 0.7874 | Validation F1 Macro : 0.7462
Training F1 Micro: 0.7856 | Validation F1 Micro : 0.7440
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4174e-02 | Validation Loss : 3.4640e-02
Training CC : 0.8802 | Validation CC : 0.8769
** Classification Losses **
Training Loss : 3.8424e-01 | Validation Loss : 6.1632e-01
Training F1 Macro: 0.8101 | Validation F1 Macro : 0.7365
Training F1 Micro: 0.8106 | Validation F1 Micro : 0.7320
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3192e-02 | Validation Loss : 3.3463e-02
Training CC : 0.8840 | Validation CC : 0.8811
** Classification Losses **
Training Loss : 3.9641e-01 | Validation Loss : 6.1010e-01
Training F1 Macro: 0.8010 | Validation F1 Macro : 0.7225
Training F1 Micro: 0.8004 | Validation F1 Micro : 0.7240
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2186e-02 | Validation Loss : 3.2691e-02
Training CC : 0.8874 | Validation CC : 0.8842
** Classification Losses **
Training Loss : 4.2016e-01 | Validation Loss : 5.8771e-01
Training F1 Macro: 0.7606 | Validation F1 Macro : 0.7268
Training F1 Micro: 0.7619 | Validation F1 Micro : 0.7280
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1832e-02 | Validation Loss : 3.1999e-02
Training CC : 0.8896 | Validation CC : 0.8867
** Classification Losses **
Training Loss : 3.5950e-01 | Validation Loss : 6.0531e-01
Training F1 Macro: 0.8260 | Validation F1 Macro : 0.7431
Training F1 Micro: 0.8337 | Validation F1 Micro : 0.7440
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0798e-02 | Validation Loss : 3.1358e-02
Training CC : 0.8927 | Validation CC : 0.8891
** Classification Losses **
Training Loss : 4.0665e-01 | Validation Loss : 5.9983e-01
Training F1 Macro: 0.7770 | Validation F1 Macro : 0.7417
Training F1 Micro: 0.7730 | Validation F1 Micro : 0.7420
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0239e-02 | Validation Loss : 3.1431e-02
Training CC : 0.8944 | Validation CC : 0.8888
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.6398e-01 | Validation Loss : 6.4087e-01
Training F1 Macro: 0.7610 | Validation F1 Macro : 0.7223
Training F1 Micro: 0.7600 | Validation F1 Micro : 0.7200
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0901e-02 | Validation Loss : 3.1572e-02
Training CC : 0.8932 | Validation CC : 0.8883
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0732e-01 | Validation Loss : 6.6176e-01
Training F1 Macro: 0.7656 | Validation F1 Macro : 0.6959
Training F1 Micro: 0.7941 | Validation F1 Micro : 0.6960
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1172e-02 | Validation Loss : 3.1785e-02
Training CC : 0.8925 | Validation CC : 0.8874
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0241e-01 | Validation Loss : 5.8185e-01
Training F1 Macro: 0.7866 | Validation F1 Macro : 0.7662
Training F1 Micro: 0.7794 | Validation F1 Micro : 0.7680
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0659e-02 | Validation Loss : 3.2018e-02
Training CC : 0.8929 | Validation CC : 0.8866
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2110e-01 | Validation Loss : 5.8669e-01
Training F1 Macro: 0.8239 | Validation F1 Macro : 0.7396
Training F1 Micro: 0.8176 | Validation F1 Micro : 0.7400
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1000e-02 | Validation Loss : 3.2224e-02
Training CC : 0.8919 | Validation CC : 0.8858
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.1106e-01 | Validation Loss : 5.7356e-01
Training F1 Macro: 0.7545 | Validation F1 Macro : 0.7514
Training F1 Micro: 0.7545 | Validation F1 Micro : 0.7480
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0744e-02 | Validation Loss : 3.1350e-02
Training CC : 0.8929 | Validation CC : 0.8891
** Classification Losses **
Training Loss : 3.4963e-01 | Validation Loss : 6.0323e-01
Training F1 Macro: 0.8320 | Validation F1 Macro : 0.7503
Training F1 Micro: 0.8275 | Validation F1 Micro : 0.7480
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9909e-02 | Validation Loss : 3.0665e-02
Training CC : 0.8958 | Validation CC : 0.8917
** Classification Losses **
Training Loss : 4.1400e-01 | Validation Loss : 5.7290e-01
Training F1 Macro: 0.7948 | Validation F1 Macro : 0.7698
Training F1 Micro: 0.8053 | Validation F1 Micro : 0.7720
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9344e-02 | Validation Loss : 3.0112e-02
Training CC : 0.8981 | Validation CC : 0.8939
** Classification Losses **
Training Loss : 4.2138e-01 | Validation Loss : 5.9241e-01
Training F1 Macro: 0.7544 | Validation F1 Macro : 0.7349
Training F1 Micro: 0.7553 | Validation F1 Micro : 0.7360
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8733e-02 | Validation Loss : 2.9596e-02
Training CC : 0.9002 | Validation CC : 0.8959
** Classification Losses **
Training Loss : 4.2466e-01 | Validation Loss : 5.5691e-01
Training F1 Macro: 0.7652 | Validation F1 Macro : 0.7548
Training F1 Micro: 0.7662 | Validation F1 Micro : 0.7560
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8327e-02 | Validation Loss : 2.9097e-02
Training CC : 0.9020 | Validation CC : 0.8976
** Classification Losses **
Training Loss : 4.0730e-01 | Validation Loss : 6.1402e-01
Training F1 Macro: 0.7754 | Validation F1 Macro : 0.7132
Training F1 Micro: 0.7886 | Validation F1 Micro : 0.7180
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8169e-02 | Validation Loss : 2.9160e-02
Training CC : 0.9027 | Validation CC : 0.8973
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6505e-01 | Validation Loss : 5.9957e-01
Training F1 Macro: 0.8216 | Validation F1 Macro : 0.7341
Training F1 Micro: 0.8180 | Validation F1 Micro : 0.7340
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7954e-02 | Validation Loss : 2.9265e-02
Training CC : 0.9030 | Validation CC : 0.8969
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9511e-01 | Validation Loss : 5.9053e-01
Training F1 Macro: 0.8117 | Validation F1 Macro : 0.7489
Training F1 Micro: 0.8121 | Validation F1 Micro : 0.7460
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8648e-02 | Validation Loss : 2.9408e-02
Training CC : 0.9016 | Validation CC : 0.8964
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7133e-01 | Validation Loss : 6.0220e-01
Training F1 Macro: 0.7455 | Validation F1 Macro : 0.7342
Training F1 Micro: 0.7518 | Validation F1 Micro : 0.7300
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8246e-02 | Validation Loss : 2.9565e-02
Training CC : 0.9021 | Validation CC : 0.8958
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3227e-01 | Validation Loss : 6.1122e-01
Training F1 Macro: 0.8061 | Validation F1 Macro : 0.7055
Training F1 Micro: 0.8167 | Validation F1 Micro : 0.7100
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8678e-02 | Validation Loss : 2.9716e-02
Training CC : 0.9011 | Validation CC : 0.8953
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7788e-01 | Validation Loss : 5.8467e-01
Training F1 Macro: 0.7431 | Validation F1 Macro : 0.7589
Training F1 Micro: 0.7393 | Validation F1 Micro : 0.7620
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8164e-02 | Validation Loss : 2.9146e-02
Training CC : 0.9024 | Validation CC : 0.8977
** Classification Losses **
Training Loss : 4.3632e-01 | Validation Loss : 5.9553e-01
Training F1 Macro: 0.7653 | Validation F1 Macro : 0.7267
Training F1 Micro: 0.7708 | Validation F1 Micro : 0.7240
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7706e-02 | Validation Loss : 2.8590e-02
Training CC : 0.9042 | Validation CC : 0.8995
** Classification Losses **
Training Loss : 4.0651e-01 | Validation Loss : 6.0184e-01
Training F1 Macro: 0.7642 | Validation F1 Macro : 0.7426
Training F1 Micro: 0.7718 | Validation F1 Micro : 0.7460
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7251e-02 | Validation Loss : 2.8241e-02
Training CC : 0.9059 | Validation CC : 0.9010
** Classification Losses **
Training Loss : 4.2742e-01 | Validation Loss : 6.2016e-01
Training F1 Macro: 0.7951 | Validation F1 Macro : 0.7085
Training F1 Micro: 0.7880 | Validation F1 Micro : 0.7120
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6784e-02 | Validation Loss : 2.7819e-02
Training CC : 0.9074 | Validation CC : 0.9023
** Classification Losses **
Training Loss : 4.0396e-01 | Validation Loss : 6.0088e-01
Training F1 Macro: 0.7764 | Validation F1 Macro : 0.7525
Training F1 Micro: 0.7702 | Validation F1 Micro : 0.7540
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6409e-02 | Validation Loss : 2.7631e-02
Training CC : 0.9087 | Validation CC : 0.9034
** Classification Losses **
Training Loss : 3.9296e-01 | Validation Loss : 5.9762e-01
Training F1 Macro: 0.7838 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.7993 | Validation F1 Micro : 0.7280
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6153e-02 | Validation Loss : 2.7668e-02
Training CC : 0.9095 | Validation CC : 0.9033
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4560e-01 | Validation Loss : 5.7894e-01
Training F1 Macro: 0.7923 | Validation F1 Macro : 0.7395
Training F1 Micro: 0.7803 | Validation F1 Micro : 0.7400
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6633e-02 | Validation Loss : 2.7746e-02
Training CC : 0.9086 | Validation CC : 0.9029
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1416e-01 | Validation Loss : 6.3165e-01
Training F1 Macro: 0.7819 | Validation F1 Macro : 0.6995
Training F1 Micro: 0.7689 | Validation F1 Micro : 0.7040
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6475e-02 | Validation Loss : 2.7830e-02
Training CC : 0.9088 | Validation CC : 0.9026
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5037e-01 | Validation Loss : 6.2827e-01
Training F1 Macro: 0.7756 | Validation F1 Macro : 0.7362
Training F1 Micro: 0.7686 | Validation F1 Micro : 0.7300
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6632e-02 | Validation Loss : 2.7919e-02
Training CC : 0.9083 | Validation CC : 0.9022
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2798e-01 | Validation Loss : 5.6208e-01
Training F1 Macro: 0.7774 | Validation F1 Macro : 0.7634
Training F1 Micro: 0.7670 | Validation F1 Micro : 0.7640
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6466e-02 | Validation Loss : 2.7994e-02
Training CC : 0.9084 | Validation CC : 0.9019
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5428e-01 | Validation Loss : 6.1699e-01
Training F1 Macro: 0.7458 | Validation F1 Macro : 0.7400
Training F1 Micro: 0.7373 | Validation F1 Micro : 0.7360
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6652e-02 | Validation Loss : 2.7525e-02
Training CC : 0.9085 | Validation CC : 0.9037
** Classification Losses **
Training Loss : 4.1181e-01 | Validation Loss : 6.6353e-01
Training F1 Macro: 0.7910 | Validation F1 Macro : 0.7021
Training F1 Micro: 0.7845 | Validation F1 Micro : 0.7040
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5811e-02 | Validation Loss : 2.7168e-02
Training CC : 0.9106 | Validation CC : 0.9048
** Classification Losses **
Training Loss : 4.6287e-01 | Validation Loss : 6.4651e-01
Training F1 Macro: 0.7531 | Validation F1 Macro : 0.7147
Training F1 Micro: 0.7407 | Validation F1 Micro : 0.7160
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5650e-02 | Validation Loss : 2.6851e-02
Training CC : 0.9117 | Validation CC : 0.9060
** Classification Losses **
Training Loss : 3.5683e-01 | Validation Loss : 6.2130e-01
Training F1 Macro: 0.8147 | Validation F1 Macro : 0.7374
Training F1 Micro: 0.8184 | Validation F1 Micro : 0.7380
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5717e-02 | Validation Loss : 2.7017e-02
Training CC : 0.9122 | Validation CC : 0.9062
** Classification Losses **
Training Loss : 4.7267e-01 | Validation Loss : 6.3952e-01
Training F1 Macro: 0.7621 | Validation F1 Macro : 0.7143
Training F1 Micro: 0.7478 | Validation F1 Micro : 0.7160
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6007e-02 | Validation Loss : 2.6569e-02
Training CC : 0.9121 | Validation CC : 0.9070
** Classification Losses **
Training Loss : 3.8878e-01 | Validation Loss : 6.1625e-01
Training F1 Macro: 0.8227 | Validation F1 Macro : 0.7083
Training F1 Micro: 0.8116 | Validation F1 Micro : 0.7080
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5016e-02 | Validation Loss : 2.6628e-02
Training CC : 0.9137 | Validation CC : 0.9067
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3823e-01 | Validation Loss : 6.2212e-01
Training F1 Macro: 0.7735 | Validation F1 Macro : 0.7161
Training F1 Micro: 0.7693 | Validation F1 Micro : 0.7200
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5464e-02 | Validation Loss : 2.6716e-02
Training CC : 0.9129 | Validation CC : 0.9064
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0758e-01 | Validation Loss : 6.3108e-01
Training F1 Macro: 0.7975 | Validation F1 Macro : 0.7044
Training F1 Micro: 0.7995 | Validation F1 Micro : 0.7100
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5343e-02 | Validation Loss : 2.6825e-02
Training CC : 0.9130 | Validation CC : 0.9060
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6403e-01 | Validation Loss : 6.2142e-01
Training F1 Macro: 0.8214 | Validation F1 Macro : 0.7162
Training F1 Micro: 0.8238 | Validation F1 Micro : 0.7180
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5372e-02 | Validation Loss : 2.6953e-02
Training CC : 0.9128 | Validation CC : 0.9056
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0257e-01 | Validation Loss : 6.0927e-01
Training F1 Macro: 0.7738 | Validation F1 Macro : 0.7383
Training F1 Micro: 0.7758 | Validation F1 Micro : 0.7380
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5558e-02 | Validation Loss : 2.7066e-02
Training CC : 0.9123 | Validation CC : 0.9052
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2582e-01 | Validation Loss : 5.9285e-01
Training F1 Macro: 0.8369 | Validation F1 Macro : 0.7342
Training F1 Micro: 0.8320 | Validation F1 Micro : 0.7360
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5350e-02 | Validation Loss : 2.6629e-02
Training CC : 0.9130 | Validation CC : 0.9071
** Classification Losses **
Training Loss : 3.8014e-01 | Validation Loss : 6.3965e-01
Training F1 Macro: 0.7853 | Validation F1 Macro : 0.7028
Training F1 Micro: 0.7886 | Validation F1 Micro : 0.7080
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4866e-02 | Validation Loss : 2.6243e-02
Training CC : 0.9145 | Validation CC : 0.9082
** Classification Losses **
Training Loss : 3.2400e-01 | Validation Loss : 5.8670e-01
Training F1 Macro: 0.8534 | Validation F1 Macro : 0.7298
Training F1 Micro: 0.8466 | Validation F1 Micro : 0.7340
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4907e-02 | Validation Loss : 2.6037e-02
Training CC : 0.9151 | Validation CC : 0.9090
** Classification Losses **
Training Loss : 2.6684e-01 | Validation Loss : 6.1345e-01
Training F1 Macro: 0.8703 | Validation F1 Macro : 0.7191
Training F1 Micro: 0.8653 | Validation F1 Micro : 0.7220
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4500e-02 | Validation Loss : 2.5760e-02
Training CC : 0.9161 | Validation CC : 0.9100
** Classification Losses **
Training Loss : 4.5668e-01 | Validation Loss : 6.1409e-01
Training F1 Macro: 0.7715 | Validation F1 Macro : 0.7072
Training F1 Micro: 0.7539 | Validation F1 Micro : 0.7140
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3951e-02 | Validation Loss : 2.5633e-02
Training CC : 0.9175 | Validation CC : 0.9106
** Classification Losses **
Training Loss : 3.2200e-01 | Validation Loss : 6.8878e-01
Training F1 Macro: 0.8374 | Validation F1 Macro : 0.6687
Training F1 Micro: 0.8266 | Validation F1 Micro : 0.6720
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3889e-02 | Validation Loss : 2.5692e-02
Training CC : 0.9179 | Validation CC : 0.9103
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1081e-01 | Validation Loss : 6.1263e-01
Training F1 Macro: 0.8066 | Validation F1 Macro : 0.7055
Training F1 Micro: 0.7967 | Validation F1 Micro : 0.7120
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4225e-02 | Validation Loss : 2.5781e-02
Training CC : 0.9172 | Validation CC : 0.9100
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7208e-01 | Validation Loss : 6.4370e-01
Training F1 Macro: 0.7282 | Validation F1 Macro : 0.6890
Training F1 Micro: 0.7298 | Validation F1 Micro : 0.6900
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4172e-02 | Validation Loss : 2.5860e-02
Training CC : 0.9171 | Validation CC : 0.9097
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6372e-01 | Validation Loss : 6.2282e-01
Training F1 Macro: 0.8466 | Validation F1 Macro : 0.7381
Training F1 Micro: 0.8357 | Validation F1 Micro : 0.7440
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4182e-02 | Validation Loss : 2.5949e-02
Training CC : 0.9170 | Validation CC : 0.9093
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1720e-01 | Validation Loss : 6.1148e-01
Training F1 Macro: 0.7864 | Validation F1 Macro : 0.7346
Training F1 Micro: 0.7747 | Validation F1 Micro : 0.7360
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4141e-02 | Validation Loss : 2.6037e-02
Training CC : 0.9169 | Validation CC : 0.9090
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4981e-01 | Validation Loss : 6.7430e-01
Training F1 Macro: 0.8416 | Validation F1 Macro : 0.6761
Training F1 Micro: 0.8426 | Validation F1 Micro : 0.6800
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4070e-02 | Validation Loss : 2.5767e-02
Training CC : 0.9173 | Validation CC : 0.9102
** Classification Losses **
Training Loss : 3.7768e-01 | Validation Loss : 6.2545e-01
Training F1 Macro: 0.7891 | Validation F1 Macro : 0.7141
Training F1 Micro: 0.8005 | Validation F1 Micro : 0.7180
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3736e-02 | Validation Loss : 2.5507e-02
Training CC : 0.9183 | Validation CC : 0.9109
** Classification Losses **
Training Loss : 3.6801e-01 | Validation Loss : 6.1214e-01
Training F1 Macro: 0.8395 | Validation F1 Macro : 0.7111
Training F1 Micro: 0.8319 | Validation F1 Micro : 0.7120
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3780e-02 | Validation Loss : 2.5305e-02
Training CC : 0.9188 | Validation CC : 0.9117
** Classification Losses **
Training Loss : 3.3479e-01 | Validation Loss : 6.4236e-01
Training F1 Macro: 0.8277 | Validation F1 Macro : 0.6933
Training F1 Micro: 0.8293 | Validation F1 Micro : 0.6920
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3964e-02 | Validation Loss : 2.5351e-02
Training CC : 0.9189 | Validation CC : 0.9115
** Classification Losses **
Training Loss : 4.3224e-01 | Validation Loss : 6.2975e-01
Training F1 Macro: 0.8095 | Validation F1 Macro : 0.7150
Training F1 Micro: 0.8071 | Validation F1 Micro : 0.7140
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3598e-02 | Validation Loss : 2.5363e-02
Training CC : 0.9198 | Validation CC : 0.9122
** Classification Losses **
Training Loss : 3.2798e-01 | Validation Loss : 6.5739e-01
Training F1 Macro: 0.8283 | Validation F1 Macro : 0.6851
Training F1 Micro: 0.8336 | Validation F1 Micro : 0.6820
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3448e-02 | Validation Loss : 2.5450e-02
Training CC : 0.9200 | Validation CC : 0.9119
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4653e-01 | Validation Loss : 6.2578e-01
Training F1 Macro: 0.7500 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.7614 | Validation F1 Micro : 0.7300
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3373e-02 | Validation Loss : 2.5550e-02
Training CC : 0.9199 | Validation CC : 0.9115
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6389e-01 | Validation Loss : 6.2219e-01
Training F1 Macro: 0.8119 | Validation F1 Macro : 0.7148
Training F1 Micro: 0.8246 | Validation F1 Micro : 0.7200
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3379e-02 | Validation Loss : 2.5649e-02
Training CC : 0.9196 | Validation CC : 0.9111
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3274e-01 | Validation Loss : 6.2542e-01
Training F1 Macro: 0.7607 | Validation F1 Macro : 0.7185
Training F1 Micro: 0.7639 | Validation F1 Micro : 0.7240
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3511e-02 | Validation Loss : 2.5703e-02
Training CC : 0.9193 | Validation CC : 0.9108
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8005e-01 | Validation Loss : 6.2699e-01
Training F1 Macro: 0.7981 | Validation F1 Macro : 0.7167
Training F1 Micro: 0.7969 | Validation F1 Micro : 0.7180
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3601e-02 | Validation Loss : 2.5756e-02
Training CC : 0.9190 | Validation CC : 0.9106
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4316e-01 | Validation Loss : 6.3244e-01
Training F1 Macro: 0.7459 | Validation F1 Macro : 0.7051
Training F1 Micro: 0.7652 | Validation F1 Micro : 0.7060
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3498e-02 | Validation Loss : 2.5206e-02
Training CC : 0.9196 | Validation CC : 0.9120
** Classification Losses **
Training Loss : 4.0786e-01 | Validation Loss : 6.6374e-01
Training F1 Macro: 0.7613 | Validation F1 Macro : 0.6738
Training F1 Micro: 0.7662 | Validation F1 Micro : 0.6860
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3210e-02 | Validation Loss : 2.5088e-02
Training CC : 0.9205 | Validation CC : 0.9125
** Classification Losses **
Training Loss : 4.1581e-01 | Validation Loss : 6.6263e-01
Training F1 Macro: 0.7811 | Validation F1 Macro : 0.7026
Training F1 Micro: 0.7859 | Validation F1 Micro : 0.7040
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3459e-02 | Validation Loss : 2.5012e-02
Training CC : 0.9206 | Validation CC : 0.9130
** Classification Losses **
Training Loss : 4.2421e-01 | Validation Loss : 6.8449e-01
Training F1 Macro: 0.7775 | Validation F1 Macro : 0.6827
Training F1 Micro: 0.7812 | Validation F1 Micro : 0.6860
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3142e-02 | Validation Loss : 2.4955e-02
Training CC : 0.9213 | Validation CC : 0.9130
** Classification Losses **
Training Loss : 4.4301e-01 | Validation Loss : 6.5272e-01
Training F1 Macro: 0.7923 | Validation F1 Macro : 0.6905
Training F1 Micro: 0.7888 | Validation F1 Micro : 0.6960
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2794e-02 | Validation Loss : 2.4861e-02
Training CC : 0.9221 | Validation CC : 0.9135
** Classification Losses **
Training Loss : 4.3326e-01 | Validation Loss : 6.8550e-01
Training F1 Macro: 0.7770 | Validation F1 Macro : 0.6842
Training F1 Micro: 0.7779 | Validation F1 Micro : 0.6820
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2643e-02 | Validation Loss : 2.4907e-02
Training CC : 0.9224 | Validation CC : 0.9133
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0014e-01 | Validation Loss : 6.4366e-01
Training F1 Macro: 0.7710 | Validation F1 Macro : 0.6912
Training F1 Micro: 0.7777 | Validation F1 Micro : 0.7000
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2823e-02 | Validation Loss : 2.4974e-02
Training CC : 0.9220 | Validation CC : 0.9130
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7690e-01 | Validation Loss : 6.2031e-01
Training F1 Macro: 0.7386 | Validation F1 Macro : 0.7339
Training F1 Micro: 0.7540 | Validation F1 Micro : 0.7340
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2775e-02 | Validation Loss : 2.5045e-02
Training CC : 0.9220 | Validation CC : 0.9128
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7163e-01 | Validation Loss : 6.0180e-01
Training F1 Macro: 0.8047 | Validation F1 Macro : 0.7203
Training F1 Micro: 0.8078 | Validation F1 Micro : 0.7220
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2999e-02 | Validation Loss : 2.5124e-02
Training CC : 0.9215 | Validation CC : 0.9125
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7149e-01 | Validation Loss : 6.3827e-01
Training F1 Macro: 0.8185 | Validation F1 Macro : 0.6883
Training F1 Micro: 0.8117 | Validation F1 Micro : 0.6880
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3022e-02 | Validation Loss : 2.5183e-02
Training CC : 0.9213 | Validation CC : 0.9122
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9046e-01 | Validation Loss : 6.1432e-01
Training F1 Macro: 0.7657 | Validation F1 Macro : 0.7297
Training F1 Micro: 0.7748 | Validation F1 Micro : 0.7320
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2890e-02 | Validation Loss : 2.4908e-02
Training CC : 0.9218 | Validation CC : 0.9131
** Classification Losses **
Training Loss : 3.7590e-01 | Validation Loss : 5.8294e-01
Training F1 Macro: 0.8509 | Validation F1 Macro : 0.7255
Training F1 Micro: 0.8438 | Validation F1 Micro : 0.7260
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2609e-02 | Validation Loss : 2.4708e-02
Training CC : 0.9227 | Validation CC : 0.9140
** Classification Losses **
Training Loss : 3.9399e-01 | Validation Loss : 6.0610e-01
Training F1 Macro: 0.8054 | Validation F1 Macro : 0.7147
Training F1 Micro: 0.8011 | Validation F1 Micro : 0.7220
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2299e-02 | Validation Loss : 2.4608e-02
Training CC : 0.9236 | Validation CC : 0.9145
** Classification Losses **
Training Loss : 4.4129e-01 | Validation Loss : 6.4889e-01
Training F1 Macro: 0.7730 | Validation F1 Macro : 0.6933
Training F1 Micro: 0.7729 | Validation F1 Micro : 0.6960
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2598e-02 | Validation Loss : 2.4509e-02
Training CC : 0.9236 | Validation CC : 0.9147
** Classification Losses **
Training Loss : 3.2335e-01 | Validation Loss : 6.6058e-01
Training F1 Macro: 0.8218 | Validation F1 Macro : 0.6901
Training F1 Micro: 0.8165 | Validation F1 Micro : 0.6920
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2169e-02 | Validation Loss : 2.4443e-02
Training CC : 0.9243 | Validation CC : 0.9150
** Classification Losses **
Training Loss : 3.7680e-01 | Validation Loss : 6.5536e-01
Training F1 Macro: 0.8296 | Validation F1 Macro : 0.6967
Training F1 Micro: 0.8307 | Validation F1 Micro : 0.6980
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2161e-02 | Validation Loss : 2.4488e-02
Training CC : 0.9246 | Validation CC : 0.9149
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2447e-01 | Validation Loss : 6.1763e-01
Training F1 Macro: 0.7757 | Validation F1 Macro : 0.7052
Training F1 Micro: 0.7794 | Validation F1 Micro : 0.7080
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1924e-02 | Validation Loss : 2.4563e-02
Training CC : 0.9249 | Validation CC : 0.9146
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5354e-01 | Validation Loss : 6.4070e-01
Training F1 Macro: 0.7298 | Validation F1 Macro : 0.7110
Training F1 Micro: 0.7276 | Validation F1 Micro : 0.7160
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1999e-02 | Validation Loss : 2.4658e-02
Training CC : 0.9246 | Validation CC : 0.9142
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6262e-01 | Validation Loss : 6.5416e-01
Training F1 Macro: 0.8113 | Validation F1 Macro : 0.7059
Training F1 Micro: 0.8004 | Validation F1 Micro : 0.7120
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1984e-02 | Validation Loss : 2.4769e-02
Training CC : 0.9244 | Validation CC : 0.9138
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7742e-01 | Validation Loss : 6.9879e-01
Training F1 Macro: 0.8009 | Validation F1 Macro : 0.6706
Training F1 Micro: 0.8055 | Validation F1 Micro : 0.6700
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2753e-02 | Validation Loss : 2.4884e-02
Training CC : 0.9230 | Validation CC : 0.9134
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5456e-01 | Validation Loss : 6.7225e-01
Training F1 Macro: 0.8287 | Validation F1 Macro : 0.6901
Training F1 Micro: 0.8272 | Validation F1 Micro : 0.6880
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2172e-02 | Validation Loss : 2.4586e-02
Training CC : 0.9241 | Validation CC : 0.9143
** Classification Losses **
Training Loss : 3.4565e-01 | Validation Loss : 6.4624e-01
Training F1 Macro: 0.8127 | Validation F1 Macro : 0.7174
Training F1 Micro: 0.8075 | Validation F1 Micro : 0.7200
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1958e-02 | Validation Loss : 2.4498e-02
Training CC : 0.9249 | Validation CC : 0.9146
** Classification Losses **
Training Loss : 3.8851e-01 | Validation Loss : 6.5626e-01
Training F1 Macro: 0.8007 | Validation F1 Macro : 0.7077
Training F1 Micro: 0.8019 | Validation F1 Micro : 0.7120
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1701e-02 | Validation Loss : 2.4371e-02
Training CC : 0.9256 | Validation CC : 0.9152
** Classification Losses **
Training Loss : 3.9178e-01 | Validation Loss : 6.6507e-01
Training F1 Macro: 0.8186 | Validation F1 Macro : 0.7108
Training F1 Micro: 0.8150 | Validation F1 Micro : 0.7120
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1850e-02 | Validation Loss : 2.4370e-02
Training CC : 0.9255 | Validation CC : 0.9152
** Classification Losses **
Training Loss : 4.0946e-01 | Validation Loss : 6.4538e-01
Training F1 Macro: 0.7990 | Validation F1 Macro : 0.7278
Training F1 Micro: 0.7992 | Validation F1 Micro : 0.7260
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1646e-02 | Validation Loss : 2.4268e-02
Training CC : 0.9260 | Validation CC : 0.9154
** Classification Losses **
Training Loss : 4.0429e-01 | Validation Loss : 6.6556e-01
Training F1 Macro: 0.7826 | Validation F1 Macro : 0.6930
Training F1 Micro: 0.7848 | Validation F1 Micro : 0.6920
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1518e-02 | Validation Loss : 2.4314e-02
Training CC : 0.9265 | Validation CC : 0.9152
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2526e-01 | Validation Loss : 5.8524e-01
Training F1 Macro: 0.8107 | Validation F1 Macro : 0.7293
Training F1 Micro: 0.8049 | Validation F1 Micro : 0.7320
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1626e-02 | Validation Loss : 2.4409e-02
Training CC : 0.9262 | Validation CC : 0.9149
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1281e-01 | Validation Loss : 6.7137e-01
Training F1 Macro: 0.8212 | Validation F1 Macro : 0.6721
Training F1 Micro: 0.8226 | Validation F1 Micro : 0.6740
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1502e-02 | Validation Loss : 2.4501e-02
Training CC : 0.9262 | Validation CC : 0.9145
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1100e-01 | Validation Loss : 6.9133e-01
Training F1 Macro: 0.7770 | Validation F1 Macro : 0.6803
Training F1 Micro: 0.7794 | Validation F1 Micro : 0.6860
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1715e-02 | Validation Loss : 2.4596e-02
Training CC : 0.9257 | Validation CC : 0.9142
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0344e-01 | Validation Loss : 6.4065e-01
Training F1 Macro: 0.7877 | Validation F1 Macro : 0.7075
Training F1 Micro: 0.7882 | Validation F1 Micro : 0.7120
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2341e-02 | Validation Loss : 2.4697e-02
Training CC : 0.9244 | Validation CC : 0.9138
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2183e-01 | Validation Loss : 7.0902e-01
Training F1 Macro: 0.7619 | Validation F1 Macro : 0.6701
Training F1 Micro: 0.7521 | Validation F1 Micro : 0.6700
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2492e-02 | Validation Loss : 2.4421e-02
Training CC : 0.9243 | Validation CC : 0.9151
** Classification Losses **
Training Loss : 3.4555e-01 | Validation Loss : 6.4658e-01
Training F1 Macro: 0.8362 | Validation F1 Macro : 0.6925
Training F1 Micro: 0.8309 | Validation F1 Micro : 0.6920
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1581e-02 | Validation Loss : 2.4403e-02
Training CC : 0.9263 | Validation CC : 0.9151
** Classification Losses **
Training Loss : 3.8282e-01 | Validation Loss : 6.5475e-01
Training F1 Macro: 0.7844 | Validation F1 Macro : 0.6806
Training F1 Micro: 0.7840 | Validation F1 Micro : 0.6820
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1475e-02 | Validation Loss : 2.4362e-02
Training CC : 0.9267 | Validation CC : 0.9155
** Classification Losses **
Training Loss : 4.9652e-01 | Validation Loss : 6.4670e-01
Training F1 Macro: 0.7506 | Validation F1 Macro : 0.7009
Training F1 Micro: 0.7453 | Validation F1 Micro : 0.6980
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1327e-02 | Validation Loss : 2.4165e-02
Training CC : 0.9272 | Validation CC : 0.9159
** Classification Losses **
Training Loss : 3.9819e-01 | Validation Loss : 6.8061e-01
Training F1 Macro: 0.8224 | Validation F1 Macro : 0.6687
Training F1 Micro: 0.8229 | Validation F1 Micro : 0.6700
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1515e-02 | Validation Loss : 2.4174e-02
Training CC : 0.9271 | Validation CC : 0.9157
** Classification Losses **
Training Loss : 3.7555e-01 | Validation Loss : 6.8509e-01
Training F1 Macro: 0.7881 | Validation F1 Macro : 0.6746
Training F1 Micro: 0.7823 | Validation F1 Micro : 0.6740
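The log above (and the second run that follows) alternates which objective carries the `<---- Now Optimizing` marker: odd epochs step the autoencoding loss, even epochs the classification loss, with five mini epochs per phase. Below is a minimal pure-Python sketch of that schedule only; the function name `train_alternating` and the odd/even toggling rule are assumptions inferred from the log pattern, not the pyMSDtorch API.

```python
# Sketch (NOT the pyMSDtorch API) of the alternating "mini epoch" schedule
# visible in the log: each epoch optimizes one of the two objectives, and
# the optimized objective toggles between epochs.

def train_alternating(epochs=4, mini_epochs=5):
    """Build the (epoch, mini_epoch, objective) schedule matching the log's
    '<---- Now Optimizing' marker. The toggling rule is hypothetical,
    inferred from the printed output."""
    schedule = []
    for epoch in range(1, epochs + 1):
        # Odd epochs optimize autoencoding, even epochs classification,
        # matching the pattern in the log above.
        target = "autoencoding" if epoch % 2 == 1 else "classification"
        for mini in range(1, mini_epochs + 1):
            schedule.append((epoch, mini, target))
            # ... one pass over the training data would go here, stepping
            # only the loss named in `target` while the other loss is
            # merely reported ...
    return schedule

sched = train_alternating()
```

With the defaults this yields 4 epochs x 5 mini epochs = 20 entries, starting with `(1, 1, "autoencoding")` and switching to classification at epoch 2, mirroring the printed log.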
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.7965e-01 | Validation Loss : 1.1363e-01
Training CC : 0.4363 | Validation CC : 0.6745
** Classification Losses **
Training Loss : 1.4386e+00 | Validation Loss : 1.3973e+00
Training F1 Macro: 0.2192 | Validation F1 Macro : 0.2080
Training F1 Micro: 0.2644 | Validation F1 Micro : 0.2740
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.3265e-02 | Validation Loss : 6.9356e-02
Training CC : 0.7337 | Validation CC : 0.7797
** Classification Losses **
Training Loss : 1.4639e+00 | Validation Loss : 1.4196e+00
Training F1 Macro: 0.1890 | Validation F1 Macro : 0.1770
Training F1 Micro: 0.2401 | Validation F1 Micro : 0.2440
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.0903e-02 | Validation Loss : 5.2399e-02
Training CC : 0.8010 | Validation CC : 0.8165
** Classification Losses **
Training Loss : 1.4696e+00 | Validation Loss : 1.4137e+00
Training F1 Macro: 0.1503 | Validation F1 Macro : 0.1864
Training F1 Micro: 0.1901 | Validation F1 Micro : 0.2560
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.8583e-02 | Validation Loss : 4.6616e-02
Training CC : 0.8314 | Validation CC : 0.8352
** Classification Losses **
Training Loss : 1.4746e+00 | Validation Loss : 1.4064e+00
Training F1 Macro: 0.1385 | Validation F1 Macro : 0.1639
Training F1 Micro: 0.1954 | Validation F1 Micro : 0.2200
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.4609e-02 | Validation Loss : 4.3745e-02
Training CC : 0.8444 | Validation CC : 0.8458
** Classification Losses **
Training Loss : 1.4440e+00 | Validation Loss : 1.4361e+00
Training F1 Macro: 0.1324 | Validation F1 Macro : 0.1614
Training F1 Micro: 0.1916 | Validation F1 Micro : 0.2200
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.3786e-02 | Validation Loss : 4.7973e-02
Training CC : 0.8471 | Validation CC : 0.8309
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.2519e+00 | Validation Loss : 1.0993e+00
Training F1 Macro: 0.4057 | Validation F1 Macro : 0.5831
Training F1 Micro: 0.4160 | Validation F1 Micro : 0.5820
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.9949e-02 | Validation Loss : 5.4076e-02
Training CC : 0.8246 | Validation CC : 0.8080
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.2865e-01 | Validation Loss : 9.4067e-01
Training F1 Macro: 0.7103 | Validation F1 Macro : 0.6689
Training F1 Micro: 0.7154 | Validation F1 Micro : 0.6660
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.5199e-02 | Validation Loss : 5.6907e-02
Training CC : 0.8056 | Validation CC : 0.7971
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.9320e-01 | Validation Loss : 8.3732e-01
Training F1 Macro: 0.7225 | Validation F1 Macro : 0.7118
Training F1 Micro: 0.7284 | Validation F1 Micro : 0.7000
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.7202e-02 | Validation Loss : 5.8264e-02
Training CC : 0.7976 | Validation CC : 0.7921
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.5507e-01 | Validation Loss : 7.4531e-01
Training F1 Macro: 0.8012 | Validation F1 Macro : 0.7281
Training F1 Micro: 0.7893 | Validation F1 Micro : 0.7200
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 5.8981e-02 | Validation Loss : 5.9427e-02
Training CC : 0.7919 | Validation CC : 0.7874
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.3305e-01 | Validation Loss : 6.8489e-01
Training F1 Macro: 0.8232 | Validation F1 Macro : 0.7380
Training F1 Micro: 0.8174 | Validation F1 Micro : 0.7280
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.0018e-02 | Validation Loss : 4.5314e-02
Training CC : 0.8234 | Validation CC : 0.8372
** Classification Losses **
Training Loss : 4.6845e-01 | Validation Loss : 7.1466e-01
Training F1 Macro: 0.8380 | Validation F1 Macro : 0.7383
Training F1 Micro: 0.8306 | Validation F1 Micro : 0.7320
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.2784e-02 | Validation Loss : 4.0184e-02
Training CC : 0.8484 | Validation CC : 0.8551
** Classification Losses **
Training Loss : 5.3703e-01 | Validation Loss : 7.3302e-01
Training F1 Macro: 0.7923 | Validation F1 Macro : 0.7331
Training F1 Micro: 0.7970 | Validation F1 Micro : 0.7300
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9065e-02 | Validation Loss : 3.8580e-02
Training CC : 0.8621 | Validation CC : 0.8613
** Classification Losses **
Training Loss : 6.1098e-01 | Validation Loss : 7.5393e-01
Training F1 Macro: 0.7694 | Validation F1 Macro : 0.7112
Training F1 Micro: 0.7653 | Validation F1 Micro : 0.7060
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6981e-02 | Validation Loss : 3.6856e-02
Training CC : 0.8692 | Validation CC : 0.8680
** Classification Losses **
Training Loss : 5.1134e-01 | Validation Loss : 7.6640e-01
Training F1 Macro: 0.8573 | Validation F1 Macro : 0.6951
Training F1 Micro: 0.8581 | Validation F1 Micro : 0.6880
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5591e-02 | Validation Loss : 3.5720e-02
Training CC : 0.8749 | Validation CC : 0.8725
** Classification Losses **
Training Loss : 5.1509e-01 | Validation Loss : 7.3473e-01
Training F1 Macro: 0.8549 | Validation F1 Macro : 0.7216
Training F1 Micro: 0.8480 | Validation F1 Micro : 0.7160
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4625e-02 | Validation Loss : 3.5673e-02
Training CC : 0.8781 | Validation CC : 0.8726
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.4331e-01 | Validation Loss : 6.6547e-01
Training F1 Macro: 0.8309 | Validation F1 Macro : 0.7449
Training F1 Micro: 0.8238 | Validation F1 Micro : 0.7380
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4832e-02 | Validation Loss : 3.6126e-02
Training CC : 0.8773 | Validation CC : 0.8708
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.0105e-01 | Validation Loss : 6.3480e-01
Training F1 Macro: 0.7943 | Validation F1 Macro : 0.7551
Training F1 Micro: 0.7912 | Validation F1 Micro : 0.7440
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5499e-02 | Validation Loss : 3.6848e-02
Training CC : 0.8748 | Validation CC : 0.8680
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3304e-01 | Validation Loss : 6.1625e-01
Training F1 Macro: 0.8270 | Validation F1 Macro : 0.7297
Training F1 Micro: 0.8201 | Validation F1 Micro : 0.7260
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6323e-02 | Validation Loss : 3.7532e-02
Training CC : 0.8717 | Validation CC : 0.8654
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1733e-01 | Validation Loss : 5.8785e-01
Training F1 Macro: 0.8094 | Validation F1 Macro : 0.7533
Training F1 Micro: 0.8135 | Validation F1 Micro : 0.7520
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.7097e-02 | Validation Loss : 3.8087e-02
Training CC : 0.8691 | Validation CC : 0.8632
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7813e-01 | Validation Loss : 5.6507e-01
Training F1 Macro: 0.8249 | Validation F1 Macro : 0.7611
Training F1 Micro: 0.8187 | Validation F1 Micro : 0.7540
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6203e-02 | Validation Loss : 3.5683e-02
Training CC : 0.8726 | Validation CC : 0.8730
** Classification Losses **
Training Loss : 4.5024e-01 | Validation Loss : 5.3168e-01
Training F1 Macro: 0.7992 | Validation F1 Macro : 0.7858
Training F1 Micro: 0.7954 | Validation F1 Micro : 0.7820
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4421e-02 | Validation Loss : 3.4381e-02
Training CC : 0.8793 | Validation CC : 0.8776
** Classification Losses **
Training Loss : 4.5795e-01 | Validation Loss : 5.9211e-01
Training F1 Macro: 0.7866 | Validation F1 Macro : 0.7589
Training F1 Micro: 0.8097 | Validation F1 Micro : 0.7520
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3166e-02 | Validation Loss : 3.3390e-02
Training CC : 0.8841 | Validation CC : 0.8813
** Classification Losses **
Training Loss : 3.7705e-01 | Validation Loss : 6.0948e-01
Training F1 Macro: 0.8254 | Validation F1 Macro : 0.7520
Training F1 Micro: 0.8246 | Validation F1 Micro : 0.7440
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2247e-02 | Validation Loss : 3.2530e-02
Training CC : 0.8874 | Validation CC : 0.8846
** Classification Losses **
Training Loss : 4.1748e-01 | Validation Loss : 5.6087e-01
Training F1 Macro: 0.8523 | Validation F1 Macro : 0.7770
Training F1 Micro: 0.8456 | Validation F1 Micro : 0.7760
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1531e-02 | Validation Loss : 3.1812e-02
Training CC : 0.8903 | Validation CC : 0.8873
** Classification Losses **
Training Loss : 4.2806e-01 | Validation Loss : 5.8793e-01
Training F1 Macro: 0.8196 | Validation F1 Macro : 0.7545
Training F1 Micro: 0.8094 | Validation F1 Micro : 0.7500
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1277e-02 | Validation Loss : 3.1953e-02
Training CC : 0.8916 | Validation CC : 0.8868
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4535e-01 | Validation Loss : 5.5254e-01
Training F1 Macro: 0.7811 | Validation F1 Macro : 0.7603
Training F1 Micro: 0.7851 | Validation F1 Micro : 0.7560
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1339e-02 | Validation Loss : 3.2282e-02
Training CC : 0.8912 | Validation CC : 0.8855
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2788e-01 | Validation Loss : 5.5282e-01
Training F1 Macro: 0.7975 | Validation F1 Macro : 0.7718
Training F1 Micro: 0.7965 | Validation F1 Micro : 0.7620
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1392e-02 | Validation Loss : 3.2708e-02
Training CC : 0.8902 | Validation CC : 0.8840
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0559e-01 | Validation Loss : 5.2389e-01
Training F1 Macro: 0.8347 | Validation F1 Macro : 0.7849
Training F1 Micro: 0.8217 | Validation F1 Micro : 0.7800
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2003e-02 | Validation Loss : 3.3006e-02
Training CC : 0.8884 | Validation CC : 0.8828
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7851e-01 | Validation Loss : 5.5884e-01
Training F1 Macro: 0.8358 | Validation F1 Macro : 0.7388
Training F1 Micro: 0.8362 | Validation F1 Micro : 0.7280
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2112e-02 | Validation Loss : 3.3201e-02
Training CC : 0.8878 | Validation CC : 0.8821
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2582e-01 | Validation Loss : 5.4442e-01
Training F1 Macro: 0.7955 | Validation F1 Macro : 0.7570
Training F1 Micro: 0.8006 | Validation F1 Micro : 0.7520
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1693e-02 | Validation Loss : 3.1962e-02
Training CC : 0.8897 | Validation CC : 0.8867
** Classification Losses **
Training Loss : 3.9836e-01 | Validation Loss : 5.6180e-01
Training F1 Macro: 0.8091 | Validation F1 Macro : 0.7355
Training F1 Micro: 0.8172 | Validation F1 Micro : 0.7340
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0727e-02 | Validation Loss : 3.1041e-02
Training CC : 0.8935 | Validation CC : 0.8903
** Classification Losses **
Training Loss : 4.3440e-01 | Validation Loss : 5.3125e-01
Training F1 Macro: 0.7929 | Validation F1 Macro : 0.7615
Training F1 Micro: 0.7930 | Validation F1 Micro : 0.7500
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9662e-02 | Validation Loss : 3.0327e-02
Training CC : 0.8968 | Validation CC : 0.8928
** Classification Losses **
Training Loss : 3.6459e-01 | Validation Loss : 5.5799e-01
Training F1 Macro: 0.8188 | Validation F1 Macro : 0.7534
Training F1 Micro: 0.8269 | Validation F1 Micro : 0.7440
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8934e-02 | Validation Loss : 2.9776e-02
Training CC : 0.8993 | Validation CC : 0.8951
** Classification Losses **
Training Loss : 3.7982e-01 | Validation Loss : 5.4811e-01
Training F1 Macro: 0.8486 | Validation F1 Macro : 0.7564
Training F1 Micro: 0.8434 | Validation F1 Micro : 0.7560
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8735e-02 | Validation Loss : 2.9181e-02
Training CC : 0.9007 | Validation CC : 0.8971
** Classification Losses **
Training Loss : 3.4800e-01 | Validation Loss : 6.0419e-01
Training F1 Macro: 0.8123 | Validation F1 Macro : 0.7318
Training F1 Micro: 0.8181 | Validation F1 Micro : 0.7240
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8010e-02 | Validation Loss : 2.9244e-02
Training CC : 0.9026 | Validation CC : 0.8969
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8471e-01 | Validation Loss : 5.9854e-01
Training F1 Macro: 0.8449 | Validation F1 Macro : 0.7364
Training F1 Micro: 0.8343 | Validation F1 Micro : 0.7260
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8354e-02 | Validation Loss : 2.9411e-02
Training CC : 0.9019 | Validation CC : 0.8963
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1756e-01 | Validation Loss : 5.7336e-01
Training F1 Macro: 0.8368 | Validation F1 Macro : 0.7381
Training F1 Micro: 0.8305 | Validation F1 Micro : 0.7380
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8461e-02 | Validation Loss : 2.9683e-02
Training CC : 0.9014 | Validation CC : 0.8953
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9948e-01 | Validation Loss : 5.2069e-01
Training F1 Macro: 0.8241 | Validation F1 Macro : 0.7758
Training F1 Micro: 0.8156 | Validation F1 Micro : 0.7660
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9214e-02 | Validation Loss : 2.9884e-02
Training CC : 0.8995 | Validation CC : 0.8946
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0036e-01 | Validation Loss : 5.0562e-01
Training F1 Macro: 0.8007 | Validation F1 Macro : 0.7717
Training F1 Micro: 0.7906 | Validation F1 Micro : 0.7620
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8715e-02 | Validation Loss : 2.9937e-02
Training CC : 0.9002 | Validation CC : 0.8944
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5862e-01 | Validation Loss : 5.0157e-01
Training F1 Macro: 0.8002 | Validation F1 Macro : 0.7681
Training F1 Micro: 0.7946 | Validation F1 Micro : 0.7660
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8398e-02 | Validation Loss : 2.9238e-02
Training CC : 0.9014 | Validation CC : 0.8972
** Classification Losses **
Training Loss : 4.5761e-01 | Validation Loss : 5.2705e-01
Training F1 Macro: 0.7409 | Validation F1 Macro : 0.7553
Training F1 Micro: 0.7535 | Validation F1 Micro : 0.7560
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7698e-02 | Validation Loss : 2.8680e-02
Training CC : 0.9039 | Validation CC : 0.8992
** Classification Losses **
Training Loss : 3.5938e-01 | Validation Loss : 5.0800e-01
Training F1 Macro: 0.8321 | Validation F1 Macro : 0.7591
Training F1 Micro: 0.8306 | Validation F1 Micro : 0.7580
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7246e-02 | Validation Loss : 2.8176e-02
Training CC : 0.9056 | Validation CC : 0.9010
** Classification Losses **
Training Loss : 4.3236e-01 | Validation Loss : 5.8342e-01
Training F1 Macro: 0.7617 | Validation F1 Macro : 0.7224
Training F1 Micro: 0.7706 | Validation F1 Micro : 0.7200
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6788e-02 | Validation Loss : 2.7746e-02
Training CC : 0.9074 | Validation CC : 0.9025
** Classification Losses **
Training Loss : 3.7665e-01 | Validation Loss : 5.7061e-01
Training F1 Macro: 0.8015 | Validation F1 Macro : 0.7411
Training F1 Micro: 0.7956 | Validation F1 Micro : 0.7380
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6554e-02 | Validation Loss : 2.7388e-02
Training CC : 0.9085 | Validation CC : 0.9039
** Classification Losses **
Training Loss : 4.4656e-01 | Validation Loss : 5.4581e-01
Training F1 Macro: 0.7434 | Validation F1 Macro : 0.7428
Training F1 Micro: 0.7507 | Validation F1 Micro : 0.7440
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6141e-02 | Validation Loss : 2.7447e-02
Training CC : 0.9097 | Validation CC : 0.9036
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6986e-01 | Validation Loss : 5.9786e-01
Training F1 Macro: 0.8039 | Validation F1 Macro : 0.7185
Training F1 Micro: 0.8126 | Validation F1 Micro : 0.7180
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6302e-02 | Validation Loss : 2.7569e-02
Training CC : 0.9093 | Validation CC : 0.9032
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9084e-01 | Validation Loss : 5.7423e-01
Training F1 Macro: 0.8162 | Validation F1 Macro : 0.7234
Training F1 Micro: 0.8156 | Validation F1 Micro : 0.7220
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6235e-02 | Validation Loss : 2.7746e-02
Training CC : 0.9092 | Validation CC : 0.9025
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3339e-01 | Validation Loss : 5.8953e-01
Training F1 Macro: 0.8077 | Validation F1 Macro : 0.7065
Training F1 Micro: 0.8024 | Validation F1 Micro : 0.7060
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6418e-02 | Validation Loss : 2.7905e-02
Training CC : 0.9086 | Validation CC : 0.9020
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.8879e-01 | Validation Loss : 5.5664e-01
Training F1 Macro: 0.7425 | Validation F1 Macro : 0.7423
Training F1 Micro: 0.7467 | Validation F1 Micro : 0.7380
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6855e-02 | Validation Loss : 2.8002e-02
Training CC : 0.9076 | Validation CC : 0.9016
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5700e-01 | Validation Loss : 5.4403e-01
Training F1 Macro: 0.8528 | Validation F1 Macro : 0.7525
Training F1 Micro: 0.8494 | Validation F1 Micro : 0.7440
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6308e-02 | Validation Loss : 2.7460e-02
Training CC : 0.9090 | Validation CC : 0.9036
** Classification Losses **
Training Loss : 3.9467e-01 | Validation Loss : 5.5483e-01
Training F1 Macro: 0.8179 | Validation F1 Macro : 0.7425
Training F1 Micro: 0.8064 | Validation F1 Micro : 0.7340
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6068e-02 | Validation Loss : 2.7100e-02
Training CC : 0.9103 | Validation CC : 0.9051
** Classification Losses **
Training Loss : 3.5998e-01 | Validation Loss : 5.2082e-01
Training F1 Macro: 0.8287 | Validation F1 Macro : 0.7460
Training F1 Micro: 0.8315 | Validation F1 Micro : 0.7360
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5490e-02 | Validation Loss : 2.6734e-02
Training CC : 0.9121 | Validation CC : 0.9064
** Classification Losses **
Training Loss : 3.3814e-01 | Validation Loss : 5.5643e-01
Training F1 Macro: 0.8244 | Validation F1 Macro : 0.7537
Training F1 Micro: 0.8241 | Validation F1 Micro : 0.7480
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5488e-02 | Validation Loss : 2.6432e-02
Training CC : 0.9127 | Validation CC : 0.9075
** Classification Losses **
Training Loss : 3.7491e-01 | Validation Loss : 5.9641e-01
Training F1 Macro: 0.8449 | Validation F1 Macro : 0.7145
Training F1 Micro: 0.8322 | Validation F1 Micro : 0.7040
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4863e-02 | Validation Loss : 2.6196e-02
Training CC : 0.9144 | Validation CC : 0.9084
** Classification Losses **
Training Loss : 4.4682e-01 | Validation Loss : 5.9405e-01
Training F1 Macro: 0.8037 | Validation F1 Macro : 0.7443
Training F1 Micro: 0.7999 | Validation F1 Micro : 0.7320
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4899e-02 | Validation Loss : 2.6256e-02
Training CC : 0.9146 | Validation CC : 0.9082
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7926e-01 | Validation Loss : 5.3080e-01
Training F1 Macro: 0.8063 | Validation F1 Macro : 0.7676
Training F1 Micro: 0.7981 | Validation F1 Micro : 0.7620
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4892e-02 | Validation Loss : 2.6373e-02
Training CC : 0.9145 | Validation CC : 0.9078
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0225e-01 | Validation Loss : 5.2423e-01
Training F1 Macro: 0.7974 | Validation F1 Macro : 0.7764
Training F1 Micro: 0.7941 | Validation F1 Micro : 0.7700
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4786e-02 | Validation Loss : 2.6532e-02
Training CC : 0.9145 | Validation CC : 0.9072
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7131e-01 | Validation Loss : 5.6678e-01
Training F1 Macro: 0.8332 | Validation F1 Macro : 0.7363
Training F1 Micro: 0.8212 | Validation F1 Micro : 0.7300
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5273e-02 | Validation Loss : 2.6707e-02
Training CC : 0.9133 | Validation CC : 0.9065
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1159e-01 | Validation Loss : 5.7730e-01
Training F1 Macro: 0.8509 | Validation F1 Macro : 0.7311
Training F1 Micro: 0.8413 | Validation F1 Micro : 0.7200
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5031e-02 | Validation Loss : 2.6840e-02
Training CC : 0.9136 | Validation CC : 0.9060
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6141e-01 | Validation Loss : 5.3144e-01
Training F1 Macro: 0.8230 | Validation F1 Macro : 0.7559
Training F1 Micro: 0.8134 | Validation F1 Micro : 0.7460
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5049e-02 | Validation Loss : 2.6312e-02
Training CC : 0.9139 | Validation CC : 0.9081
** Classification Losses **
Training Loss : 2.9199e-01 | Validation Loss : 5.3487e-01
Training F1 Macro: 0.8766 | Validation F1 Macro : 0.7653
Training F1 Micro: 0.8753 | Validation F1 Micro : 0.7520
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4924e-02 | Validation Loss : 2.6420e-02
Training CC : 0.9148 | Validation CC : 0.9077
** Classification Losses **
Training Loss : 4.1972e-01 | Validation Loss : 5.7379e-01
Training F1 Macro: 0.7729 | Validation F1 Macro : 0.7387
Training F1 Micro: 0.7662 | Validation F1 Micro : 0.7260
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4393e-02 | Validation Loss : 2.5803e-02
Training CC : 0.9160 | Validation CC : 0.9098
** Classification Losses **
Training Loss : 4.4480e-01 | Validation Loss : 5.2951e-01
Training F1 Macro: 0.8005 | Validation F1 Macro : 0.7453
Training F1 Micro: 0.7951 | Validation F1 Micro : 0.7400
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4270e-02 | Validation Loss : 2.5704e-02
Training CC : 0.9168 | Validation CC : 0.9103
** Classification Losses **
Training Loss : 3.2433e-01 | Validation Loss : 5.7021e-01
Training F1 Macro: 0.8411 | Validation F1 Macro : 0.7469
Training F1 Micro: 0.8441 | Validation F1 Micro : 0.7340
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3771e-02 | Validation Loss : 2.5484e-02
Training CC : 0.9181 | Validation CC : 0.9111
** Classification Losses **
Training Loss : 4.3248e-01 | Validation Loss : 6.1829e-01
Training F1 Macro: 0.7951 | Validation F1 Macro : 0.7141
Training F1 Micro: 0.7937 | Validation F1 Micro : 0.7040
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3642e-02 | Validation Loss : 2.5550e-02
Training CC : 0.9187 | Validation CC : 0.9108
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1233e-01 | Validation Loss : 5.6008e-01
Training F1 Macro: 0.7988 | Validation F1 Macro : 0.7542
Training F1 Micro: 0.8032 | Validation F1 Micro : 0.7480
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3770e-02 | Validation Loss : 2.5622e-02
Training CC : 0.9184 | Validation CC : 0.9106
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.6307e-01 | Validation Loss : 5.5549e-01
Training F1 Macro: 0.7754 | Validation F1 Macro : 0.7492
Training F1 Micro: 0.7729 | Validation F1 Micro : 0.7480
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3734e-02 | Validation Loss : 2.5696e-02
Training CC : 0.9184 | Validation CC : 0.9103
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5631e-01 | Validation Loss : 6.0641e-01
Training F1 Macro: 0.8362 | Validation F1 Macro : 0.7339
Training F1 Micro: 0.8331 | Validation F1 Micro : 0.7300
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4105e-02 | Validation Loss : 2.5790e-02
Training CC : 0.9176 | Validation CC : 0.9099
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7952e-01 | Validation Loss : 5.9527e-01
Training F1 Macro: 0.8380 | Validation F1 Macro : 0.7401
Training F1 Micro: 0.8335 | Validation F1 Micro : 0.7360
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4120e-02 | Validation Loss : 2.5882e-02
Training CC : 0.9174 | Validation CC : 0.9096
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0289e-01 | Validation Loss : 5.8908e-01
Training F1 Macro: 0.8087 | Validation F1 Macro : 0.7335
Training F1 Micro: 0.8013 | Validation F1 Micro : 0.7280
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4106e-02 | Validation Loss : 2.5505e-02
Training CC : 0.9175 | Validation CC : 0.9111
** Classification Losses **
Training Loss : 3.4969e-01 | Validation Loss : 5.7728e-01
Training F1 Macro: 0.8140 | Validation F1 Macro : 0.7386
Training F1 Micro: 0.8065 | Validation F1 Micro : 0.7280
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3497e-02 | Validation Loss : 2.5356e-02
Training CC : 0.9192 | Validation CC : 0.9116
** Classification Losses **
Training Loss : 4.1556e-01 | Validation Loss : 5.3647e-01
Training F1 Macro: 0.8023 | Validation F1 Macro : 0.7621
Training F1 Micro: 0.7928 | Validation F1 Micro : 0.7540
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4153e-02 | Validation Loss : 2.5202e-02
Training CC : 0.9185 | Validation CC : 0.9122
** Classification Losses **
Training Loss : 4.1088e-01 | Validation Loss : 5.8314e-01
Training F1 Macro: 0.7949 | Validation F1 Macro : 0.7154
Training F1 Micro: 0.7945 | Validation F1 Micro : 0.7060
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3310e-02 | Validation Loss : 2.4911e-02
Training CC : 0.9203 | Validation CC : 0.9130
** Classification Losses **
Training Loss : 3.9852e-01 | Validation Loss : 5.6851e-01
Training F1 Macro: 0.8083 | Validation F1 Macro : 0.7422
Training F1 Micro: 0.8039 | Validation F1 Micro : 0.7340
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3024e-02 | Validation Loss : 2.4861e-02
Training CC : 0.9211 | Validation CC : 0.9134
** Classification Losses **
Training Loss : 3.7699e-01 | Validation Loss : 5.9487e-01
Training F1 Macro: 0.7862 | Validation F1 Macro : 0.7403
Training F1 Micro: 0.7904 | Validation F1 Micro : 0.7320
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3126e-02 | Validation Loss : 2.4916e-02
Training CC : 0.9212 | Validation CC : 0.9132
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4168e-01 | Validation Loss : 5.7614e-01
Training F1 Macro: 0.8335 | Validation F1 Macro : 0.7326
Training F1 Micro: 0.8234 | Validation F1 Micro : 0.7240
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3048e-02 | Validation Loss : 2.4997e-02
Training CC : 0.9212 | Validation CC : 0.9129
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8551e-01 | Validation Loss : 6.0139e-01
Training F1 Macro: 0.8174 | Validation F1 Macro : 0.7413
Training F1 Micro: 0.8247 | Validation F1 Micro : 0.7340
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3018e-02 | Validation Loss : 2.5100e-02
Training CC : 0.9211 | Validation CC : 0.9125
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3081e-01 | Validation Loss : 5.8608e-01
Training F1 Macro: 0.7848 | Validation F1 Macro : 0.7140
Training F1 Micro: 0.7735 | Validation F1 Micro : 0.7080
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2907e-02 | Validation Loss : 2.5208e-02
Training CC : 0.9212 | Validation CC : 0.9121
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5351e-01 | Validation Loss : 6.1109e-01
Training F1 Macro: 0.7657 | Validation F1 Macro : 0.7284
Training F1 Micro: 0.7599 | Validation F1 Micro : 0.7200
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3122e-02 | Validation Loss : 2.5308e-02
Training CC : 0.9207 | Validation CC : 0.9118
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6361e-01 | Validation Loss : 6.0386e-01
Training F1 Macro: 0.8241 | Validation F1 Macro : 0.7256
Training F1 Micro: 0.8132 | Validation F1 Micro : 0.7140
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3056e-02 | Validation Loss : 2.4957e-02
Training CC : 0.9210 | Validation CC : 0.9130
** Classification Losses **
Training Loss : 4.2170e-01 | Validation Loss : 5.6179e-01
Training F1 Macro: 0.7956 | Validation F1 Macro : 0.7425
Training F1 Micro: 0.8047 | Validation F1 Micro : 0.7360
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3093e-02 | Validation Loss : 2.5146e-02
Training CC : 0.9213 | Validation CC : 0.9127
** Classification Losses **
Training Loss : 3.7884e-01 | Validation Loss : 5.6867e-01
Training F1 Macro: 0.8126 | Validation F1 Macro : 0.7540
Training F1 Micro: 0.8067 | Validation F1 Micro : 0.7460
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2546e-02 | Validation Loss : 2.4686e-02
Training CC : 0.9224 | Validation CC : 0.9138
** Classification Losses **
Training Loss : 4.2222e-01 | Validation Loss : 6.2159e-01
Training F1 Macro: 0.7825 | Validation F1 Macro : 0.7118
Training F1 Micro: 0.7797 | Validation F1 Micro : 0.6980
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2612e-02 | Validation Loss : 2.4582e-02
Training CC : 0.9227 | Validation CC : 0.9143
** Classification Losses **
Training Loss : 4.2889e-01 | Validation Loss : 5.7859e-01
Training F1 Macro: 0.7740 | Validation F1 Macro : 0.7158
Training F1 Micro: 0.7697 | Validation F1 Micro : 0.7060
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2528e-02 | Validation Loss : 2.4463e-02
Training CC : 0.9232 | Validation CC : 0.9148
** Classification Losses **
Training Loss : 3.8329e-01 | Validation Loss : 6.1039e-01
Training F1 Macro: 0.7925 | Validation F1 Macro : 0.7106
Training F1 Micro: 0.7894 | Validation F1 Micro : 0.7020
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2072e-02 | Validation Loss : 2.4514e-02
Training CC : 0.9241 | Validation CC : 0.9146
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7071e-01 | Validation Loss : 6.0722e-01
Training F1 Macro: 0.8271 | Validation F1 Macro : 0.7310
Training F1 Micro: 0.8188 | Validation F1 Micro : 0.7220
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2204e-02 | Validation Loss : 2.4625e-02
Training CC : 0.9238 | Validation CC : 0.9142
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2064e-01 | Validation Loss : 6.2107e-01
Training F1 Macro: 0.7986 | Validation F1 Macro : 0.7102
Training F1 Micro: 0.7944 | Validation F1 Micro : 0.7040
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2392e-02 | Validation Loss : 2.4719e-02
Training CC : 0.9234 | Validation CC : 0.9139
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4263e-01 | Validation Loss : 5.6745e-01
Training F1 Macro: 0.8325 | Validation F1 Macro : 0.7547
Training F1 Micro: 0.8257 | Validation F1 Micro : 0.7460
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2374e-02 | Validation Loss : 2.4803e-02
Training CC : 0.9233 | Validation CC : 0.9136
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3717e-01 | Validation Loss : 5.7305e-01
Training F1 Macro: 0.8534 | Validation F1 Macro : 0.7466
Training F1 Micro: 0.8576 | Validation F1 Micro : 0.7420
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2371e-02 | Validation Loss : 2.4908e-02
Training CC : 0.9232 | Validation CC : 0.9132
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6001e-01 | Validation Loss : 5.8773e-01
Training F1 Macro: 0.8123 | Validation F1 Macro : 0.7283
Training F1 Micro: 0.8101 | Validation F1 Micro : 0.7200
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2620e-02 | Validation Loss : 2.4528e-02
Training CC : 0.9228 | Validation CC : 0.9145
** Classification Losses **
Training Loss : 3.5589e-01 | Validation Loss : 6.1278e-01
Training F1 Macro: 0.8124 | Validation F1 Macro : 0.7304
Training F1 Micro: 0.8185 | Validation F1 Micro : 0.7180
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2158e-02 | Validation Loss : 2.4399e-02
Training CC : 0.9240 | Validation CC : 0.9149
** Classification Losses **
Training Loss : 3.4046e-01 | Validation Loss : 5.6121e-01
Training F1 Macro: 0.8194 | Validation F1 Macro : 0.7588
Training F1 Micro: 0.8118 | Validation F1 Micro : 0.7520
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2294e-02 | Validation Loss : 2.4357e-02
Training CC : 0.9241 | Validation CC : 0.9152
** Classification Losses **
Training Loss : 3.4365e-01 | Validation Loss : 6.0007e-01
Training F1 Macro: 0.8249 | Validation F1 Macro : 0.7232
Training F1 Micro: 0.8256 | Validation F1 Micro : 0.7140
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1913e-02 | Validation Loss : 2.4262e-02
Training CC : 0.9250 | Validation CC : 0.9156
** Classification Losses **
Training Loss : 4.2758e-01 | Validation Loss : 5.9473e-01
Training F1 Macro: 0.7793 | Validation F1 Macro : 0.7298
Training F1 Micro: 0.7731 | Validation F1 Micro : 0.7240
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2038e-02 | Validation Loss : 2.4159e-02
Training CC : 0.9250 | Validation CC : 0.9158
** Classification Losses **
Training Loss : 3.8056e-01 | Validation Loss : 6.0928e-01
Training F1 Macro: 0.8244 | Validation F1 Macro : 0.7033
Training F1 Micro: 0.8305 | Validation F1 Micro : 0.6980
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1665e-02 | Validation Loss : 2.4208e-02
Training CC : 0.9258 | Validation CC : 0.9156
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1217e-01 | Validation Loss : 6.1803e-01
Training F1 Macro: 0.7769 | Validation F1 Macro : 0.7200
Training F1 Micro: 0.8043 | Validation F1 Micro : 0.7120
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1674e-02 | Validation Loss : 2.4318e-02
Training CC : 0.9258 | Validation CC : 0.9152
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0955e-01 | Validation Loss : 6.2549e-01
Training F1 Macro: 0.7754 | Validation F1 Macro : 0.7208
Training F1 Micro: 0.7800 | Validation F1 Micro : 0.7140
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1838e-02 | Validation Loss : 2.4458e-02
Training CC : 0.9253 | Validation CC : 0.9147
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1871e-01 | Validation Loss : 6.1404e-01
Training F1 Macro: 0.7935 | Validation F1 Macro : 0.7067
Training F1 Micro: 0.7878 | Validation F1 Micro : 0.6960
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2034e-02 | Validation Loss : 2.4568e-02
Training CC : 0.9247 | Validation CC : 0.9143
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1650e-01 | Validation Loss : 6.2771e-01
Training F1 Macro: 0.7738 | Validation F1 Macro : 0.7052
Training F1 Micro: 0.7667 | Validation F1 Micro : 0.6960
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2400e-02 | Validation Loss : 2.4626e-02
Training CC : 0.9238 | Validation CC : 0.9141
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4699e-01 | Validation Loss : 5.8772e-01
Training F1 Macro: 0.8017 | Validation F1 Macro : 0.7396
Training F1 Micro: 0.8314 | Validation F1 Micro : 0.7300
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1895e-02 | Validation Loss : 2.4285e-02
Training CC : 0.9250 | Validation CC : 0.9154
** Classification Losses **
Training Loss : 3.1862e-01 | Validation Loss : 5.9350e-01
Training F1 Macro: 0.8514 | Validation F1 Macro : 0.7384
Training F1 Micro: 0.8435 | Validation F1 Micro : 0.7280
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1625e-02 | Validation Loss : 2.4121e-02
Training CC : 0.9260 | Validation CC : 0.9159
** Classification Losses **
Training Loss : 3.3211e-01 | Validation Loss : 6.2358e-01
Training F1 Macro: 0.8522 | Validation F1 Macro : 0.7215
Training F1 Micro: 0.8468 | Validation F1 Micro : 0.7120
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1503e-02 | Validation Loss : 2.4101e-02
Training CC : 0.9264 | Validation CC : 0.9163
** Classification Losses **
Training Loss : 3.2790e-01 | Validation Loss : 6.0764e-01
Training F1 Macro: 0.8433 | Validation F1 Macro : 0.7105
Training F1 Micro: 0.8326 | Validation F1 Micro : 0.7040
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1421e-02 | Validation Loss : 2.4065e-02
Training CC : 0.9268 | Validation CC : 0.9164
** Classification Losses **
Training Loss : 3.5878e-01 | Validation Loss : 5.7596e-01
Training F1 Macro: 0.8290 | Validation F1 Macro : 0.7326
Training F1 Micro: 0.8281 | Validation F1 Micro : 0.7240
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1521e-02 | Validation Loss : 2.3897e-02
Training CC : 0.9268 | Validation CC : 0.9167
** Classification Losses **
Training Loss : 3.3499e-01 | Validation Loss : 6.4869e-01
Training F1 Macro: 0.8526 | Validation F1 Macro : 0.6928
Training F1 Micro: 0.8450 | Validation F1 Micro : 0.6860
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1169e-02 | Validation Loss : 2.3934e-02
Training CC : 0.9276 | Validation CC : 0.9166
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9431e-01 | Validation Loss : 6.2145e-01
Training F1 Macro: 0.7923 | Validation F1 Macro : 0.7151
Training F1 Micro: 0.7836 | Validation F1 Micro : 0.7120
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1215e-02 | Validation Loss : 2.4016e-02
Training CC : 0.9274 | Validation CC : 0.9163
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7687e-01 | Validation Loss : 5.9309e-01
Training F1 Macro: 0.7579 | Validation F1 Macro : 0.7461
Training F1 Micro: 0.7421 | Validation F1 Micro : 0.7380
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2028e-02 | Validation Loss : 2.4138e-02
Training CC : 0.9257 | Validation CC : 0.9158
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4389e-01 | Validation Loss : 5.7833e-01
Training F1 Macro: 0.8417 | Validation F1 Macro : 0.7449
Training F1 Micro: 0.8401 | Validation F1 Micro : 0.7380
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1379e-02 | Validation Loss : 2.4226e-02
Training CC : 0.9268 | Validation CC : 0.9155
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.7702e-01 | Validation Loss : 6.1182e-01
Training F1 Macro: 0.8772 | Validation F1 Macro : 0.7289
Training F1 Micro: 0.8734 | Validation F1 Micro : 0.7200
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1643e-02 | Validation Loss : 2.4278e-02
Training CC : 0.9262 | Validation CC : 0.9153
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5215e-01 | Validation Loss : 5.5028e-01
Training F1 Macro: 0.8484 | Validation F1 Macro : 0.7424
Training F1 Micro: 0.8364 | Validation F1 Micro : 0.7320
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1598e-02 | Validation Loss : 2.4084e-02
Training CC : 0.9264 | Validation CC : 0.9160
** Classification Losses **
Training Loss : 4.3905e-01 | Validation Loss : 5.9170e-01
Training F1 Macro: 0.7500 | Validation F1 Macro : 0.7423
Training F1 Micro: 0.7497 | Validation F1 Micro : 0.7400
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1264e-02 | Validation Loss : 2.4153e-02
Training CC : 0.9274 | Validation CC : 0.9162
** Classification Losses **
Training Loss : 3.5328e-01 | Validation Loss : 5.7164e-01
Training F1 Macro: 0.8212 | Validation F1 Macro : 0.7407
Training F1 Micro: 0.8136 | Validation F1 Micro : 0.7320
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1021e-02 | Validation Loss : 2.3913e-02
Training CC : 0.9280 | Validation CC : 0.9167
** Classification Losses **
Training Loss : 3.3505e-01 | Validation Loss : 6.3600e-01
Training F1 Macro: 0.8345 | Validation F1 Macro : 0.7107
Training F1 Micro: 0.8368 | Validation F1 Micro : 0.7020
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1556e-02 | Validation Loss : 2.3826e-02
Training CC : 0.9272 | Validation CC : 0.9170
** Classification Losses **
Training Loss : 3.4284e-01 | Validation Loss : 6.1982e-01
Training F1 Macro: 0.8072 | Validation F1 Macro : 0.7127
Training F1 Micro: 0.8047 | Validation F1 Micro : 0.7040
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1138e-02 | Validation Loss : 2.3817e-02
Training CC : 0.9281 | Validation CC : 0.9171
** Classification Losses **
Training Loss : 4.1254e-01 | Validation Loss : 5.7886e-01
Training F1 Macro: 0.7940 | Validation F1 Macro : 0.7119
Training F1 Micro: 0.7890 | Validation F1 Micro : 0.7040
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.0758e-02 | Validation Loss : 2.3861e-02
Training CC : 0.9289 | Validation CC : 0.9169
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0942e-01 | Validation Loss : 5.7803e-01
Training F1 Macro: 0.7862 | Validation F1 Macro : 0.7448
Training F1 Micro: 0.7899 | Validation F1 Micro : 0.7380
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.0850e-02 | Validation Loss : 2.3959e-02
Training CC : 0.9287 | Validation CC : 0.9166
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2793e-01 | Validation Loss : 6.5627e-01
Training F1 Macro: 0.8218 | Validation F1 Macro : 0.6967
Training F1 Micro: 0.8185 | Validation F1 Micro : 0.6860
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.0871e-02 | Validation Loss : 2.4088e-02
Training CC : 0.9284 | Validation CC : 0.9161
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0652e-01 | Validation Loss : 5.9408e-01
Training F1 Macro: 0.8209 | Validation F1 Macro : 0.7178
Training F1 Micro: 0.8221 | Validation F1 Micro : 0.7060
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1213e-02 | Validation Loss : 2.4185e-02
Training CC : 0.9275 | Validation CC : 0.9157
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1615e-01 | Validation Loss : 6.0631e-01
Training F1 Macro: 0.8442 | Validation F1 Macro : 0.6919
Training F1 Micro: 0.8385 | Validation F1 Micro : 0.6800
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1519e-02 | Validation Loss : 2.4184e-02
Training CC : 0.9268 | Validation CC : 0.9157
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5332e-01 | Validation Loss : 6.2564e-01
Training F1 Macro: 0.8416 | Validation F1 Macro : 0.7112
Training F1 Micro: 0.8461 | Validation F1 Micro : 0.7040
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1488e-02 | Validation Loss : 2.4364e-02
Training CC : 0.9272 | Validation CC : 0.9156
** Classification Losses **
Training Loss : 3.7204e-01 | Validation Loss : 6.0788e-01
Training F1 Macro: 0.8248 | Validation F1 Macro : 0.7279
Training F1 Micro: 0.8156 | Validation F1 Micro : 0.7200
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1373e-02 | Validation Loss : 2.4325e-02
Training CC : 0.9273 | Validation CC : 0.9152
** Classification Losses **
Training Loss : 4.1656e-01 | Validation Loss : 6.2808e-01
Training F1 Macro: 0.7841 | Validation F1 Macro : 0.7173
Training F1 Micro: 0.7940 | Validation F1 Micro : 0.7100
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1553e-02 | Validation Loss : 2.4139e-02
Training CC : 0.9268 | Validation CC : 0.9162
** Classification Losses **
Training Loss : 4.0786e-01 | Validation Loss : 6.3124e-01
Training F1 Macro: 0.8098 | Validation F1 Macro : 0.6979
Training F1 Micro: 0.8017 | Validation F1 Micro : 0.6900
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.0898e-02 | Validation Loss : 2.3759e-02
Training CC : 0.9285 | Validation CC : 0.9172
** Classification Losses **
Training Loss : 3.9985e-01 | Validation Loss : 6.4735e-01
Training F1 Macro: 0.7712 | Validation F1 Macro : 0.6929
Training F1 Micro: 0.7804 | Validation F1 Micro : 0.6860
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1712e-02 | Validation Loss : 2.3826e-02
Training CC : 0.9274 | Validation CC : 0.9171
** Classification Losses **
Training Loss : 3.7654e-01 | Validation Loss : 6.1132e-01
Training F1 Macro: 0.8151 | Validation F1 Macro : 0.7074
Training F1 Micro: 0.8039 | Validation F1 Micro : 0.6960
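The log output above (and the second training run that follows) is plain text, so learning curves are easiest to inspect by parsing it into records. Below is a minimal sketch of such a parser; `parse_log` and its regexes are hypothetical helpers written for this log format, not part of the pyMSDtorch API.

```python
import re

# Matches header lines such as:
#   Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
EPOCH_RE = re.compile(
    r"Epoch\s+(\d+), of (\d+)\s+>-\*-<\s+Mini Epoch (\d+) of (\d+)"
    r"\s+>-\*-<\s+Learning rate ([0-9.e+-]+)"
)
# Matches metric fragments such as:
#   Training Loss : 2.4106e-02    or    Validation F1 Macro : 0.7386
METRIC_RE = re.compile(
    r"(Training|Validation)\s+(Loss|CC|F1 Macro|F1 Micro)\s*:\s*([0-9.e+-]+)"
)

def parse_log(text):
    """Turn the printed training log into a list of per-mini-epoch dicts."""
    records, current, section = [], None, None
    for line in text.splitlines():
        m = EPOCH_RE.search(line)
        if m:
            current = {
                "epoch": int(m.group(1)),
                "mini_epoch": int(m.group(3)),
                "lr": float(m.group(5)),
            }
            records.append(current)
            continue
        # Track which loss block the following metric lines belong to.
        if "Autoencoding Losses" in line:
            section = "ae"
        elif "Classification Losses" in line:
            section = "clf"
        for split, name, value in METRIC_RE.findall(line):
            key = f"{section}_{split.lower()}_{name.lower().replace(' ', '_')}"
            current[key] = float(value)
    return records
```

The resulting dicts (keys like `ae_training_loss` or `clf_validation_f1_macro`) can be fed directly into pandas or matplotlib to plot the autoencoding and classification curves side by side.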
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.1966e-01 | Validation Loss : 3.3400e-01
Training CC : 0.1531 | Validation CC : 0.4131
** Classification Losses **
Training Loss : 1.4677e+00 | Validation Loss : 1.4916e+00
Training F1 Macro: 0.2473 | Validation F1 Macro : 0.2239
Training F1 Micro: 0.3191 | Validation F1 Micro : 0.2760
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9875e-01 | Validation Loss : 2.4657e-01
Training CC : 0.5190 | Validation CC : 0.6098
** Classification Losses **
Training Loss : 1.4494e+00 | Validation Loss : 1.4655e+00
Training F1 Macro: 0.1904 | Validation F1 Macro : 0.1885
Training F1 Micro: 0.2754 | Validation F1 Micro : 0.2540
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2123e-01 | Validation Loss : 1.8437e-01
Training CC : 0.6571 | Validation CC : 0.6929
** Classification Losses **
Training Loss : 1.4644e+00 | Validation Loss : 1.4951e+00
Training F1 Macro: 0.1646 | Validation F1 Macro : 0.1672
Training F1 Micro: 0.2523 | Validation F1 Micro : 0.2180
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.6514e-01 | Validation Loss : 1.3690e-01
Training CC : 0.7237 | Validation CC : 0.7449
** Classification Losses **
Training Loss : 1.4669e+00 | Validation Loss : 1.5221e+00
Training F1 Macro: 0.2036 | Validation F1 Macro : 0.1696
Training F1 Micro: 0.2851 | Validation F1 Micro : 0.2140
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.2185e-01 | Validation Loss : 1.0057e-01
Training CC : 0.7683 | Validation CC : 0.7815
** Classification Losses **
Training Loss : 1.4355e+00 | Validation Loss : 1.5233e+00
Training F1 Macro: 0.1851 | Validation F1 Macro : 0.1517
Training F1 Micro: 0.2881 | Validation F1 Micro : 0.2000
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0125e-01 | Validation Loss : 1.0240e-01
Training CC : 0.7865 | Validation CC : 0.7742
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3037e+00 | Validation Loss : 1.1748e+00
Training F1 Macro: 0.3090 | Validation F1 Macro : 0.4339
Training F1 Micro: 0.4014 | Validation F1 Micro : 0.4680
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0253e-01 | Validation Loss : 1.0282e-01
Training CC : 0.7815 | Validation CC : 0.7729
** Classification Losses ** <---- Now Optimizing
Training Loss : 9.6965e-01 | Validation Loss : 9.9231e-01
Training F1 Macro: 0.5907 | Validation F1 Macro : 0.5663
Training F1 Micro: 0.6424 | Validation F1 Micro : 0.5820
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0300e-01 | Validation Loss : 1.0354e-01
Training CC : 0.7801 | Validation CC : 0.7704
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.9070e-01 | Validation Loss : 8.9347e-01
Training F1 Macro: 0.7086 | Validation F1 Macro : 0.6152
Training F1 Micro: 0.7300 | Validation F1 Micro : 0.6200
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0438e-01 | Validation Loss : 1.0524e-01
Training CC : 0.7749 | Validation CC : 0.7639
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.2264e-01 | Validation Loss : 8.7302e-01
Training F1 Macro: 0.8294 | Validation F1 Macro : 0.6295
Training F1 Micro: 0.8273 | Validation F1 Micro : 0.6280
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 1.0647e-01 | Validation Loss : 1.0648e-01
Training CC : 0.7671 | Validation CC : 0.7590
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.1470e-01 | Validation Loss : 7.6671e-01
Training F1 Macro: 0.7688 | Validation F1 Macro : 0.6814
Training F1 Micro: 0.7687 | Validation F1 Micro : 0.6820
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 9.2800e-02 | Validation Loss : 7.7132e-02
Training CC : 0.7869 | Validation CC : 0.7985
** Classification Losses **
Training Loss : 5.5367e-01 | Validation Loss : 7.6040e-01
Training F1 Macro: 0.7859 | Validation F1 Macro : 0.6971
Training F1 Micro: 0.7906 | Validation F1 Micro : 0.6960
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.8551e-02 | Validation Loss : 5.8326e-02
Training CC : 0.8159 | Validation CC : 0.8226
** Classification Losses **
Training Loss : 5.8254e-01 | Validation Loss : 7.7131e-01
Training F1 Macro: 0.8247 | Validation F1 Macro : 0.6905
Training F1 Micro: 0.8246 | Validation F1 Micro : 0.6900
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.2868e-02 | Validation Loss : 4.7334e-02
Training CC : 0.8352 | Validation CC : 0.8393
** Classification Losses **
Training Loss : 6.2814e-01 | Validation Loss : 7.8214e-01
Training F1 Macro: 0.8029 | Validation F1 Macro : 0.6900
Training F1 Micro: 0.7883 | Validation F1 Micro : 0.6900
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.4494e-02 | Validation Loss : 4.1981e-02
Training CC : 0.8491 | Validation CC : 0.8502
** Classification Losses **
Training Loss : 6.0670e-01 | Validation Loss : 7.7619e-01
Training F1 Macro: 0.7696 | Validation F1 Macro : 0.6849
Training F1 Micro: 0.7769 | Validation F1 Micro : 0.6860
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9982e-02 | Validation Loss : 3.9547e-02
Training CC : 0.8591 | Validation CC : 0.8587
** Classification Losses **
Training Loss : 6.5061e-01 | Validation Loss : 8.0304e-01
Training F1 Macro: 0.7405 | Validation F1 Macro : 0.6817
Training F1 Micro: 0.7463 | Validation F1 Micro : 0.6800
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.8977e-02 | Validation Loss : 3.9863e-02
Training CC : 0.8630 | Validation CC : 0.8572
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.1244e-01 | Validation Loss : 7.0705e-01
Training F1 Macro: 0.7943 | Validation F1 Macro : 0.7225
Training F1 Micro: 0.7976 | Validation F1 Micro : 0.7220
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9352e-02 | Validation Loss : 4.1354e-02
Training CC : 0.8604 | Validation CC : 0.8511
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.6997e-01 | Validation Loss : 7.1056e-01
Training F1 Macro: 0.7962 | Validation F1 Macro : 0.7211
Training F1 Micro: 0.8052 | Validation F1 Micro : 0.7160
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.1660e-02 | Validation Loss : 4.3761e-02
Training CC : 0.8521 | Validation CC : 0.8416
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0541e-01 | Validation Loss : 6.5668e-01
Training F1 Macro: 0.8826 | Validation F1 Macro : 0.7387
Training F1 Micro: 0.8740 | Validation F1 Micro : 0.7340
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.4200e-02 | Validation Loss : 4.5871e-02
Training CC : 0.8422 | Validation CC : 0.8333
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2872e-01 | Validation Loss : 6.1920e-01
Training F1 Macro: 0.8164 | Validation F1 Macro : 0.7569
Training F1 Micro: 0.8081 | Validation F1 Micro : 0.7500
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 4.5712e-02 | Validation Loss : 4.6816e-02
Training CC : 0.8359 | Validation CC : 0.8296
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4588e-01 | Validation Loss : 6.8721e-01
Training F1 Macro: 0.7957 | Validation F1 Macro : 0.7136
Training F1 Micro: 0.7931 | Validation F1 Micro : 0.7040
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.1693e-02 | Validation Loss : 4.1203e-02
Training CC : 0.8515 | Validation CC : 0.8527
** Classification Losses **
Training Loss : 4.8592e-01 | Validation Loss : 5.9288e-01
Training F1 Macro: 0.7503 | Validation F1 Macro : 0.7648
Training F1 Micro: 0.7485 | Validation F1 Micro : 0.7560
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.8860e-02 | Validation Loss : 3.7875e-02
Training CC : 0.8637 | Validation CC : 0.8664
** Classification Losses **
Training Loss : 4.9829e-01 | Validation Loss : 6.3799e-01
Training F1 Macro: 0.7665 | Validation F1 Macro : 0.7243
Training F1 Micro: 0.7710 | Validation F1 Micro : 0.7220
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6797e-02 | Validation Loss : 3.6271e-02
Training CC : 0.8717 | Validation CC : 0.8715
** Classification Losses **
Training Loss : 4.2802e-01 | Validation Loss : 6.2996e-01
Training F1 Macro: 0.7830 | Validation F1 Macro : 0.7377
Training F1 Micro: 0.7985 | Validation F1 Micro : 0.7280
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4944e-02 | Validation Loss : 3.5286e-02
Training CC : 0.8777 | Validation CC : 0.8749
** Classification Losses **
Training Loss : 4.5550e-01 | Validation Loss : 6.0050e-01
Training F1 Macro: 0.8019 | Validation F1 Macro : 0.7565
Training F1 Micro: 0.8039 | Validation F1 Micro : 0.7520
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3727e-02 | Validation Loss : 3.4102e-02
Training CC : 0.8820 | Validation CC : 0.8788
** Classification Losses **
Training Loss : 4.4937e-01 | Validation Loss : 5.9803e-01
Training F1 Macro: 0.8071 | Validation F1 Macro : 0.7599
Training F1 Micro: 0.8017 | Validation F1 Micro : 0.7520
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3059e-02 | Validation Loss : 3.4228e-02
Training CC : 0.8839 | Validation CC : 0.8783
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1938e-01 | Validation Loss : 5.7900e-01
Training F1 Macro: 0.8627 | Validation F1 Macro : 0.7513
Training F1 Micro: 0.8554 | Validation F1 Micro : 0.7480
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3548e-02 | Validation Loss : 3.4471e-02
Training CC : 0.8829 | Validation CC : 0.8774
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7797e-01 | Validation Loss : 5.6856e-01
Training F1 Macro: 0.8333 | Validation F1 Macro : 0.7661
Training F1 Micro: 0.8251 | Validation F1 Micro : 0.7600
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3513e-02 | Validation Loss : 3.4819e-02
Training CC : 0.8824 | Validation CC : 0.8760
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3139e-01 | Validation Loss : 5.8371e-01
Training F1 Macro: 0.8514 | Validation F1 Macro : 0.7544
Training F1 Micro: 0.8479 | Validation F1 Micro : 0.7480
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3923e-02 | Validation Loss : 3.5228e-02
Training CC : 0.8809 | Validation CC : 0.8744
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0728e-01 | Validation Loss : 5.6259e-01
Training F1 Macro: 0.8096 | Validation F1 Macro : 0.7503
Training F1 Micro: 0.8002 | Validation F1 Micro : 0.7460
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.4509e-02 | Validation Loss : 3.5638e-02
Training CC : 0.8790 | Validation CC : 0.8729
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4091e-01 | Validation Loss : 5.8933e-01
Training F1 Macro: 0.8282 | Validation F1 Macro : 0.7550
Training F1 Micro: 0.8143 | Validation F1 Micro : 0.7480
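The autoencoding blocks report a CC metric alongside the loss. Assuming CC here denotes the Pearson correlation coefficient between the network's reconstruction and the target image (flattened to 1-D), it can be sketched in plain Python as:

```python
import math

def pearson_cc(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# perfectly linearly related sequences give +/-1:
# pearson_cc([1, 2, 3], [2, 4, 6]) -> 1.0
# pearson_cc([1, 2, 3], [3, 2, 1]) -> -1.0
```

Unlike the mean-squared loss, CC is insensitive to global offsets and scaling, so it rises toward 1.0 as reconstructions become structurally faithful even before intensities match exactly.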
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.3528e-02 | Validation Loss : 3.3859e-02
Training CC : 0.8822 | Validation CC : 0.8798
** Classification Losses **
Training Loss : 4.3709e-01 | Validation Loss : 5.9348e-01
Training F1 Macro: 0.7979 | Validation F1 Macro : 0.7477
Training F1 Micro: 0.7949 | Validation F1 Micro : 0.7420
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2837e-02 | Validation Loss : 3.2824e-02
Training CC : 0.8855 | Validation CC : 0.8836
** Classification Losses **
Training Loss : 3.3858e-01 | Validation Loss : 5.8207e-01
Training F1 Macro: 0.8465 | Validation F1 Macro : 0.7405
Training F1 Micro: 0.8441 | Validation F1 Micro : 0.7300
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1815e-02 | Validation Loss : 3.2315e-02
Training CC : 0.8893 | Validation CC : 0.8855
** Classification Losses **
Training Loss : 3.7630e-01 | Validation Loss : 5.8890e-01
Training F1 Macro: 0.7981 | Validation F1 Macro : 0.7355
Training F1 Micro: 0.7964 | Validation F1 Micro : 0.7300
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1164e-02 | Validation Loss : 3.1804e-02
Training CC : 0.8916 | Validation CC : 0.8875
** Classification Losses **
Training Loss : 4.0988e-01 | Validation Loss : 5.4395e-01
Training F1 Macro: 0.8052 | Validation F1 Macro : 0.7663
Training F1 Micro: 0.7978 | Validation F1 Micro : 0.7600
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0264e-02 | Validation Loss : 3.1277e-02
Training CC : 0.8943 | Validation CC : 0.8896
** Classification Losses **
Training Loss : 3.8597e-01 | Validation Loss : 5.9472e-01
Training F1 Macro: 0.8061 | Validation F1 Macro : 0.7390
Training F1 Micro: 0.8024 | Validation F1 Micro : 0.7300
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9974e-02 | Validation Loss : 3.1399e-02
Training CC : 0.8954 | Validation CC : 0.8891
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2471e-01 | Validation Loss : 6.0521e-01
Training F1 Macro: 0.7774 | Validation F1 Macro : 0.7343
Training F1 Micro: 0.7679 | Validation F1 Micro : 0.7300
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0303e-02 | Validation Loss : 3.1582e-02
Training CC : 0.8945 | Validation CC : 0.8883
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.7168e-01 | Validation Loss : 6.0673e-01
Training F1 Macro: 0.8717 | Validation F1 Macro : 0.7140
Training F1 Micro: 0.8633 | Validation F1 Micro : 0.7120
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0363e-02 | Validation Loss : 3.1816e-02
Training CC : 0.8941 | Validation CC : 0.8874
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8976e-01 | Validation Loss : 6.1307e-01
Training F1 Macro: 0.7989 | Validation F1 Macro : 0.7203
Training F1 Micro: 0.8243 | Validation F1 Micro : 0.7160
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0778e-02 | Validation Loss : 3.2040e-02
Training CC : 0.8929 | Validation CC : 0.8866
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3394e-01 | Validation Loss : 5.9589e-01
Training F1 Macro: 0.8316 | Validation F1 Macro : 0.7387
Training F1 Micro: 0.8443 | Validation F1 Micro : 0.7360
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.0971e-02 | Validation Loss : 3.2235e-02
Training CC : 0.8922 | Validation CC : 0.8858
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6850e-01 | Validation Loss : 5.9549e-01
Training F1 Macro: 0.7984 | Validation F1 Macro : 0.7347
Training F1 Micro: 0.8006 | Validation F1 Micro : 0.7360
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0231e-02 | Validation Loss : 3.1197e-02
Training CC : 0.8944 | Validation CC : 0.8898
** Classification Losses **
Training Loss : 4.5074e-01 | Validation Loss : 5.9799e-01
Training F1 Macro: 0.7361 | Validation F1 Macro : 0.7393
Training F1 Micro: 0.7531 | Validation F1 Micro : 0.7380
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0116e-02 | Validation Loss : 3.0601e-02
Training CC : 0.8957 | Validation CC : 0.8920
** Classification Losses **
Training Loss : 3.6174e-01 | Validation Loss : 5.8717e-01
Training F1 Macro: 0.8275 | Validation F1 Macro : 0.7415
Training F1 Micro: 0.8266 | Validation F1 Micro : 0.7400
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9926e-02 | Validation Loss : 3.0156e-02
Training CC : 0.8975 | Validation CC : 0.8936
** Classification Losses **
Training Loss : 4.0630e-01 | Validation Loss : 5.7962e-01
Training F1 Macro: 0.7893 | Validation F1 Macro : 0.7527
Training F1 Micro: 0.7858 | Validation F1 Micro : 0.7500
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9127e-02 | Validation Loss : 2.9877e-02
Training CC : 0.8997 | Validation CC : 0.8951
** Classification Losses **
Training Loss : 3.9059e-01 | Validation Loss : 6.3449e-01
Training F1 Macro: 0.7761 | Validation F1 Macro : 0.7176
Training F1 Micro: 0.7862 | Validation F1 Micro : 0.7180
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8259e-02 | Validation Loss : 2.9488e-02
Training CC : 0.9019 | Validation CC : 0.8962
** Classification Losses **
Training Loss : 3.9140e-01 | Validation Loss : 5.9225e-01
Training F1 Macro: 0.7673 | Validation F1 Macro : 0.7432
Training F1 Micro: 0.7632 | Validation F1 Micro : 0.7420
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8470e-02 | Validation Loss : 2.9550e-02
Training CC : 0.9019 | Validation CC : 0.8960
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5670e-01 | Validation Loss : 5.9887e-01
Training F1 Macro: 0.8120 | Validation F1 Macro : 0.7190
Training F1 Micro: 0.8053 | Validation F1 Micro : 0.7240
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8272e-02 | Validation Loss : 2.9638e-02
Training CC : 0.9022 | Validation CC : 0.8956
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5363e-01 | Validation Loss : 6.2453e-01
Training F1 Macro: 0.7765 | Validation F1 Macro : 0.7067
Training F1 Micro: 0.7740 | Validation F1 Micro : 0.7040
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8054e-02 | Validation Loss : 2.9749e-02
Training CC : 0.9024 | Validation CC : 0.8952
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0716e-01 | Validation Loss : 6.4434e-01
Training F1 Macro: 0.7956 | Validation F1 Macro : 0.6949
Training F1 Micro: 0.7946 | Validation F1 Micro : 0.6900
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8593e-02 | Validation Loss : 2.9862e-02
Training CC : 0.9013 | Validation CC : 0.8948
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6416e-01 | Validation Loss : 6.2431e-01
Training F1 Macro: 0.8326 | Validation F1 Macro : 0.7044
Training F1 Micro: 0.8299 | Validation F1 Micro : 0.6980
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8283e-02 | Validation Loss : 2.9955e-02
Training CC : 0.9016 | Validation CC : 0.8944
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4306e-01 | Validation Loss : 5.8922e-01
Training F1 Macro: 0.8185 | Validation F1 Macro : 0.7384
Training F1 Micro: 0.8117 | Validation F1 Micro : 0.7340
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8205e-02 | Validation Loss : 2.9421e-02
Training CC : 0.9023 | Validation CC : 0.8964
** Classification Losses **
Training Loss : 3.8005e-01 | Validation Loss : 6.1606e-01
Training F1 Macro: 0.8291 | Validation F1 Macro : 0.7209
Training F1 Micro: 0.8264 | Validation F1 Micro : 0.7100
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7903e-02 | Validation Loss : 2.9067e-02
Training CC : 0.9035 | Validation CC : 0.8978
** Classification Losses **
Training Loss : 3.5551e-01 | Validation Loss : 5.8321e-01
Training F1 Macro: 0.8290 | Validation F1 Macro : 0.7462
Training F1 Micro: 0.8341 | Validation F1 Micro : 0.7400
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7401e-02 | Validation Loss : 2.8790e-02
Training CC : 0.9052 | Validation CC : 0.8988
** Classification Losses **
Training Loss : 3.0228e-01 | Validation Loss : 6.3245e-01
Training F1 Macro: 0.8890 | Validation F1 Macro : 0.7096
Training F1 Micro: 0.8819 | Validation F1 Micro : 0.7020
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7330e-02 | Validation Loss : 2.8522e-02
Training CC : 0.9059 | Validation CC : 0.8999
** Classification Losses **
Training Loss : 4.3759e-01 | Validation Loss : 6.0969e-01
Training F1 Macro: 0.7881 | Validation F1 Macro : 0.7175
Training F1 Micro: 0.7751 | Validation F1 Micro : 0.7100
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6648e-02 | Validation Loss : 2.8287e-02
Training CC : 0.9076 | Validation CC : 0.9006
** Classification Losses **
Training Loss : 3.7620e-01 | Validation Loss : 6.3021e-01
Training F1 Macro: 0.8310 | Validation F1 Macro : 0.7216
Training F1 Micro: 0.8253 | Validation F1 Micro : 0.7160
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6989e-02 | Validation Loss : 2.8326e-02
Training CC : 0.9074 | Validation CC : 0.9005
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4106e-01 | Validation Loss : 6.3941e-01
Training F1 Macro: 0.8394 | Validation F1 Macro : 0.7235
Training F1 Micro: 0.8352 | Validation F1 Micro : 0.7220
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6823e-02 | Validation Loss : 2.8384e-02
Training CC : 0.9076 | Validation CC : 0.9003
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7943e-01 | Validation Loss : 6.2775e-01
Training F1 Macro: 0.8144 | Validation F1 Macro : 0.7180
Training F1 Micro: 0.8138 | Validation F1 Micro : 0.7060
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6982e-02 | Validation Loss : 2.8461e-02
Training CC : 0.9072 | Validation CC : 0.9000
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9003e-01 | Validation Loss : 6.3942e-01
Training F1 Macro: 0.8137 | Validation F1 Macro : 0.7242
Training F1 Micro: 0.7997 | Validation F1 Micro : 0.7160
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6873e-02 | Validation Loss : 2.8539e-02
Training CC : 0.9072 | Validation CC : 0.8997
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9484e-01 | Validation Loss : 6.2776e-01
Training F1 Macro: 0.7945 | Validation F1 Macro : 0.7071
Training F1 Micro: 0.7891 | Validation F1 Micro : 0.7020
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6845e-02 | Validation Loss : 2.8637e-02
Training CC : 0.9072 | Validation CC : 0.8993
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2824e-01 | Validation Loss : 6.1745e-01
Training F1 Macro: 0.8418 | Validation F1 Macro : 0.7063
Training F1 Micro: 0.8394 | Validation F1 Micro : 0.7040
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7263e-02 | Validation Loss : 2.8294e-02
Training CC : 0.9065 | Validation CC : 0.9006
** Classification Losses **
Training Loss : 3.3149e-01 | Validation Loss : 6.6390e-01
Training F1 Macro: 0.8517 | Validation F1 Macro : 0.6896
Training F1 Micro: 0.8488 | Validation F1 Micro : 0.6820
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7205e-02 | Validation Loss : 2.8016e-02
Training CC : 0.9074 | Validation CC : 0.9017
** Classification Losses **
Training Loss : 3.5219e-01 | Validation Loss : 6.1876e-01
Training F1 Macro: 0.8443 | Validation F1 Macro : 0.7360
Training F1 Micro: 0.8428 | Validation F1 Micro : 0.7320
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6194e-02 | Validation Loss : 2.7783e-02
Training CC : 0.9096 | Validation CC : 0.9025
** Classification Losses **
Training Loss : 2.9915e-01 | Validation Loss : 6.3457e-01
Training F1 Macro: 0.8393 | Validation F1 Macro : 0.6925
Training F1 Micro: 0.8467 | Validation F1 Micro : 0.6900
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5859e-02 | Validation Loss : 2.7638e-02
Training CC : 0.9106 | Validation CC : 0.9032
** Classification Losses **
Training Loss : 4.3482e-01 | Validation Loss : 6.3431e-01
Training F1 Macro: 0.7957 | Validation F1 Macro : 0.7076
Training F1 Micro: 0.7858 | Validation F1 Micro : 0.7040
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5720e-02 | Validation Loss : 2.7438e-02
Training CC : 0.9113 | Validation CC : 0.9039
** Classification Losses **
Training Loss : 4.0922e-01 | Validation Loss : 6.2595e-01
Training F1 Macro: 0.8132 | Validation F1 Macro : 0.7159
Training F1 Micro: 0.8018 | Validation F1 Micro : 0.7140
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5787e-02 | Validation Loss : 2.7478e-02
Training CC : 0.9115 | Validation CC : 0.9037
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4498e-01 | Validation Loss : 6.5355e-01
Training F1 Macro: 0.8449 | Validation F1 Macro : 0.7012
Training F1 Micro: 0.8457 | Validation F1 Micro : 0.6960
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5521e-02 | Validation Loss : 2.7549e-02
Training CC : 0.9119 | Validation CC : 0.9035
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4660e-01 | Validation Loss : 5.8067e-01
Training F1 Macro: 0.7662 | Validation F1 Macro : 0.7434
Training F1 Micro: 0.7701 | Validation F1 Micro : 0.7380
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5567e-02 | Validation Loss : 2.7644e-02
Training CC : 0.9117 | Validation CC : 0.9031
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5669e-01 | Validation Loss : 6.4318e-01
Training F1 Macro: 0.8547 | Validation F1 Macro : 0.7003
Training F1 Micro: 0.8472 | Validation F1 Micro : 0.6940
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5824e-02 | Validation Loss : 2.7731e-02
Training CC : 0.9111 | Validation CC : 0.9028
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6919e-01 | Validation Loss : 6.2489e-01
Training F1 Macro: 0.8445 | Validation F1 Macro : 0.7151
Training F1 Micro: 0.8374 | Validation F1 Micro : 0.7120
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6135e-02 | Validation Loss : 2.7800e-02
Training CC : 0.9104 | Validation CC : 0.9025
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2438e-01 | Validation Loss : 6.1637e-01
Training F1 Macro: 0.7690 | Validation F1 Macro : 0.7240
Training F1 Micro: 0.7697 | Validation F1 Micro : 0.7140
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5651e-02 | Validation Loss : 2.7476e-02
Training CC : 0.9114 | Validation CC : 0.9036
** Classification Losses **
Training Loss : 3.5511e-01 | Validation Loss : 6.2746e-01
Training F1 Macro: 0.8102 | Validation F1 Macro : 0.7045
Training F1 Micro: 0.8038 | Validation F1 Micro : 0.6960
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5620e-02 | Validation Loss : 2.7332e-02
Training CC : 0.9120 | Validation CC : 0.9043
** Classification Losses **
Training Loss : 3.8295e-01 | Validation Loss : 6.2926e-01
Training F1 Macro: 0.8098 | Validation F1 Macro : 0.7325
Training F1 Micro: 0.7992 | Validation F1 Micro : 0.7280
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5347e-02 | Validation Loss : 2.7123e-02
Training CC : 0.9131 | Validation CC : 0.9051
** Classification Losses **
Training Loss : 4.5440e-01 | Validation Loss : 6.5418e-01
Training F1 Macro: 0.7858 | Validation F1 Macro : 0.6985
Training F1 Micro: 0.7799 | Validation F1 Micro : 0.6940
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5499e-02 | Validation Loss : 2.6949e-02
Training CC : 0.9131 | Validation CC : 0.9056
** Classification Losses **
Training Loss : 3.7048e-01 | Validation Loss : 6.0820e-01
Training F1 Macro: 0.8352 | Validation F1 Macro : 0.7364
Training F1 Micro: 0.8297 | Validation F1 Micro : 0.7280
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5212e-02 | Validation Loss : 2.6911e-02
Training CC : 0.9138 | Validation CC : 0.9060
** Classification Losses **
Training Loss : 3.2729e-01 | Validation Loss : 6.2208e-01
Training F1 Macro: 0.8242 | Validation F1 Macro : 0.7187
Training F1 Micro: 0.8188 | Validation F1 Micro : 0.7140
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4688e-02 | Validation Loss : 2.6935e-02
Training CC : 0.9150 | Validation CC : 0.9059
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8846e-01 | Validation Loss : 6.4035e-01
Training F1 Macro: 0.8001 | Validation F1 Macro : 0.6846
Training F1 Micro: 0.7902 | Validation F1 Micro : 0.6760
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4769e-02 | Validation Loss : 2.6973e-02
Training CC : 0.9148 | Validation CC : 0.9057
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0254e-01 | Validation Loss : 6.1725e-01
Training F1 Macro: 0.7960 | Validation F1 Macro : 0.7118
Training F1 Micro: 0.7924 | Validation F1 Micro : 0.7100
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4785e-02 | Validation Loss : 2.7030e-02
Training CC : 0.9147 | Validation CC : 0.9055
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9054e-01 | Validation Loss : 6.8138e-01
Training F1 Macro: 0.7929 | Validation F1 Macro : 0.6845
Training F1 Micro: 0.7958 | Validation F1 Micro : 0.6800
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5048e-02 | Validation Loss : 2.7125e-02
Training CC : 0.9141 | Validation CC : 0.9051
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7134e-01 | Validation Loss : 6.3572e-01
Training F1 Macro: 0.8011 | Validation F1 Macro : 0.7055
Training F1 Micro: 0.8061 | Validation F1 Micro : 0.7000
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5413e-02 | Validation Loss : 2.7214e-02
Training CC : 0.9134 | Validation CC : 0.9048
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0935e-01 | Validation Loss : 6.1840e-01
Training F1 Macro: 0.8412 | Validation F1 Macro : 0.7211
Training F1 Micro: 0.8323 | Validation F1 Micro : 0.7160
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5143e-02 | Validation Loss : 2.6903e-02
Training CC : 0.9138 | Validation CC : 0.9058
** Classification Losses **
Training Loss : 3.6906e-01 | Validation Loss : 6.3774e-01
Training F1 Macro: 0.8297 | Validation F1 Macro : 0.7025
Training F1 Micro: 0.8272 | Validation F1 Micro : 0.6960
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4637e-02 | Validation Loss : 2.6772e-02
Training CC : 0.9151 | Validation CC : 0.9064
** Classification Losses **
Training Loss : 4.5661e-01 | Validation Loss : 6.1848e-01
Training F1 Macro: 0.7411 | Validation F1 Macro : 0.7331
Training F1 Micro: 0.7481 | Validation F1 Micro : 0.7300
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4654e-02 | Validation Loss : 2.6623e-02
Training CC : 0.9157 | Validation CC : 0.9069
** Classification Losses **
Training Loss : 3.8305e-01 | Validation Loss : 6.5365e-01
Training F1 Macro: 0.8198 | Validation F1 Macro : 0.7063
Training F1 Micro: 0.8092 | Validation F1 Micro : 0.7040
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4459e-02 | Validation Loss : 2.6482e-02
Training CC : 0.9163 | Validation CC : 0.9073
** Classification Losses **
Training Loss : 3.1839e-01 | Validation Loss : 6.3016e-01
Training F1 Macro: 0.8617 | Validation F1 Macro : 0.7199
Training F1 Micro: 0.8654 | Validation F1 Micro : 0.7180
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4494e-02 | Validation Loss : 2.6420e-02
Training CC : 0.9166 | Validation CC : 0.9078
** Classification Losses **
Training Loss : 3.9281e-01 | Validation Loss : 6.8096e-01
Training F1 Macro: 0.7779 | Validation F1 Macro : 0.6848
Training F1 Micro: 0.7850 | Validation F1 Micro : 0.6820
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4117e-02 | Validation Loss : 2.6444e-02
Training CC : 0.9174 | Validation CC : 0.9077
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8554e-01 | Validation Loss : 6.8953e-01
Training F1 Macro: 0.8089 | Validation F1 Macro : 0.6863
Training F1 Micro: 0.8125 | Validation F1 Micro : 0.6820
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4115e-02 | Validation Loss : 2.6478e-02
Training CC : 0.9173 | Validation CC : 0.9075
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9526e-01 | Validation Loss : 6.6377e-01
Training F1 Macro: 0.8004 | Validation F1 Macro : 0.6966
Training F1 Micro: 0.7919 | Validation F1 Micro : 0.6960
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3872e-02 | Validation Loss : 2.6542e-02
Training CC : 0.9177 | Validation CC : 0.9073
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4593e-01 | Validation Loss : 6.6965e-01
Training F1 Macro: 0.7727 | Validation F1 Macro : 0.7162
Training F1 Micro: 0.7646 | Validation F1 Micro : 0.7120
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4151e-02 | Validation Loss : 2.6612e-02
Training CC : 0.9170 | Validation CC : 0.9070
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1362e-01 | Validation Loss : 6.8941e-01
Training F1 Macro: 0.8439 | Validation F1 Macro : 0.6986
Training F1 Micro: 0.8330 | Validation F1 Micro : 0.6940
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4318e-02 | Validation Loss : 2.6695e-02
Training CC : 0.9166 | Validation CC : 0.9067
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1676e-01 | Validation Loss : 7.0568e-01
Training F1 Macro: 0.7846 | Validation F1 Macro : 0.6858
Training F1 Micro: 0.7782 | Validation F1 Micro : 0.6780
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4270e-02 | Validation Loss : 2.6459e-02
Training CC : 0.9167 | Validation CC : 0.9075
** Classification Losses **
Training Loss : 3.6877e-01 | Validation Loss : 6.7540e-01
Training F1 Macro: 0.8101 | Validation F1 Macro : 0.6919
Training F1 Micro: 0.8109 | Validation F1 Micro : 0.6900
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4159e-02 | Validation Loss : 2.6413e-02
Training CC : 0.9172 | Validation CC : 0.9079
** Classification Losses **
Training Loss : 4.0872e-01 | Validation Loss : 6.7681e-01
Training F1 Macro: 0.8245 | Validation F1 Macro : 0.7004
Training F1 Micro: 0.8116 | Validation F1 Micro : 0.6940
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3693e-02 | Validation Loss : 2.6243e-02
Training CC : 0.9183 | Validation CC : 0.9082
** Classification Losses **
Training Loss : 4.5130e-01 | Validation Loss : 6.5359e-01
Training F1 Macro: 0.7728 | Validation F1 Macro : 0.7268
Training F1 Micro: 0.7685 | Validation F1 Micro : 0.7240
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3489e-02 | Validation Loss : 2.6149e-02
Training CC : 0.9191 | Validation CC : 0.9087
** Classification Losses **
Training Loss : 4.2453e-01 | Validation Loss : 6.6739e-01
Training F1 Macro: 0.8146 | Validation F1 Macro : 0.6883
Training F1 Micro: 0.7978 | Validation F1 Micro : 0.6900
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3318e-02 | Validation Loss : 2.6037e-02
Training CC : 0.9198 | Validation CC : 0.9090
** Classification Losses **
Training Loss : 4.3074e-01 | Validation Loss : 6.7187e-01
Training F1 Macro: 0.8024 | Validation F1 Macro : 0.6842
Training F1 Micro: 0.7992 | Validation F1 Micro : 0.6820
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3353e-02 | Validation Loss : 2.6074e-02
Training CC : 0.9198 | Validation CC : 0.9089
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4899e-01 | Validation Loss : 6.9185e-01
Training F1 Macro: 0.7533 | Validation F1 Macro : 0.6940
Training F1 Micro: 0.7495 | Validation F1 Micro : 0.6900
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3319e-02 | Validation Loss : 2.6111e-02
Training CC : 0.9199 | Validation CC : 0.9088
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5328e-01 | Validation Loss : 6.7850e-01
Training F1 Macro: 0.8370 | Validation F1 Macro : 0.6871
Training F1 Micro: 0.8313 | Validation F1 Micro : 0.6820
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3326e-02 | Validation Loss : 2.6159e-02
Training CC : 0.9197 | Validation CC : 0.9086
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7618e-01 | Validation Loss : 6.5901e-01
Training F1 Macro: 0.8050 | Validation F1 Macro : 0.6860
Training F1 Micro: 0.8088 | Validation F1 Micro : 0.6820
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3525e-02 | Validation Loss : 2.6200e-02
Training CC : 0.9193 | Validation CC : 0.9084
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2128e-01 | Validation Loss : 6.9099e-01
Training F1 Macro: 0.8289 | Validation F1 Macro : 0.6671
Training F1 Micro: 0.8447 | Validation F1 Micro : 0.6660
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3573e-02 | Validation Loss : 2.6253e-02
Training CC : 0.9191 | Validation CC : 0.9082
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9227e-01 | Validation Loss : 6.9198e-01
Training F1 Macro: 0.7880 | Validation F1 Macro : 0.6587
Training F1 Micro: 0.7907 | Validation F1 Micro : 0.6600
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3457e-02 | Validation Loss : 2.6091e-02
Training CC : 0.9194 | Validation CC : 0.9088
** Classification Losses **
Training Loss : 4.2614e-01 | Validation Loss : 7.1844e-01
Training F1 Macro: 0.7554 | Validation F1 Macro : 0.6716
Training F1 Micro: 0.7602 | Validation F1 Micro : 0.6760
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3597e-02 | Validation Loss : 2.6100e-02
Training CC : 0.9195 | Validation CC : 0.9092
** Classification Losses **
Training Loss : 4.1952e-01 | Validation Loss : 7.1194e-01
Training F1 Macro: 0.7713 | Validation F1 Macro : 0.6759
Training F1 Micro: 0.7826 | Validation F1 Micro : 0.6780
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3265e-02 | Validation Loss : 2.5918e-02
Training CC : 0.9203 | Validation CC : 0.9094
** Classification Losses **
Training Loss : 4.2699e-01 | Validation Loss : 6.7836e-01
Training F1 Macro: 0.7921 | Validation F1 Macro : 0.6768
Training F1 Micro: 0.7958 | Validation F1 Micro : 0.6820
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3069e-02 | Validation Loss : 2.5865e-02
Training CC : 0.9209 | Validation CC : 0.9098
** Classification Losses **
Training Loss : 3.5707e-01 | Validation Loss : 7.4228e-01
Training F1 Macro: 0.8125 | Validation F1 Macro : 0.6507
Training F1 Micro: 0.8143 | Validation F1 Micro : 0.6520
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2788e-02 | Validation Loss : 2.5762e-02
Training CC : 0.9217 | Validation CC : 0.9101
** Classification Losses **
Training Loss : 2.9863e-01 | Validation Loss : 7.0813e-01
Training F1 Macro: 0.8345 | Validation F1 Macro : 0.6806
Training F1 Micro: 0.8376 | Validation F1 Micro : 0.6800
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2662e-02 | Validation Loss : 2.5784e-02
Training CC : 0.9221 | Validation CC : 0.9100
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8835e-01 | Validation Loss : 7.0407e-01
Training F1 Macro: 0.8007 | Validation F1 Macro : 0.6819
Training F1 Micro: 0.8048 | Validation F1 Micro : 0.6840
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2720e-02 | Validation Loss : 2.5825e-02
Training CC : 0.9219 | Validation CC : 0.9098
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8715e-01 | Validation Loss : 7.0007e-01
Training F1 Macro: 0.8141 | Validation F1 Macro : 0.6861
Training F1 Micro: 0.8140 | Validation F1 Micro : 0.6860
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3416e-02 | Validation Loss : 2.5861e-02
Training CC : 0.9207 | Validation CC : 0.9097
** Classification Losses ** <---- Now Optimizing
Training Loss : 2.6295e-01 | Validation Loss : 6.6893e-01
Training F1 Macro: 0.8385 | Validation F1 Macro : 0.7228
Training F1 Micro: 0.8489 | Validation F1 Micro : 0.7260
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2955e-02 | Validation Loss : 2.5900e-02
Training CC : 0.9213 | Validation CC : 0.9096
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3940e-01 | Validation Loss : 6.8013e-01
Training F1 Macro: 0.7616 | Validation F1 Macro : 0.6881
Training F1 Micro: 0.7548 | Validation F1 Micro : 0.6860
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2873e-02 | Validation Loss : 2.5969e-02
Training CC : 0.9214 | Validation CC : 0.9093
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3328e-01 | Validation Loss : 6.7581e-01
Training F1 Macro: 0.8638 | Validation F1 Macro : 0.6998
Training F1 Micro: 0.8649 | Validation F1 Micro : 0.6960
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2957e-02 | Validation Loss : 2.5851e-02
Training CC : 0.9213 | Validation CC : 0.9097
** Classification Losses **
Training Loss : 3.8367e-01 | Validation Loss : 6.9063e-01
Training F1 Macro: 0.7991 | Validation F1 Macro : 0.7078
Training F1 Micro: 0.8059 | Validation F1 Micro : 0.7060
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2671e-02 | Validation Loss : 2.5845e-02
Training CC : 0.9221 | Validation CC : 0.9099
** Classification Losses **
Training Loss : 3.7772e-01 | Validation Loss : 7.1201e-01
Training F1 Macro: 0.8200 | Validation F1 Macro : 0.6569
Training F1 Micro: 0.8133 | Validation F1 Micro : 0.6560
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2870e-02 | Validation Loss : 2.5693e-02
Training CC : 0.9221 | Validation CC : 0.9105
** Classification Losses **
Training Loss : 3.8927e-01 | Validation Loss : 7.0709e-01
Training F1 Macro: 0.8143 | Validation F1 Macro : 0.6826
Training F1 Micro: 0.8133 | Validation F1 Micro : 0.6820
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2603e-02 | Validation Loss : 2.5643e-02
Training CC : 0.9228 | Validation CC : 0.9105
** Classification Losses **
Training Loss : 3.5564e-01 | Validation Loss : 6.9926e-01
Training F1 Macro: 0.8297 | Validation F1 Macro : 0.6868
Training F1 Micro: 0.8250 | Validation F1 Micro : 0.6820
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3013e-02 | Validation Loss : 2.5640e-02
Training CC : 0.9221 | Validation CC : 0.9104
** Classification Losses **
Training Loss : 3.4489e-01 | Validation Loss : 7.4056e-01
Training F1 Macro: 0.8302 | Validation F1 Macro : 0.6606
Training F1 Micro: 0.8238 | Validation F1 Micro : 0.6600
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2391e-02 | Validation Loss : 2.5664e-02
Training CC : 0.9233 | Validation CC : 0.9103
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6841e-01 | Validation Loss : 7.6039e-01
Training F1 Macro: 0.8304 | Validation F1 Macro : 0.6684
Training F1 Micro: 0.8312 | Validation F1 Micro : 0.6660
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2358e-02 | Validation Loss : 2.5668e-02
Training CC : 0.9233 | Validation CC : 0.9103
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9732e-01 | Validation Loss : 7.2494e-01
Training F1 Macro: 0.8087 | Validation F1 Macro : 0.6757
Training F1 Micro: 0.8007 | Validation F1 Micro : 0.6740
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2761e-02 | Validation Loss : 2.5702e-02
Training CC : 0.9226 | Validation CC : 0.9102
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6008e-01 | Validation Loss : 6.8667e-01
Training F1 Macro: 0.8002 | Validation F1 Macro : 0.6990
Training F1 Micro: 0.7893 | Validation F1 Micro : 0.6920
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2754e-02 | Validation Loss : 2.5753e-02
Training CC : 0.9224 | Validation CC : 0.9100
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4834e-01 | Validation Loss : 6.6372e-01
Training F1 Macro: 0.8294 | Validation F1 Macro : 0.7160
Training F1 Micro: 0.8239 | Validation F1 Micro : 0.7120
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2723e-02 | Validation Loss : 2.5790e-02
Training CC : 0.9224 | Validation CC : 0.9098
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1404e-01 | Validation Loss : 7.1661e-01
Training F1 Macro: 0.8166 | Validation F1 Macro : 0.6611
Training F1 Micro: 0.8019 | Validation F1 Micro : 0.6540
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3061e-02 | Validation Loss : 2.5732e-02
Training CC : 0.9220 | Validation CC : 0.9106
** Classification Losses **
Training Loss : 3.0518e-01 | Validation Loss : 6.8744e-01
Training F1 Macro: 0.8599 | Validation F1 Macro : 0.7005
Training F1 Micro: 0.8472 | Validation F1 Micro : 0.6940
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2385e-02 | Validation Loss : 2.5616e-02
Training CC : 0.9233 | Validation CC : 0.9105
** Classification Losses **
Training Loss : 3.8298e-01 | Validation Loss : 7.4499e-01
Training F1 Macro: 0.8155 | Validation F1 Macro : 0.6592
Training F1 Micro: 0.8053 | Validation F1 Micro : 0.6580
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2120e-02 | Validation Loss : 2.5567e-02
Training CC : 0.9242 | Validation CC : 0.9110
** Classification Losses **
Training Loss : 4.1748e-01 | Validation Loss : 7.2154e-01
Training F1 Macro: 0.7798 | Validation F1 Macro : 0.6986
Training F1 Micro: 0.7756 | Validation F1 Micro : 0.6940
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2409e-02 | Validation Loss : 2.5589e-02
Training CC : 0.9237 | Validation CC : 0.9107
** Classification Losses **
Training Loss : 4.0631e-01 | Validation Loss : 7.1697e-01
Training F1 Macro: 0.7817 | Validation F1 Macro : 0.6745
Training F1 Micro: 0.7816 | Validation F1 Micro : 0.6700
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2253e-02 | Validation Loss : 2.5647e-02
Training CC : 0.9241 | Validation CC : 0.9105
** Classification Losses **
Training Loss : 4.3359e-01 | Validation Loss : 7.0499e-01
Training F1 Macro: 0.7928 | Validation F1 Macro : 0.7013
Training F1 Micro: 0.7802 | Validation F1 Micro : 0.6960
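In the run above, the `<---- Now Optimizing` marker alternates per epoch: odd epochs update the autoencoding objective while the classification losses are only monitored, and even epochs do the reverse. A minimal sketch of that schedule (the function name and return format are illustrative assumptions, not pyMSDtorch's training API):

```python
def alternating_schedule(num_epochs):
    """Yield (epoch, objective) pairs, alternating which loss family is
    optimized each epoch, matching the '<---- Now Optimizing' pattern in
    the log: odd epochs autoencoding, even epochs classification."""
    for epoch in range(1, num_epochs + 1):
        objective = "autoencoding" if epoch % 2 == 1 else "classification"
        yield epoch, objective

# First four epochs of the schedule the log exhibits:
print(list(alternating_schedule(4)))
# -> [(1, 'autoencoding'), (2, 'classification'),
#     (3, 'autoencoding'), (4, 'classification')]
```

During a mini epoch only the currently selected head's loss would be backpropagated; the other head's loss and F1/CC scores are still evaluated for reporting, which is why both blocks appear in every record above.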
Epoch 1, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0197e-01 | Validation Loss : 2.3149e-01
Training CC : 0.3489 | Validation CC : 0.5917
** Classification Losses **
Training Loss : 1.4605e+00 | Validation Loss : 1.4843e+00
Training F1 Macro: 0.1911 | Validation F1 Macro : 0.1914
Training F1 Micro: 0.2167 | Validation F1 Micro : 0.2060
Epoch 1, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.0277e-01 | Validation Loss : 1.6424e-01
Training CC : 0.6549 | Validation CC : 0.6979
** Classification Losses **
Training Loss : 1.4900e+00 | Validation Loss : 1.4702e+00
Training F1 Macro: 0.1500 | Validation F1 Macro : 0.2054
Training F1 Micro: 0.1657 | Validation F1 Micro : 0.2220
Epoch 1, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.4551e-01 | Validation Loss : 1.1988e-01
Training CC : 0.7304 | Validation CC : 0.7541
** Classification Losses **
Training Loss : 1.4602e+00 | Validation Loss : 1.4715e+00
Training F1 Macro: 0.2057 | Validation F1 Macro : 0.1968
Training F1 Micro: 0.2180 | Validation F1 Micro : 0.2080
Epoch 1, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 1.0722e-01 | Validation Loss : 8.9729e-02
Training CC : 0.7739 | Validation CC : 0.7857
** Classification Losses **
Training Loss : 1.4445e+00 | Validation Loss : 1.4737e+00
Training F1 Macro: 0.2283 | Validation F1 Macro : 0.2234
Training F1 Micro: 0.2279 | Validation F1 Micro : 0.2360
Epoch 1, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 8.1181e-02 | Validation Loss : 6.9472e-02
Training CC : 0.8021 | Validation CC : 0.8107
** Classification Losses **
Training Loss : 1.4491e+00 | Validation Loss : 1.4541e+00
Training F1 Macro: 0.2068 | Validation F1 Macro : 0.2342
Training F1 Micro: 0.2156 | Validation F1 Micro : 0.2480
Epoch 2, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.0380e-02 | Validation Loss : 7.2687e-02
Training CC : 0.8125 | Validation CC : 0.7988
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.3271e+00 | Validation Loss : 1.1621e+00
Training F1 Macro: 0.3639 | Validation F1 Macro : 0.5758
Training F1 Micro: 0.3813 | Validation F1 Micro : 0.5940
Epoch 2, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.3132e-02 | Validation Loss : 7.4078e-02
Training CC : 0.8021 | Validation CC : 0.7944
** Classification Losses ** <---- Now Optimizing
Training Loss : 1.0255e+00 | Validation Loss : 1.0084e+00
Training F1 Macro: 0.6935 | Validation F1 Macro : 0.6672
Training F1 Micro: 0.6962 | Validation F1 Micro : 0.6740
Epoch 2, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.4071e-02 | Validation Loss : 7.4281e-02
Training CC : 0.7997 | Validation CC : 0.7941
** Classification Losses ** <---- Now Optimizing
Training Loss : 8.8319e-01 | Validation Loss : 9.1640e-01
Training F1 Macro: 0.6904 | Validation F1 Macro : 0.6688
Training F1 Micro: 0.6982 | Validation F1 Micro : 0.6800
Epoch 2, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.4746e-02 | Validation Loss : 7.5091e-02
Training CC : 0.7979 | Validation CC : 0.7909
** Classification Losses ** <---- Now Optimizing
Training Loss : 7.7825e-01 | Validation Loss : 8.3387e-01
Training F1 Macro: 0.7179 | Validation F1 Macro : 0.6894
Training F1 Micro: 0.7262 | Validation F1 Micro : 0.6960
Epoch 2, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 7.5291e-02 | Validation Loss : 7.5628e-02
Training CC : 0.7950 | Validation CC : 0.7887
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.5337e-01 | Validation Loss : 7.8903e-01
Training F1 Macro: 0.7657 | Validation F1 Macro : 0.7095
Training F1 Micro: 0.7635 | Validation F1 Micro : 0.7120
Epoch 3, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 6.7588e-02 | Validation Loss : 5.7769e-02
Training CC : 0.8110 | Validation CC : 0.8229
** Classification Losses **
Training Loss : 6.0802e-01 | Validation Loss : 7.8866e-01
Training F1 Macro: 0.8111 | Validation F1 Macro : 0.6816
Training F1 Micro: 0.8094 | Validation F1 Micro : 0.6840
Epoch 3, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 5.3393e-02 | Validation Loss : 4.8344e-02
Training CC : 0.8358 | Validation CC : 0.8402
** Classification Losses **
Training Loss : 6.6480e-01 | Validation Loss : 8.2873e-01
Training F1 Macro: 0.7412 | Validation F1 Macro : 0.6540
Training F1 Micro: 0.7526 | Validation F1 Micro : 0.6620
Epoch 3, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 4.5314e-02 | Validation Loss : 4.2263e-02
Training CC : 0.8510 | Validation CC : 0.8539
** Classification Losses **
Training Loss : 7.0259e-01 | Validation Loss : 8.2725e-01
Training F1 Macro: 0.7183 | Validation F1 Macro : 0.6823
Training F1 Micro: 0.7329 | Validation F1 Micro : 0.6860
Epoch 3, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.9825e-02 | Validation Loss : 3.8743e-02
Training CC : 0.8637 | Validation CC : 0.8625
** Classification Losses **
Training Loss : 6.6228e-01 | Validation Loss : 8.1689e-01
Training F1 Macro: 0.7491 | Validation F1 Macro : 0.6818
Training F1 Micro: 0.7564 | Validation F1 Micro : 0.6880
Epoch 3, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.6840e-02 | Validation Loss : 3.6632e-02
Training CC : 0.8710 | Validation CC : 0.8693
** Classification Losses **
Training Loss : 6.7440e-01 | Validation Loss : 8.4142e-01
Training F1 Macro: 0.7661 | Validation F1 Macro : 0.6724
Training F1 Micro: 0.7562 | Validation F1 Micro : 0.6760
Epoch 4, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.5820e-02 | Validation Loss : 3.6824e-02
Training CC : 0.8744 | Validation CC : 0.8684
** Classification Losses ** <---- Now Optimizing
Training Loss : 6.8808e-01 | Validation Loss : 7.3805e-01
Training F1 Macro: 0.7464 | Validation F1 Macro : 0.7388
Training F1 Micro: 0.7413 | Validation F1 Micro : 0.7400
Epoch 4, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.6164e-02 | Validation Loss : 3.8169e-02
Training CC : 0.8725 | Validation CC : 0.8632
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.9779e-01 | Validation Loss : 6.3317e-01
Training F1 Macro: 0.8334 | Validation F1 Macro : 0.7853
Training F1 Micro: 0.8377 | Validation F1 Micro : 0.7880
Epoch 4, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.7831e-02 | Validation Loss : 3.9972e-02
Training CC : 0.8663 | Validation CC : 0.8563
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.0947e-01 | Validation Loss : 6.4918e-01
Training F1 Macro: 0.7709 | Validation F1 Macro : 0.7541
Training F1 Micro: 0.7926 | Validation F1 Micro : 0.7560
Epoch 4, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9445e-02 | Validation Loss : 4.0841e-02
Training CC : 0.8605 | Validation CC : 0.8529
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7208e-01 | Validation Loss : 6.3017e-01
Training F1 Macro: 0.8170 | Validation F1 Macro : 0.7388
Training F1 Micro: 0.8211 | Validation F1 Micro : 0.7420
Epoch 4, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.9783e-02 | Validation Loss : 4.0886e-02
Training CC : 0.8588 | Validation CC : 0.8526
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.0036e-01 | Validation Loss : 5.9238e-01
Training F1 Macro: 0.7579 | Validation F1 Macro : 0.7381
Training F1 Micro: 0.7655 | Validation F1 Micro : 0.7440
Epoch 5, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.7908e-02 | Validation Loss : 3.7267e-02
Training CC : 0.8667 | Validation CC : 0.8668
** Classification Losses **
Training Loss : 4.8371e-01 | Validation Loss : 5.7448e-01
Training F1 Macro: 0.7412 | Validation F1 Macro : 0.7501
Training F1 Micro: 0.7555 | Validation F1 Micro : 0.7560
Epoch 5, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.5606e-02 | Validation Loss : 3.5508e-02
Training CC : 0.8762 | Validation CC : 0.8737
** Classification Losses **
Training Loss : 4.1504e-01 | Validation Loss : 5.7683e-01
Training F1 Macro: 0.7829 | Validation F1 Macro : 0.7749
Training F1 Micro: 0.7997 | Validation F1 Micro : 0.7760
Epoch 5, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.4164e-02 | Validation Loss : 3.4250e-02
Training CC : 0.8811 | Validation CC : 0.8784
** Classification Losses **
Training Loss : 5.0010e-01 | Validation Loss : 5.9049e-01
Training F1 Macro: 0.7612 | Validation F1 Macro : 0.7614
Training F1 Micro: 0.7668 | Validation F1 Micro : 0.7680
Epoch 5, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2560e-02 | Validation Loss : 3.3261e-02
Training CC : 0.8862 | Validation CC : 0.8820
** Classification Losses **
Training Loss : 4.6173e-01 | Validation Loss : 6.6362e-01
Training F1 Macro: 0.8019 | Validation F1 Macro : 0.7121
Training F1 Micro: 0.8065 | Validation F1 Micro : 0.7120
Epoch 5, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1465e-02 | Validation Loss : 3.2422e-02
Training CC : 0.8899 | Validation CC : 0.8852
** Classification Losses **
Training Loss : 4.6650e-01 | Validation Loss : 6.4319e-01
Training F1 Macro: 0.8052 | Validation F1 Macro : 0.7295
Training F1 Micro: 0.8100 | Validation F1 Micro : 0.7300
Epoch 6, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1462e-02 | Validation Loss : 3.2472e-02
Training CC : 0.8907 | Validation CC : 0.8849
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2767e-01 | Validation Loss : 6.0798e-01
Training F1 Macro: 0.8316 | Validation F1 Macro : 0.7443
Training F1 Micro: 0.8215 | Validation F1 Micro : 0.7440
Epoch 6, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1099e-02 | Validation Loss : 3.2759e-02
Training CC : 0.8912 | Validation CC : 0.8838
** Classification Losses ** <---- Now Optimizing
Training Loss : 5.0117e-01 | Validation Loss : 5.2688e-01
Training F1 Macro: 0.7482 | Validation F1 Macro : 0.7836
Training F1 Micro: 0.7511 | Validation F1 Micro : 0.7900
Epoch 6, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.1998e-02 | Validation Loss : 3.3335e-02
Training CC : 0.8890 | Validation CC : 0.8816
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5583e-01 | Validation Loss : 5.7787e-01
Training F1 Macro: 0.8394 | Validation F1 Macro : 0.7407
Training F1 Micro: 0.8390 | Validation F1 Micro : 0.7460
Epoch 6, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.2205e-02 | Validation Loss : 3.4036e-02
Training CC : 0.8873 | Validation CC : 0.8789
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1427e-01 | Validation Loss : 5.4280e-01
Training F1 Macro: 0.7708 | Validation F1 Macro : 0.7920
Training F1 Micro: 0.7637 | Validation F1 Micro : 0.7880
Epoch 6, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 3.3160e-02 | Validation Loss : 3.4356e-02
Training CC : 0.8846 | Validation CC : 0.8777
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3473e-01 | Validation Loss : 5.4040e-01
Training F1 Macro: 0.7957 | Validation F1 Macro : 0.7519
Training F1 Micro: 0.7942 | Validation F1 Micro : 0.7540
Epoch 7, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.2036e-02 | Validation Loss : 3.2585e-02
Training CC : 0.8883 | Validation CC : 0.8852
** Classification Losses **
Training Loss : 3.6082e-01 | Validation Loss : 5.5782e-01
Training F1 Macro: 0.8307 | Validation F1 Macro : 0.7494
Training F1 Micro: 0.8319 | Validation F1 Micro : 0.7460
Epoch 7, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.1094e-02 | Validation Loss : 3.1647e-02
Training CC : 0.8922 | Validation CC : 0.8879
** Classification Losses **
Training Loss : 4.0238e-01 | Validation Loss : 5.8465e-01
Training F1 Macro: 0.7793 | Validation F1 Macro : 0.7430
Training F1 Micro: 0.8168 | Validation F1 Micro : 0.7400
Epoch 7, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 3.0182e-02 | Validation Loss : 3.1152e-02
Training CC : 0.8951 | Validation CC : 0.8899
** Classification Losses **
Training Loss : 4.1159e-01 | Validation Loss : 6.1185e-01
Training F1 Macro: 0.7708 | Validation F1 Macro : 0.6954
Training F1 Micro: 0.7678 | Validation F1 Micro : 0.7000
Epoch 7, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9477e-02 | Validation Loss : 3.0550e-02
Training CC : 0.8975 | Validation CC : 0.8921
** Classification Losses **
Training Loss : 3.5576e-01 | Validation Loss : 5.3233e-01
Training F1 Macro: 0.8142 | Validation F1 Macro : 0.7746
Training F1 Micro: 0.8099 | Validation F1 Micro : 0.7720
Epoch 7, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8813e-02 | Validation Loss : 3.0108e-02
Training CC : 0.8997 | Validation CC : 0.8937
** Classification Losses **
Training Loss : 4.1480e-01 | Validation Loss : 5.5260e-01
Training F1 Macro: 0.7874 | Validation F1 Macro : 0.7648
Training F1 Micro: 0.7830 | Validation F1 Micro : 0.7660
Epoch 8, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8684e-02 | Validation Loss : 3.0178e-02
Training CC : 0.9007 | Validation CC : 0.8934
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.4203e-01 | Validation Loss : 5.8121e-01
Training F1 Macro: 0.7870 | Validation F1 Macro : 0.7375
Training F1 Micro: 0.7914 | Validation F1 Micro : 0.7340
Epoch 8, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8682e-02 | Validation Loss : 3.0420e-02
Training CC : 0.9004 | Validation CC : 0.8925
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4058e-01 | Validation Loss : 5.6976e-01
Training F1 Macro: 0.8524 | Validation F1 Macro : 0.7295
Training F1 Micro: 0.8488 | Validation F1 Micro : 0.7260
Epoch 8, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9323e-02 | Validation Loss : 3.0806e-02
Training CC : 0.8988 | Validation CC : 0.8911
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6079e-01 | Validation Loss : 5.7160e-01
Training F1 Macro: 0.8155 | Validation F1 Macro : 0.7441
Training F1 Micro: 0.8137 | Validation F1 Micro : 0.7440
Epoch 8, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9844e-02 | Validation Loss : 3.1258e-02
Training CC : 0.8973 | Validation CC : 0.8894
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9985e-01 | Validation Loss : 5.3679e-01
Training F1 Macro: 0.7729 | Validation F1 Macro : 0.7537
Training F1 Micro: 0.7711 | Validation F1 Micro : 0.7520
Epoch 8, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.9564e-02 | Validation Loss : 3.1642e-02
Training CC : 0.8970 | Validation CC : 0.8879
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9163e-01 | Validation Loss : 5.7624e-01
Training F1 Macro: 0.7897 | Validation F1 Macro : 0.7406
Training F1 Micro: 0.7872 | Validation F1 Micro : 0.7440
Epoch 9, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9504e-02 | Validation Loss : 3.0367e-02
Training CC : 0.8979 | Validation CC : 0.8935
** Classification Losses **
Training Loss : 3.7180e-01 | Validation Loss : 5.4742e-01
Training F1 Macro: 0.8131 | Validation F1 Macro : 0.7398
Training F1 Micro: 0.8224 | Validation F1 Micro : 0.7400
Epoch 9, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.9137e-02 | Validation Loss : 3.0019e-02
Training CC : 0.9002 | Validation CC : 0.8941
** Classification Losses **
Training Loss : 3.9191e-01 | Validation Loss : 5.8978e-01
Training F1 Macro: 0.7793 | Validation F1 Macro : 0.6987
Training F1 Micro: 0.7928 | Validation F1 Micro : 0.7000
Epoch 9, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8721e-02 | Validation Loss : 2.9345e-02
Training CC : 0.9016 | Validation CC : 0.8966
** Classification Losses **
Training Loss : 3.5451e-01 | Validation Loss : 5.7847e-01
Training F1 Macro: 0.8137 | Validation F1 Macro : 0.7267
Training F1 Micro: 0.8212 | Validation F1 Micro : 0.7280
Epoch 9, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7724e-02 | Validation Loss : 2.9264e-02
Training CC : 0.9043 | Validation CC : 0.8976
** Classification Losses **
Training Loss : 4.2295e-01 | Validation Loss : 5.6698e-01
Training F1 Macro: 0.7636 | Validation F1 Macro : 0.7508
Training F1 Micro: 0.7624 | Validation F1 Micro : 0.7520
Epoch 9, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.8206e-02 | Validation Loss : 2.8750e-02
Training CC : 0.9046 | Validation CC : 0.8988
** Classification Losses **
Training Loss : 3.2027e-01 | Validation Loss : 5.7859e-01
Training F1 Macro: 0.8165 | Validation F1 Macro : 0.7273
Training F1 Micro: 0.8234 | Validation F1 Micro : 0.7280
Epoch 10, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7662e-02 | Validation Loss : 2.8916e-02
Training CC : 0.9055 | Validation CC : 0.8982
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3923e-01 | Validation Loss : 5.9353e-01
Training F1 Macro: 0.7546 | Validation F1 Macro : 0.7376
Training F1 Micro: 0.7538 | Validation F1 Micro : 0.7420
Epoch 10, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7214e-02 | Validation Loss : 2.9195e-02
Training CC : 0.9059 | Validation CC : 0.8971
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8852e-01 | Validation Loss : 5.4495e-01
Training F1 Macro: 0.7866 | Validation F1 Macro : 0.7551
Training F1 Micro: 0.7940 | Validation F1 Micro : 0.7560
Epoch 10, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7493e-02 | Validation Loss : 2.9508e-02
Training CC : 0.9049 | Validation CC : 0.8959
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6255e-01 | Validation Loss : 5.6267e-01
Training F1 Macro: 0.8062 | Validation F1 Macro : 0.7327
Training F1 Micro: 0.8097 | Validation F1 Micro : 0.7340
Epoch 10, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.7683e-02 | Validation Loss : 2.9768e-02
Training CC : 0.9041 | Validation CC : 0.8950
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8248e-01 | Validation Loss : 5.3264e-01
Training F1 Macro: 0.7861 | Validation F1 Macro : 0.7611
Training F1 Micro: 0.7948 | Validation F1 Micro : 0.7600
Epoch 10, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.8193e-02 | Validation Loss : 2.9865e-02
Training CC : 0.9028 | Validation CC : 0.8946
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0151e-01 | Validation Loss : 5.5175e-01
Training F1 Macro: 0.7940 | Validation F1 Macro : 0.7472
Training F1 Micro: 0.7879 | Validation F1 Micro : 0.7500
Epoch 11, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7389e-02 | Validation Loss : 2.9050e-02
Training CC : 0.9051 | Validation CC : 0.8987
** Classification Losses **
Training Loss : 3.8516e-01 | Validation Loss : 5.7303e-01
Training F1 Macro: 0.7931 | Validation F1 Macro : 0.7532
Training F1 Micro: 0.7894 | Validation F1 Micro : 0.7520
Epoch 11, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.7043e-02 | Validation Loss : 2.8315e-02
Training CC : 0.9073 | Validation CC : 0.9004
** Classification Losses **
Training Loss : 3.9230e-01 | Validation Loss : 5.6460e-01
Training F1 Macro: 0.8086 | Validation F1 Macro : 0.7298
Training F1 Micro: 0.8106 | Validation F1 Micro : 0.7320
Epoch 11, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6812e-02 | Validation Loss : 2.8030e-02
Training CC : 0.9086 | Validation CC : 0.9018
** Classification Losses **
Training Loss : 4.3189e-01 | Validation Loss : 5.6792e-01
Training F1 Macro: 0.7778 | Validation F1 Macro : 0.7561
Training F1 Micro: 0.7718 | Validation F1 Micro : 0.7600
Epoch 11, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5760e-02 | Validation Loss : 2.7589e-02
Training CC : 0.9108 | Validation CC : 0.9031
** Classification Losses **
Training Loss : 4.7740e-01 | Validation Loss : 6.3915e-01
Training F1 Macro: 0.7678 | Validation F1 Macro : 0.7086
Training F1 Micro: 0.7679 | Validation F1 Micro : 0.7120
Epoch 11, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6015e-02 | Validation Loss : 2.7269e-02
Training CC : 0.9111 | Validation CC : 0.9043
** Classification Losses **
Training Loss : 4.3375e-01 | Validation Loss : 5.5022e-01
Training F1 Macro: 0.7734 | Validation F1 Macro : 0.7407
Training F1 Micro: 0.7589 | Validation F1 Micro : 0.7420
Epoch 12, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5989e-02 | Validation Loss : 2.7357e-02
Training CC : 0.9115 | Validation CC : 0.9040
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2214e-01 | Validation Loss : 5.4939e-01
Training F1 Macro: 0.8344 | Validation F1 Macro : 0.7521
Training F1 Micro: 0.8325 | Validation F1 Micro : 0.7560
Epoch 12, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5354e-02 | Validation Loss : 2.7564e-02
Training CC : 0.9125 | Validation CC : 0.9032
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.4841e-01 | Validation Loss : 5.7282e-01
Training F1 Macro: 0.8364 | Validation F1 Macro : 0.7486
Training F1 Micro: 0.8393 | Validation F1 Micro : 0.7460
Epoch 12, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.5480e-02 | Validation Loss : 2.7844e-02
Training CC : 0.9120 | Validation CC : 0.9022
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5476e-01 | Validation Loss : 5.8279e-01
Training F1 Macro: 0.7604 | Validation F1 Macro : 0.7339
Training F1 Micro: 0.7595 | Validation F1 Micro : 0.7320
Epoch 12, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6030e-02 | Validation Loss : 2.8130e-02
Training CC : 0.9105 | Validation CC : 0.9011
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6017e-01 | Validation Loss : 5.6276e-01
Training F1 Macro: 0.8350 | Validation F1 Macro : 0.7553
Training F1 Micro: 0.8416 | Validation F1 Micro : 0.7600
Epoch 12, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.6058e-02 | Validation Loss : 2.8377e-02
Training CC : 0.9100 | Validation CC : 0.9002
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.3337e-01 | Validation Loss : 5.7398e-01
Training F1 Macro: 0.7421 | Validation F1 Macro : 0.7417
Training F1 Micro: 0.7357 | Validation F1 Micro : 0.7480
Epoch 13, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.6073e-02 | Validation Loss : 2.7467e-02
Training CC : 0.9105 | Validation CC : 0.9040
** Classification Losses **
Training Loss : 4.1260e-01 | Validation Loss : 5.3413e-01
Training F1 Macro: 0.7774 | Validation F1 Macro : 0.7369
Training F1 Micro: 0.7721 | Validation F1 Micro : 0.7460
Epoch 13, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.5260e-02 | Validation Loss : 2.7054e-02
Training CC : 0.9129 | Validation CC : 0.9052
** Classification Losses **
Training Loss : 3.2516e-01 | Validation Loss : 5.3940e-01
Training F1 Macro: 0.8232 | Validation F1 Macro : 0.7491
Training F1 Micro: 0.8166 | Validation F1 Micro : 0.7560
Epoch 13, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4910e-02 | Validation Loss : 2.6851e-02
Training CC : 0.9141 | Validation CC : 0.9059
** Classification Losses **
Training Loss : 3.4492e-01 | Validation Loss : 6.1759e-01
Training F1 Macro: 0.8049 | Validation F1 Macro : 0.7021
Training F1 Micro: 0.8028 | Validation F1 Micro : 0.7040
Epoch 13, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4723e-02 | Validation Loss : 2.6553e-02
Training CC : 0.9151 | Validation CC : 0.9072
** Classification Losses **
Training Loss : 4.3373e-01 | Validation Loss : 5.9575e-01
Training F1 Macro: 0.7926 | Validation F1 Macro : 0.7307
Training F1 Micro: 0.7905 | Validation F1 Micro : 0.7320
Epoch 13, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4457e-02 | Validation Loss : 2.6266e-02
Training CC : 0.9160 | Validation CC : 0.9081
** Classification Losses **
Training Loss : 3.6132e-01 | Validation Loss : 6.5195e-01
Training F1 Macro: 0.8150 | Validation F1 Macro : 0.7049
Training F1 Micro: 0.8054 | Validation F1 Micro : 0.7080
Epoch 14, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4087e-02 | Validation Loss : 2.6354e-02
Training CC : 0.9170 | Validation CC : 0.9078
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1288e-01 | Validation Loss : 5.7492e-01
Training F1 Macro: 0.7663 | Validation F1 Macro : 0.7515
Training F1 Micro: 0.7758 | Validation F1 Micro : 0.7480
Epoch 14, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4281e-02 | Validation Loss : 2.6580e-02
Training CC : 0.9164 | Validation CC : 0.9069
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.9865e-01 | Validation Loss : 5.9898e-01
Training F1 Macro: 0.7232 | Validation F1 Macro : 0.7165
Training F1 Micro: 0.7167 | Validation F1 Micro : 0.7240
Epoch 14, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4667e-02 | Validation Loss : 2.6841e-02
Training CC : 0.9153 | Validation CC : 0.9059
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8320e-01 | Validation Loss : 5.9755e-01
Training F1 Macro: 0.7719 | Validation F1 Macro : 0.7258
Training F1 Micro: 0.7854 | Validation F1 Micro : 0.7300
Epoch 14, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4651e-02 | Validation Loss : 2.7134e-02
Training CC : 0.9150 | Validation CC : 0.9048
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.5236e-01 | Validation Loss : 5.8327e-01
Training F1 Macro: 0.7488 | Validation F1 Macro : 0.7195
Training F1 Micro: 0.7541 | Validation F1 Micro : 0.7200
Epoch 14, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4943e-02 | Validation Loss : 2.7282e-02
Training CC : 0.9140 | Validation CC : 0.9042
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2296e-01 | Validation Loss : 5.9588e-01
Training F1 Macro: 0.8156 | Validation F1 Macro : 0.7252
Training F1 Micro: 0.8176 | Validation F1 Micro : 0.7220
Epoch 15, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4853e-02 | Validation Loss : 2.6615e-02
Training CC : 0.9146 | Validation CC : 0.9069
** Classification Losses **
Training Loss : 3.7739e-01 | Validation Loss : 6.0207e-01
Training F1 Macro: 0.7903 | Validation F1 Macro : 0.7373
Training F1 Micro: 0.7850 | Validation F1 Micro : 0.7400
Epoch 15, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4511e-02 | Validation Loss : 2.6230e-02
Training CC : 0.9162 | Validation CC : 0.9082
** Classification Losses **
Training Loss : 3.7205e-01 | Validation Loss : 5.7920e-01
Training F1 Macro: 0.7944 | Validation F1 Macro : 0.7345
Training F1 Micro: 0.7964 | Validation F1 Micro : 0.7340
Epoch 15, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.4118e-02 | Validation Loss : 2.6121e-02
Training CC : 0.9174 | Validation CC : 0.9087
** Classification Losses **
Training Loss : 4.2255e-01 | Validation Loss : 6.0762e-01
Training F1 Macro: 0.7710 | Validation F1 Macro : 0.7177
Training F1 Micro: 0.7640 | Validation F1 Micro : 0.7240
Epoch 15, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3849e-02 | Validation Loss : 2.5984e-02
Training CC : 0.9182 | Validation CC : 0.9093
** Classification Losses **
Training Loss : 3.2580e-01 | Validation Loss : 5.8944e-01
Training F1 Macro: 0.8118 | Validation F1 Macro : 0.7086
Training F1 Micro: 0.8348 | Validation F1 Micro : 0.7140
Epoch 15, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3572e-02 | Validation Loss : 2.5803e-02
Training CC : 0.9189 | Validation CC : 0.9099
** Classification Losses **
Training Loss : 3.5069e-01 | Validation Loss : 5.8212e-01
Training F1 Macro: 0.7943 | Validation F1 Macro : 0.7342
Training F1 Micro: 0.7981 | Validation F1 Micro : 0.7420
Epoch 16, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3613e-02 | Validation Loss : 2.5850e-02
Training CC : 0.9191 | Validation CC : 0.9097
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2581e-01 | Validation Loss : 5.6958e-01
Training F1 Macro: 0.7686 | Validation F1 Macro : 0.7466
Training F1 Micro: 0.7764 | Validation F1 Micro : 0.7480
Epoch 16, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3748e-02 | Validation Loss : 2.5979e-02
Training CC : 0.9187 | Validation CC : 0.9092
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5397e-01 | Validation Loss : 5.8500e-01
Training F1 Macro: 0.8246 | Validation F1 Macro : 0.7383
Training F1 Micro: 0.8221 | Validation F1 Micro : 0.7380
Epoch 16, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3640e-02 | Validation Loss : 2.6102e-02
Training CC : 0.9187 | Validation CC : 0.9087
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5209e-01 | Validation Loss : 5.3993e-01
Training F1 Macro: 0.8365 | Validation F1 Macro : 0.7562
Training F1 Micro: 0.8282 | Validation F1 Micro : 0.7500
Epoch 16, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3654e-02 | Validation Loss : 2.6204e-02
Training CC : 0.9185 | Validation CC : 0.9083
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.2306e-01 | Validation Loss : 5.9012e-01
Training F1 Macro: 0.8290 | Validation F1 Macro : 0.7462
Training F1 Micro: 0.8217 | Validation F1 Micro : 0.7360
Epoch 16, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3984e-02 | Validation Loss : 2.6330e-02
Training CC : 0.9177 | Validation CC : 0.9078
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7217e-01 | Validation Loss : 5.5883e-01
Training F1 Macro: 0.7996 | Validation F1 Macro : 0.7526
Training F1 Micro: 0.7946 | Validation F1 Micro : 0.7560
Epoch 17, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3664e-02 | Validation Loss : 2.5872e-02
Training CC : 0.9186 | Validation CC : 0.9095
** Classification Losses **
Training Loss : 4.1892e-01 | Validation Loss : 5.6008e-01
Training F1 Macro: 0.7996 | Validation F1 Macro : 0.7399
Training F1 Micro: 0.7995 | Validation F1 Micro : 0.7400
Epoch 17, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3579e-02 | Validation Loss : 2.5746e-02
Training CC : 0.9194 | Validation CC : 0.9103
** Classification Losses **
Training Loss : 3.8706e-01 | Validation Loss : 5.6554e-01
Training F1 Macro: 0.7938 | Validation F1 Macro : 0.7530
Training F1 Micro: 0.7904 | Validation F1 Micro : 0.7560
Epoch 17, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3274e-02 | Validation Loss : 2.5757e-02
Training CC : 0.9203 | Validation CC : 0.9099
** Classification Losses **
Training Loss : 3.8919e-01 | Validation Loss : 6.1677e-01
Training F1 Macro: 0.7979 | Validation F1 Macro : 0.7204
Training F1 Micro: 0.8018 | Validation F1 Micro : 0.7220
Epoch 17, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3720e-02 | Validation Loss : 2.5585e-02
Training CC : 0.9196 | Validation CC : 0.9107
** Classification Losses **
Training Loss : 3.5886e-01 | Validation Loss : 5.9146e-01
Training F1 Macro: 0.8076 | Validation F1 Macro : 0.7309
Training F1 Micro: 0.8005 | Validation F1 Micro : 0.7340
Epoch 17, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3193e-02 | Validation Loss : 2.5262e-02
Training CC : 0.9208 | Validation CC : 0.9118
** Classification Losses **
Training Loss : 4.2617e-01 | Validation Loss : 6.1443e-01
Training F1 Macro: 0.7662 | Validation F1 Macro : 0.7182
Training F1 Micro: 0.7765 | Validation F1 Micro : 0.7140
Epoch 18, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2644e-02 | Validation Loss : 2.5364e-02
Training CC : 0.9221 | Validation CC : 0.9114
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.0901e-01 | Validation Loss : 5.9489e-01
Training F1 Macro: 0.8669 | Validation F1 Macro : 0.7396
Training F1 Micro: 0.8633 | Validation F1 Micro : 0.7420
Epoch 18, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2811e-02 | Validation Loss : 2.5575e-02
Training CC : 0.9217 | Validation CC : 0.9106
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0685e-01 | Validation Loss : 5.7805e-01
Training F1 Macro: 0.7838 | Validation F1 Macro : 0.7359
Training F1 Micro: 0.7867 | Validation F1 Micro : 0.7380
Epoch 18, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2928e-02 | Validation Loss : 2.5910e-02
Training CC : 0.9211 | Validation CC : 0.9093
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0947e-01 | Validation Loss : 5.9880e-01
Training F1 Macro: 0.7655 | Validation F1 Macro : 0.7152
Training F1 Micro: 0.7587 | Validation F1 Micro : 0.7180
Epoch 18, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.3467e-02 | Validation Loss : 2.6437e-02
Training CC : 0.9195 | Validation CC : 0.9074
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1168e-01 | Validation Loss : 5.8386e-01
Training F1 Macro: 0.7726 | Validation F1 Macro : 0.7352
Training F1 Micro: 0.7732 | Validation F1 Micro : 0.7380
Epoch 18, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.4003e-02 | Validation Loss : 2.6869e-02
Training CC : 0.9175 | Validation CC : 0.9057
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3369e-01 | Validation Loss : 5.8088e-01
Training F1 Macro: 0.8248 | Validation F1 Macro : 0.7394
Training F1 Micro: 0.8259 | Validation F1 Micro : 0.7440
Epoch 19, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3565e-02 | Validation Loss : 2.5969e-02
Training CC : 0.9190 | Validation CC : 0.9096
** Classification Losses **
Training Loss : 4.7856e-01 | Validation Loss : 5.5479e-01
Training F1 Macro: 0.7504 | Validation F1 Macro : 0.7659
Training F1 Micro: 0.7549 | Validation F1 Micro : 0.7640
Epoch 19, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3875e-02 | Validation Loss : 2.6010e-02
Training CC : 0.9196 | Validation CC : 0.9096
** Classification Losses **
Training Loss : 4.4531e-01 | Validation Loss : 5.9607e-01
Training F1 Macro: 0.7422 | Validation F1 Macro : 0.7369
Training F1 Micro: 0.7695 | Validation F1 Micro : 0.7380
Epoch 19, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3655e-02 | Validation Loss : 2.5316e-02
Training CC : 0.9201 | Validation CC : 0.9116
** Classification Losses **
Training Loss : 4.1881e-01 | Validation Loss : 6.2135e-01
Training F1 Macro: 0.7593 | Validation F1 Macro : 0.7008
Training F1 Micro: 0.7664 | Validation F1 Micro : 0.7040
Epoch 19, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2818e-02 | Validation Loss : 2.5226e-02
Training CC : 0.9220 | Validation CC : 0.9121
** Classification Losses **
Training Loss : 4.2854e-01 | Validation Loss : 5.5254e-01
Training F1 Macro: 0.7875 | Validation F1 Macro : 0.7561
Training F1 Micro: 0.7902 | Validation F1 Micro : 0.7580
Epoch 19, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2424e-02 | Validation Loss : 2.5003e-02
Training CC : 0.9231 | Validation CC : 0.9127
** Classification Losses **
Training Loss : 3.9152e-01 | Validation Loss : 6.1602e-01
Training F1 Macro: 0.7799 | Validation F1 Macro : 0.7036
Training F1 Micro: 0.7818 | Validation F1 Micro : 0.7080
Epoch 20, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2311e-02 | Validation Loss : 2.5099e-02
Training CC : 0.9235 | Validation CC : 0.9124
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9169e-01 | Validation Loss : 6.0477e-01
Training F1 Macro: 0.7948 | Validation F1 Macro : 0.7280
Training F1 Micro: 0.7956 | Validation F1 Micro : 0.7320
Epoch 20, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2334e-02 | Validation Loss : 2.5213e-02
Training CC : 0.9233 | Validation CC : 0.9120
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0285e-01 | Validation Loss : 6.3698e-01
Training F1 Macro: 0.7737 | Validation F1 Macro : 0.6868
Training F1 Micro: 0.7862 | Validation F1 Micro : 0.6900
Epoch 20, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2598e-02 | Validation Loss : 2.5340e-02
Training CC : 0.9226 | Validation CC : 0.9115
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.9110e-01 | Validation Loss : 5.5746e-01
Training F1 Macro: 0.7975 | Validation F1 Macro : 0.7283
Training F1 Micro: 0.8000 | Validation F1 Micro : 0.7300
Epoch 20, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2841e-02 | Validation Loss : 2.5441e-02
Training CC : 0.9219 | Validation CC : 0.9111
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.5420e-01 | Validation Loss : 5.6262e-01
Training F1 Macro: 0.8187 | Validation F1 Macro : 0.7247
Training F1 Micro: 0.8175 | Validation F1 Micro : 0.7320
Epoch 20, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2634e-02 | Validation Loss : 2.5474e-02
Training CC : 0.9222 | Validation CC : 0.9110
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.7546e-01 | Validation Loss : 5.8279e-01
Training F1 Macro: 0.7743 | Validation F1 Macro : 0.7118
Training F1 Micro: 0.7712 | Validation F1 Micro : 0.7160
Epoch 21, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2914e-02 | Validation Loss : 2.5272e-02
Training CC : 0.9219 | Validation CC : 0.9117
** Classification Losses **
Training Loss : 3.2307e-01 | Validation Loss : 5.8925e-01
Training F1 Macro: 0.8328 | Validation F1 Macro : 0.7242
Training F1 Micro: 0.8364 | Validation F1 Micro : 0.7260
Epoch 21, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2296e-02 | Validation Loss : 2.5089e-02
Training CC : 0.9235 | Validation CC : 0.9125
** Classification Losses **
Training Loss : 3.4980e-01 | Validation Loss : 6.5725e-01
Training F1 Macro: 0.8176 | Validation F1 Macro : 0.6686
Training F1 Micro: 0.8278 | Validation F1 Micro : 0.6720
Epoch 21, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2895e-02 | Validation Loss : 2.4877e-02
Training CC : 0.9228 | Validation CC : 0.9133
** Classification Losses **
Training Loss : 3.3494e-01 | Validation Loss : 6.6410e-01
Training F1 Macro: 0.7934 | Validation F1 Macro : 0.6833
Training F1 Micro: 0.8029 | Validation F1 Micro : 0.6820
Epoch 21, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1946e-02 | Validation Loss : 2.4871e-02
Training CC : 0.9247 | Validation CC : 0.9134
** Classification Losses **
Training Loss : 3.4303e-01 | Validation Loss : 5.9066e-01
Training F1 Macro: 0.8213 | Validation F1 Macro : 0.7236
Training F1 Micro: 0.8290 | Validation F1 Micro : 0.7280
Epoch 21, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2240e-02 | Validation Loss : 2.5013e-02
Training CC : 0.9245 | Validation CC : 0.9130
** Classification Losses **
Training Loss : 4.0452e-01 | Validation Loss : 5.6541e-01
Training F1 Macro: 0.7742 | Validation F1 Macro : 0.7399
Training F1 Micro: 0.7915 | Validation F1 Micro : 0.7440
Epoch 22, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1967e-02 | Validation Loss : 2.5087e-02
Training CC : 0.9247 | Validation CC : 0.9127
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1840e-01 | Validation Loss : 6.4048e-01
Training F1 Macro: 0.7929 | Validation F1 Macro : 0.7100
Training F1 Micro: 0.7910 | Validation F1 Micro : 0.7100
Epoch 22, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.1935e-02 | Validation Loss : 2.5163e-02
Training CC : 0.9246 | Validation CC : 0.9124
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6778e-01 | Validation Loss : 6.3753e-01
Training F1 Macro: 0.8268 | Validation F1 Macro : 0.6820
Training F1 Micro: 0.8257 | Validation F1 Micro : 0.6940
Epoch 22, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2107e-02 | Validation Loss : 2.5249e-02
Training CC : 0.9242 | Validation CC : 0.9121
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.7901e-01 | Validation Loss : 6.1646e-01
Training F1 Macro: 0.7844 | Validation F1 Macro : 0.7042
Training F1 Micro: 0.7881 | Validation F1 Micro : 0.6980
Epoch 22, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2553e-02 | Validation Loss : 2.5416e-02
Training CC : 0.9232 | Validation CC : 0.9114
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.3607e-01 | Validation Loss : 5.7763e-01
Training F1 Macro: 0.8420 | Validation F1 Macro : 0.7309
Training F1 Micro: 0.8319 | Validation F1 Micro : 0.7240
Epoch 22, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2406e-02 | Validation Loss : 2.5654e-02
Training CC : 0.9231 | Validation CC : 0.9105
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.1586e-01 | Validation Loss : 5.9739e-01
Training F1 Macro: 0.8552 | Validation F1 Macro : 0.7162
Training F1 Micro: 0.8444 | Validation F1 Micro : 0.7080
Epoch 23, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2377e-02 | Validation Loss : 2.5133e-02
Training CC : 0.9234 | Validation CC : 0.9124
** Classification Losses **
Training Loss : 4.0771e-01 | Validation Loss : 5.8509e-01
Training F1 Macro: 0.8009 | Validation F1 Macro : 0.7322
Training F1 Micro: 0.7929 | Validation F1 Micro : 0.7240
Epoch 23, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2212e-02 | Validation Loss : 2.4984e-02
Training CC : 0.9243 | Validation CC : 0.9132
** Classification Losses **
Training Loss : 3.8192e-01 | Validation Loss : 6.2512e-01
Training F1 Macro: 0.8153 | Validation F1 Macro : 0.7167
Training F1 Micro: 0.8067 | Validation F1 Micro : 0.7120
Epoch 23, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1892e-02 | Validation Loss : 2.4682e-02
Training CC : 0.9252 | Validation CC : 0.9139
** Classification Losses **
Training Loss : 3.3353e-01 | Validation Loss : 5.9043e-01
Training F1 Macro: 0.8218 | Validation F1 Macro : 0.7290
Training F1 Micro: 0.8309 | Validation F1 Micro : 0.7200
Epoch 23, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.3166e-02 | Validation Loss : 2.5073e-02
Training CC : 0.9232 | Validation CC : 0.9124
** Classification Losses **
Training Loss : 3.6060e-01 | Validation Loss : 5.7179e-01
Training F1 Macro: 0.8320 | Validation F1 Macro : 0.7448
Training F1 Micro: 0.8298 | Validation F1 Micro : 0.7360
Epoch 23, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2762e-02 | Validation Loss : 2.5596e-02
Training CC : 0.9235 | Validation CC : 0.9114
** Classification Losses **
Training Loss : 3.8539e-01 | Validation Loss : 6.3338e-01
Training F1 Macro: 0.8018 | Validation F1 Macro : 0.7057
Training F1 Micro: 0.8036 | Validation F1 Micro : 0.7040
Epoch 24, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2463e-02 | Validation Loss : 2.5605e-02
Training CC : 0.9232 | Validation CC : 0.9113
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.2555e-01 | Validation Loss : 6.2639e-01
Training F1 Macro: 0.7901 | Validation F1 Macro : 0.7127
Training F1 Micro: 0.7789 | Validation F1 Micro : 0.7060
Epoch 24, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2465e-02 | Validation Loss : 2.5677e-02
Training CC : 0.9232 | Validation CC : 0.9110
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.8771e-01 | Validation Loss : 5.9812e-01
Training F1 Macro: 0.8100 | Validation F1 Macro : 0.7108
Training F1 Micro: 0.7974 | Validation F1 Micro : 0.7020
Epoch 24, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2506e-02 | Validation Loss : 2.5757e-02
Training CC : 0.9230 | Validation CC : 0.9106
** Classification Losses ** <---- Now Optimizing
Training Loss : 3.6696e-01 | Validation Loss : 6.4113e-01
Training F1 Macro: 0.7950 | Validation F1 Macro : 0.7077
Training F1 Micro: 0.7935 | Validation F1 Micro : 0.7000
Epoch 24, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2631e-02 | Validation Loss : 2.5856e-02
Training CC : 0.9226 | Validation CC : 0.9102
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.0021e-01 | Validation Loss : 6.3366e-01
Training F1 Macro: 0.8167 | Validation F1 Macro : 0.7177
Training F1 Micro: 0.8101 | Validation F1 Micro : 0.7120
Epoch 24, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses **
Training Loss : 2.2723e-02 | Validation Loss : 2.5916e-02
Training CC : 0.9223 | Validation CC : 0.9099
** Classification Losses ** <---- Now Optimizing
Training Loss : 4.1395e-01 | Validation Loss : 6.5980e-01
Training F1 Macro: 0.7914 | Validation F1 Macro : 0.6846
Training F1 Micro: 0.7793 | Validation F1 Micro : 0.6760
Epoch 25, of 25 >-*-< Mini Epoch 1 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2519e-02 | Validation Loss : 2.5346e-02
Training CC : 0.9232 | Validation CC : 0.9118
** Classification Losses **
Training Loss : 4.3537e-01 | Validation Loss : 5.8877e-01
Training F1 Macro: 0.7633 | Validation F1 Macro : 0.7264
Training F1 Micro: 0.7626 | Validation F1 Micro : 0.7180
Epoch 25, of 25 >-*-< Mini Epoch 2 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.2200e-02 | Validation Loss : 2.5005e-02
Training CC : 0.9243 | Validation CC : 0.9128
** Classification Losses **
Training Loss : 3.7159e-01 | Validation Loss : 6.0019e-01
Training F1 Macro: 0.8473 | Validation F1 Macro : 0.7259
Training F1 Micro: 0.8380 | Validation F1 Micro : 0.7240
Epoch 25, of 25 >-*-< Mini Epoch 3 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1534e-02 | Validation Loss : 2.4663e-02
Training CC : 0.9262 | Validation CC : 0.9139
** Classification Losses **
Training Loss : 3.4443e-01 | Validation Loss : 6.3834e-01
Training F1 Macro: 0.8323 | Validation F1 Macro : 0.7071
Training F1 Micro: 0.8182 | Validation F1 Micro : 0.7060
Epoch 25, of 25 >-*-< Mini Epoch 4 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1617e-02 | Validation Loss : 2.4612e-02
Training CC : 0.9265 | Validation CC : 0.9144
** Classification Losses **
Training Loss : 4.1844e-01 | Validation Loss : 6.9838e-01
Training F1 Macro: 0.7962 | Validation F1 Macro : 0.6840
Training F1 Micro: 0.7883 | Validation F1 Micro : 0.6780
Epoch 25, of 25 >-*-< Mini Epoch 5 of 5 >-*-< Learning rate 1.000e-03
** Autoencoding Losses ** <---- Now Optimizing
Training Loss : 2.1830e-02 | Validation Loss : 2.4502e-02
Training CC : 0.9264 | Validation CC : 0.9147
** Classification Losses **
Training Loss : 4.5044e-01 | Validation Loss : 6.1236e-01
Training F1 Macro: 0.7603 | Validation F1 Macro : 0.7162
Training F1 Micro: 0.7709 | Validation F1 Micro : 0.7140
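The log above alternates between optimizing the autoencoding loss and the classification loss, switching objective between mini epochs (note the "Now Optimizing" marker). A minimal sketch of such an alternating schedule, using a toy model rather than the actual pyMSDtorch training script (all names below are illustrative):

```python
import torch
import torch.nn as nn

# Toy joint model: a shared encoder feeding a reconstruction head
# and a classification head. Not the pyMSDtorch API.
class ToyAutoencoderClassifier(nn.Module):
    def __init__(self, dim=16, n_classes=4):
        super().__init__()
        self.encoder = nn.Linear(dim, 8)
        self.decoder = nn.Linear(8, dim)
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = ToyAutoencoderClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.randn(32, 16)
y = torch.randint(0, 4, (32,))

losses = []
for epoch in range(4):
    # Even epochs optimize the autoencoding loss, odd epochs the
    # classification loss -- mirroring the "Now Optimizing" marker above.
    for _ in range(5):  # mini epochs
        opt.zero_grad()
        recon, logits = model(x)
        loss = mse(recon, x) if epoch % 2 == 0 else ce(logits, y)
        loss.backward()
        opt.step()
    losses.append(float(loss))
```

Alternating objectives like this lets the shared encoder serve both tasks without one loss swamping the other in a single weighted sum.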
We pass over the test data again and collect the results so we can inspect what is happening.
[7]:
bagged_model = baggins.autoencoder_labeling_model_baggin(autoencoders)
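The bagged model aggregates the predictions of the individual autoencoders in the ensemble and reports their spread. A minimal sketch of that idea with plain tensor averaging (the internals of `autoencoder_labeling_model_baggin` may differ; `bag_predictions` is a hypothetical helper):

```python
import torch

def bag_predictions(models_outputs):
    """Stack per-model probability tensors; return their mean and std.

    models_outputs: list of tensors of shape (batch, n_classes).
    """
    stacked = torch.stack(models_outputs, dim=0)  # (n_models, batch, n_classes)
    return stacked.mean(dim=0), stacked.std(dim=0)

# Three hypothetical ensemble members voting on a 2-sample, 4-class batch
outs = [torch.softmax(torch.randn(2, 4), dim=-1) for _ in range(3)]
mean_p, std_p = bag_predictions(outs)
```

The mean gives the bagged class probabilities, while the standard deviation indicates where the ensemble members disagree.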
[8]:
import tqdm

results = []
pres = []
sresults = []
spres = []
latent = []
true_lbl = []
inp_img = []

for batch in tqdm.tqdm(test_loader):
    true_lbl.append(batch[1])
    with torch.no_grad():
        inp_img.append(batch[0].cpu())
        res, sres, ps, sps = bagged_model(batch[0], "cuda:0", True)
        results.append(res.cpu())
        pres.append(ps.cpu())
        sresults.append(sres.cpu())
        spres.append(sps.cpu())

results = torch.cat(results, dim=0)
pres = torch.cat(pres, dim=0)
sresults = torch.cat(sresults, dim=0)
spres = torch.cat(spres, dim=0)
true_lbl = torch.cat(true_lbl, dim=0)
inp_img = torch.cat(inp_img, dim=0)
100%|██████████| 5/5 [00:06<00:00, 1.38s/it]
Let's have a look at what we get.
[9]:
Macro_F1, Micro_F1 = train_scripts.segmentation_metrics(pres, true_lbl[:,0].type(torch.LongTensor))
print(f"Macro F1 on Test Data {Macro_F1: 6.5f}")
print(f"Micro F1 on Test Data {Micro_F1: 6.5f}")
Macro F1 on Test Data 0.92600
Micro F1 on Test Data 0.92454
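Macro F1 averages the per-class F1 scores, weighting every class equally, while micro F1 pools all decisions before computing the score, so frequent classes dominate. A quick self-contained illustration of the difference (a plain-Python sketch, not the implementation inside `train_scripts.segmentation_metrics`):

```python
def f1_per_class(y_true, y_pred, n_classes):
    """Per-class F1 = 2*TP / (2*TP + FP + FN), one score per class."""
    scores = []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

y_true = [0, 0, 0, 0, 1, 2]
y_pred = [0, 0, 0, 0, 2, 2]

per_class = f1_per_class(y_true, y_pred, 3)
macro_f1 = sum(per_class) / 3        # each class counted equally

correct = sum(t == p for t, p in zip(y_true, y_pred))
micro_f1 = correct / len(y_true)     # for single-label data, micro F1 == accuracy
```

Here class 1 is missed entirely, which drags the macro score well below the micro score; a gap between the two scores above would likewise point at poorly handled minority classes.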
As you can see, we get a decent F1 score out of these bagged classifiers. Let's have a look at the denoised images produced by the autoencoders.
[10]:
count = 0
print("-------- The first 10 images encountered ----------")
for img, simg, p, sp, tlbl, ori in zip(results, sresults, pres, spres, true_lbl, inp_img):
    if count < 10:
        fig = paic.plot_autoencoder_and_label_results_with_std(
            input_img=ori[0].numpy(),
            output_img=img[0].numpy(),
            std_img=simg[0].numpy(),
            p_classification=p.numpy(),
            std_p_classification=sp.numpy(),
            class_names=["Rect.", "Disc", "Tri.", "Donut"])
        plt.show()
    count += 1

print("-------- Incorrectly labeled images (10 max) ----------")
count = 0
for img, simg, p, sp, tlbl, ori in zip(results, sresults, pres, spres, true_lbl, inp_img):
    ilbl = np.argmax(p.numpy())
    if int(tlbl) != int(ilbl):
        fig = paic.plot_autoencoder_and_label_results_with_std(
            input_img=ori[0].numpy(),
            output_img=img[0].numpy(),
            std_img=simg[0].numpy(),
            p_classification=p.numpy(),
            std_p_classification=sp.numpy(),
            class_names=["Rect.", "Disc", "Tri.", "Donut"])
        plt.show()
        count += 1
        if count > 10:
            break
-------- The first 10 images encountered ----------
-------- Incorrectly labeled images (10 max) ----------
Indices and tables¶
License and legal stuff¶
This software has been developed with funds that originate from the US taxpayer and is free for academic use. Please have a look at the license agreement for more details. Commercial usage will require some extra steps. Please contact ipo@lbl.gov for more details.
Final thoughts¶
This documentation is far from complete, but we ship a number of notebooks as part of the codebase, which could provide a good entry point.
More to come!