Jun 24, 2024

Optimizing Neural Networks

Designing an effective neural network is a complex task that involves selecting the right combination of layers, number of neurons, dropout rates, and other hyperparameters. While guesswork is a common way to find suitable architectures, it more often than not yields suboptimal solutions. So in this little experiment, we employ simulated annealing to approximate an optimal architecture.

The goal here is to classify images from the CIFAR-10 dataset, which contains 60,000 32x32 color images in 10 classes. Instead of manually tuning the neural network architecture, we use simulated annealing to find the best combination of hyperparameters. Then we compare the performance of 15 randomly selected architectures with 15 architectures optimized using simulated annealing.
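For reference, loading the dataset takes only a few lines. This is a minimal sketch assuming a TensorFlow/Keras setup, which the experiment itself does not prescribe:

    # Sketch: loading CIFAR-10 with Keras (assumes TensorFlow is installed).
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

    # Scale pixel values to [0, 1]; labels are integers 0-9.
    x_train, x_test = x_train / 255.0, x_test / 255.0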

Our search space includes the following hyperparameters, which have already been reduced to a set of common values and ranges, making the exploration easier for this experiment (a sketch of this search space in code follows the list).


Number of convolutional layers: 1 to 3
Number of dense layers: 1 or 2
Kernel size: (3,3) or (5,5)
Dropout rate: 0.25 or 0.5
Number of units in dense layers: 128, 256, or 512 
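For concreteness, the search space can be written as a simple Python dictionary with a helper that samples one random configuration. The names below are illustrative, not taken from the original code:

    import random

    # Hypothetical representation of the search space described above.
    search_space = {
        "num_conv_layers": [1, 2, 3],
        "num_dense_layers": [1, 2],
        "kernel_size": [(3, 3), (5, 5)],
        "dropout_rate": [0.25, 0.5],
        "dense_units": [128, 256, 512],
    }

    def random_hyperparameters():
        # Sample one value per hyperparameter uniformly at random.
        return {name: random.choice(values) for name, values in search_space.items()}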

Implementation

The core of the experiment is the simulated annealing loop: a model is built and trained with a randomly selected set of hyperparameters, evaluated on held-out data, and its accuracy is recorded. The objective function in this process is this validation accuracy. During the annealing process, models are trained for just one epoch to quickly evaluate different configurations. The probability of accepting a new set of hyperparameters depends on both the change in accuracy and a temperature parameter that gradually decreases over time.
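A minimal sketch of the objective function, assuming the Keras setup and helpers from the earlier sketches; the exact layer widths and the 10% validation split are assumptions, not details from the original experiment:

    import tensorflow as tf  # reuses x_train, y_train from the loading sketch above

    def build_model(hp):
        # Stack of conv blocks followed by dense layers, driven by the sampled hyperparameters.
        model = tf.keras.Sequential()
        model.add(tf.keras.Input(shape=(32, 32, 3)))
        for i in range(hp["num_conv_layers"]):
            model.add(tf.keras.layers.Conv2D(32 * (2 ** i), hp["kernel_size"],
                                             activation="relu", padding="same"))
            model.add(tf.keras.layers.MaxPooling2D())
        model.add(tf.keras.layers.Flatten())
        for _ in range(hp["num_dense_layers"]):
            model.add(tf.keras.layers.Dense(hp["dense_units"], activation="relu"))
            model.add(tf.keras.layers.Dropout(hp["dropout_rate"]))
        model.add(tf.keras.layers.Dense(10, activation="softmax"))
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def objective(hp):
        # One quick epoch per candidate; the validation accuracy is the score to maximize.
        model = build_model(hp)
        history = model.fit(x_train, y_train, epochs=1, batch_size=64,
                            validation_split=0.1, verbose=0)
        return history.history["val_accuracy"][-1]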

As the temperature decreases, the algorithm becomes less likely to accept worse-performing configurations. This process continues until the temperature falls below a predefined threshold of 1. The best-performing set of hyperparameters discovered during this process is then used to build the final model.
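The annealing loop itself might look like the sketch below. The stopping threshold of 1 comes from the description above; the initial temperature, geometric cooling rate, and Metropolis-style acceptance rule are assumptions:

    import math
    import random

    def simulated_annealing(initial_temp=100.0, cooling_rate=0.9):
        current_hp = random_hyperparameters()
        current_score = objective(current_hp)
        best_hp, best_score = current_hp, current_score
        temp = initial_temp

        while temp > 1:  # stop once the temperature falls below the threshold of 1
            candidate_hp = random_hyperparameters()
            candidate_score = objective(candidate_hp)
            delta = candidate_score - current_score

            # Always accept improvements; accept worse candidates with a probability
            # that shrinks as the temperature drops and as the loss in accuracy grows.
            # (In practice, delta may need rescaling so acceptance is well calibrated
            # for accuracy differences, which live in [-1, 1].)
            if delta > 0 or random.random() < math.exp(delta / temp):
                current_hp, current_score = candidate_hp, candidate_score

            if current_score > best_score:
                best_hp, best_score = current_hp, current_score

            temp *= cooling_rate  # geometric cooling schedule

        return best_hp, best_score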

The final model, configured with the best hyperparameters identified through simulated annealing, is then trained more extensively for 25 epochs. To provide a comparative baseline, another model is built with a randomly sampled set of hyperparameters from the search space and is also trained for 25 epochs.
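A sketch of this final comparison step, reusing the hypothetical helpers above; the batch size and validation split are again assumptions:

    # Train the annealing-optimized configuration and a random baseline for 25 epochs each.
    best_hp, _ = simulated_annealing()
    final_model = build_model(best_hp)
    final_model.fit(x_train, y_train, epochs=25, batch_size=64,
                    validation_split=0.1, verbose=2)

    baseline_model = build_model(random_hyperparameters())
    baseline_model.fit(x_train, y_train, epochs=25, batch_size=64,
                       validation_split=0.1, verbose=2)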

Results

After comparing the performance of 15 randomly chosen CNN architectures with 15 CNN architectures optimized using simulated annealing, the results show about a 10% higher validation accuracy for the fully trained models optimized with simulated annealing.

However, it's important to bear in mind the significant computational cost of a metaheuristic like simulated annealing. The tradeoffs between computational expense and model performance should be considered on a case-by-case basis. Despite these tradeoffs, it's evident that systematically approaching architecture construction has a clear advantage over random guesswork.

This is just a basic experiment in neural network design, but stay tuned for further iterations and new experiments!