Building a Deep Learning model that takes two images as input, a content image and a style image, and outputs an image that looks like the content image rendered in the style of the style reference image.
- The project commenced with an exploration of deep learning and artificial neural networks.
- Created a simple neural network from scratch using NumPy for classifying digits in the MNIST dataset.
- Enhanced our models by learning optimization techniques and transitioned to PyTorch for efficiency.
- Learnt how CNNs work and studied various CNN architectures.
- Implemented our own custom CNN architecture with a high level of accuracy for digit classification on the MNIST dataset.
- Finally, the project culminated in implementing Neural Style Transfer using the PyTorch framework.
The Algorithm
Neural style transfer is an optimization technique that takes two images, a content image and a style reference image (such as an artwork by a famous painter), and blends them together so the output image looks like the content image, but “painted” in the style of the style reference image.
Procedure
The VGG19 network is used for Neural Style Transfer. VGG19 is a convolutional neural network, 19 layers deep, trained on more than a million images from the ImageNet database.
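
As a rough sketch of how the intermediate VGG19 activations can be extracted in PyTorch (the layer indices below follow the common conv1_1–conv5_1 for style and conv4_2 for content choice from the Gatys paper; they are an assumption, not something fixed by this write-up):

```python
import torch
import torchvision.models as models

# Load VGG19 pretrained on ImageNet; only the convolutional
# feature extractor is needed, and its weights stay frozen.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Indices of the layers whose activations are collected (assumed choice:
# conv1_1, conv2_1, conv3_1, conv4_1, conv5_1 for style; conv4_2 for content).
STYLE_LAYERS = {0, 5, 10, 19, 28}
CONTENT_LAYER = 21

def get_features(x):
    """Run x through VGG19, returning the style and content activations."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat
```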

The net loss for style transfer is defined as:

Lₜₒₜₐₗ = α·L𝒸ₒₙₜₑₙₜ + β·Lₛₜᵧₗₑ

Lₜₒₜₐₗ is the total loss, L𝒸ₒₙₜₑₙₜ is the content loss summed over the chosen intermediate layers, and Lₛₜᵧₗₑ is the style loss summed over the chosen intermediate layers. Here, α and β are the weighting coefficients of the content and the style loss, respectively.
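
As a minimal sketch of how this weighted loss drives the optimization, assuming content_img has already been loaded as a normalized tensor, and content_loss / style_loss are the helpers sketched below (the particular α, β values and the choice of Adam are assumptions; the original paper optimizes with L-BFGS):

```python
import torch

# Weighting coefficients; these values are an assumed starting point,
# with the style term given a much larger weight as is common.
alpha, beta = 1.0, 1e6

# Start the generated image G as random noise and optimize its pixels
# directly; the VGG19 weights never change. Starting from a copy of
# the content image is a common alternative that converges faster.
G = torch.randn_like(content_img).requires_grad_(True)
optimizer = torch.optim.Adam([G], lr=0.01)

for step in range(500):
    optimizer.zero_grad()
    total_loss = alpha * content_loss(G) + beta * style_loss(G)
    total_loss.backward()
    optimizer.step()
```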
The content loss measures how similar the randomly generated noisy image (G) is to the content image (C). To calculate it, both images are passed through VGG19, their activations at the chosen intermediate layer are extracted, and the content loss is taken as half the sum of squared differences between the two feature maps:

L𝒸ₒₙₜₑₙₜ = ½ Σᵢⱼ (aᵢⱼ(C) − aᵢⱼ(G))²

where aᵢⱼ(·) denotes the activation at position (i, j) of the chosen layer.
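
In code this is just a mean squared error between the two activation maps. A minimal sketch, assuming the get_features helper from above and a preloaded content_img tensor:

```python
import torch.nn.functional as F

def content_loss(G):
    """MSE between the content-layer activations of G and C."""
    _, gen_feat = get_features(G)
    _, target_feat = get_features(content_img)  # could be precomputed once
    # F.mse_loss averages rather than taking the paper's half-sum;
    # the constant factor is absorbed into the weight alpha.
    return F.mse_loss(gen_feat, target_feat)
```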

The style loss is a function of the style reference image (S) and the generated image (G). It measures how similar the style of the generated image is to the style of the style image; in practice this is done by comparing the Gram matrices (channel-wise correlations) of their feature maps at the chosen style layers.
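
A minimal sketch of the Gram-matrix style loss, under the same assumptions as above (the get_features helper and a preloaded style_img tensor):

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Channel-by-channel correlation matrix of a feature map."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    # Normalize so layers with different spatial sizes contribute
    # on a comparable scale.
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(G):
    """Sum of MSEs between Gram matrices of G and the style image."""
    gen_feats, _ = get_features(G)
    target_feats, _ = get_features(style_img)  # could be precomputed once
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(gen_feats, target_feats))
```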

