The Illustrated SimCLR Framework

Published March 04, 2020 in illustration

In recent years, numerous self-supervised learning methods have been proposed for learning image representations, each improving on the last. Yet their performance still trailed that of their supervised counterparts. This changed when Chen et al. proposed a new framework in their research paper “A Simple Framework for Contrastive Learning of Visual Representations”. The paper not only improves upon the previous state-of-the-art self-supervised learning methods but also matches the performance of supervised learning on ImageNet classification.

In this article, I will explain the key ideas of the framework proposed in the research paper using diagrams.

The Nostalgic Intuition

As a kid, I remember we had to solve such puzzles in our textbook.
Find a Pair Exercise
The way a child would solve it is by looking at the picture of the animal on the left side, recognizing that it is a cat, and then searching for a cat on the right side.
Child Matching Animal Pairs

Such exercises were designed to help a child learn to recognize an object and contrast it against other objects. Can we teach machines in a similar manner?

It turns out that we can, through a technique called Contrastive Learning. It attempts to teach machines to distinguish between similar and dissimilar things. Contrastive Learning Block

Problem Formulation for Machines

To model the above exercise for a machine instead of a child, we see that we need three things:

  1. Examples of similar and dissimilar images
    We would require example pairs of images that are similar and images that are different for training a model.
    Pair of similar and dissimilar images
    The supervised school of thought would require a human to manually create such pairs. To automate this, we could leverage self-supervised learning. But how do we formulate it? Manually Labeling pairs of Images
    Self-supervised Approach to Labeling Images

  2. Ability to know what an image represents
    We need some mechanism to get representations that allow the machine to understand an image. Converting Image to Representations

  3. Ability to quantify if two images are similar
    We need some mechanism to compute the similarity of two images. Computing Similarity between Images

The SimCLR Framework Approach

The paper proposes a framework “SimCLR” for modeling the above problem in a self-supervised manner. It blends the concept of Contrastive Learning with a few novel ideas to learn visual representations without human supervision.

Framework

The framework, as its full name suggests, is very simple. An image is taken and random transformations are applied to it to get a pair of two augmented images x_i and x_j. Each image in that pair is passed through an encoder to get representations. Then a non-linear fully connected layer is applied to get representations z_i and z_j. The task is to maximize the similarity between these two representations of the same image. General Architecture of the SimCLR Framework

Step by Step Example

Let’s explore the various components of the framework with an example. Suppose we have a training corpus of millions of unlabeled images. Corpus of millions of images

  1. Self-supervised Formulation [Data Augmentation]
First, we generate batches of size N from the raw images. Let’s take a batch of size N = 2 for simplicity. In the paper, the authors use batch sizes as large as 8192. A single batch of images

The paper defines a random transformation function T that takes an image and applies a combination of random transformations (crop + flip + color jitter + grayscale). Random Augmentation on Image

For each image in this batch, the random transformation function is applied to get a pair of 2 images. Thus, for a batch of size N = 2, we get 2*N = 2*2 = 4 total images.
Augmenting images in a batch for SimCLR
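The transformation function T can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper’s exact pipeline (which uses framework-level image transforms with carefully tuned hyperparameters); the crop ratio, jitter range, and grayscale probability here are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(image):
    """Apply a random (crop + flip + color jitter + grayscale) combination.

    `image` is an (H, W, 3) float array with values in [0, 1]. Illustrative
    stand-in for the paper's transformation T, not its exact pipeline.
    """
    h, w, _ = image.shape
    # Random crop to 80% of the original size.
    ch, cw = int(h * 0.8), int(w * 0.8)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = image[top:top + ch, left:left + cw]
    # Random horizontal flip.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Color jitter: randomly rescale each channel independently.
    out = np.clip(out * rng.uniform(0.6, 1.4, size=3), 0.0, 1.0)
    # Random grayscale: replace all channels by their mean.
    if rng.random() < 0.2:
        out = np.repeat(out.mean(axis=2, keepdims=True), 3, axis=2)
    return out

# Each image yields a positive pair: two independent augmentations.
image = rng.random((32, 32, 3))
x_i, x_j = random_transform(image), random_transform(image)
```

Because the two calls draw their randomness independently, x_i and x_j are different views of the same underlying image, which is exactly what makes them a positive pair.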
2. Getting Representations [Base Encoder]

Each augmented image in a pair is passed through an encoder to get image representations. The encoder used is generic and replaceable with other architectures. The two encoders shown below share weights, and we get vectors h_i and h_j. Encoder part of SimCLR

In the paper, the authors used the ResNet-50 architecture as the ConvNet encoder. The output is a 2048-dimensional vector h. ResNet-50 as encoder in SimCLR

3. Projection Head
    The representations h_i and h_j of the two augmented images are then passed through a series of non-linear Dense -> ReLU -> Dense layers to apply a non-linear transformation and project them into representations z_i and z_j. This is denoted by g(.) in the paper and called the projection head. Projection Head Component of SimCLR

4. Tuning Model [Bringing Similar Closer]
    Thus, for each augmented image in the batch, we get an embedding vector z. Projecting image to embedding vectors
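The projection head g(.) is just two dense layers with a ReLU in between, and can be sketched with plain NumPy matrix operations. The 2048-dimensional input matches the ResNet-50 output h; the 128-dimensional output is a common choice for z, and the random weights here are placeholders rather than learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def projection_head(h, w1, b1, w2, b2):
    """Dense -> ReLU -> Dense: the g(.) mapping h (2048-d) to z (here 128-d)."""
    hidden = np.maximum(h @ w1 + b1, 0.0)  # first Dense layer + ReLU
    return hidden @ w2 + b2                # final Dense layer, no activation

h = rng.standard_normal(2048)                     # encoder output h_i
w1 = rng.standard_normal((2048, 2048)) * 0.01     # placeholder weights
b1 = np.zeros(2048)
w2 = rng.standard_normal((2048, 128)) * 0.01
b2 = np.zeros(128)
z = projection_head(h, w1, b1, w2, b2)            # embedding z_i
```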

From these embeddings, we calculate the loss in the following steps:

a. Calculation of Cosine Similarity

Now, the similarity between two augmented versions of an image is calculated using cosine similarity. For two augmented images x_i and x_j, the cosine similarity is calculated on their projected representations z_i and z_j. Cosine similarity between image embeddings

s_{i,j} = \frac{z_{i}^{\top} z_{j}}{\tau \, \lVert z_{i} \rVert \, \lVert z_{j} \rVert}

where

  • \tau is an adjustable temperature parameter. It scales the similarities: dividing by a \tau smaller than 1 stretches the cosine similarity beyond its usual range of [-1, 1].
  • ||z_{i}|| is the norm of the vector.

The pairwise cosine similarity between each augmented image in a batch is calculated using the above formula. As shown in the figure, in an ideal case, the similarities between augmented images of cats will be high while the similarity between cat and elephant images will be lower. Pairwise cosine similarity between 4 images
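The similarity computation is a one-liner. This sketch takes raw (unnormalized) embeddings and folds the temperature τ into the denominator, matching the formula above:

```python
import numpy as np

def scaled_cosine_similarity(z_i, z_j, tau=0.1):
    """Temperature-scaled cosine similarity s_{i,j} between two embeddings."""
    return z_i @ z_j / (tau * np.linalg.norm(z_i) * np.linalg.norm(z_j))

z_a = np.array([1.0, 0.0])   # embedding of one cat view
z_b = np.array([1.0, 0.0])   # identical direction: cosine similarity 1
z_c = np.array([0.0, 1.0])   # orthogonal direction: cosine similarity 0

scaled_cosine_similarity(z_a, z_b)  # 1 / tau = 10.0
scaled_cosine_similarity(z_a, z_c)  # 0.0
```

Note how the aligned pair scores 1/τ while the orthogonal pair scores 0: the temperature amplifies the gap between similar and dissimilar embeddings before the softmax in the next step.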

b. Loss Calculation
SimCLR uses a contrastive loss called “NT-Xent” (Normalized Temperature-Scaled Cross-Entropy Loss). Let’s see intuitively how it works.

First, the augmented pairs in the batch are taken one by one. Example of a single batch in SimCLR Next, we apply the softmax function to get the probability of these two images being similar.
Softmax Calculation on Image Similarities This softmax calculation is equivalent to getting the probability of the second augmented cat image being the most similar to the first cat image in the pair. Here, all remaining images in the batch are treated as dissimilar images (negative pairs). Thus, we don’t need the specialized architecture, memory bank, or queue needed by previous approaches like InstDisc, MoCo, or PIRL. Interpretation of Softmax Function

Then, the loss for a pair is calculated by taking the negative log of the above calculation. This formulation is the Noise Contrastive Estimation (NCE) loss.

l(i, j) = -\log \frac{\exp(s_{i,j})}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(s_{i,k})}

Calculation of Loss from softmax

We calculate the loss for the same pair a second time, with the positions of the two images interchanged. Calculation of loss for exchanged pairs of images

Finally, we compute the loss over all the pairs in the batch of size N = 2 and take the average.

L = \frac{1}{2N} \sum_{k=1}^{N} \left[ l(2k-1, 2k) + l(2k, 2k-1) \right]

Total loss in SimCLR
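Putting the pieces together, the full NT-Xent batch loss can be sketched in NumPy. This sketch assumes the 2N embeddings are stacked so that consecutive rows (0, 1), (2, 3), … form positive pairs; a real implementation would use an autodiff framework (e.g. PyTorch) so gradients flow back into the encoder and projection head:

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """NT-Xent loss over 2N embeddings; rows (2k, 2k+1) are positive pairs.

    Pure-NumPy sketch of the batch loss, averaging l(i, j) over all 2N
    ordered pairs as in the formula above.
    """
    two_n = z.shape[0]
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    s = z @ z.T / tau                                  # pairwise s_{i,k}
    np.fill_diagonal(s, -np.inf)                       # exclude the k == i term
    pos = np.arange(two_n) ^ 1                         # partner index: 0<->1, 2<->3, ...
    # Row-wise log-softmax, then pick out each row's positive partner.
    log_softmax = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    losses = -log_softmax[np.arange(two_n), pos]       # l(i, j) for every i
    return losses.mean()                               # average over 2N terms
```

For example, with two perfectly separated pairs, z = [[1,0], [1,0], [0,1], [0,1]], each positive similarity is 1 and each negative similarity is 0, so the loss is small; pushing negatives toward the positives increases it.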

Based on this loss, the encoder and projection head improve over time, and the representations obtained place similar images closer together in the embedding space.

Downstream Tasks

Once the model is trained on the contrastive learning task, it can be used for transfer learning. For this, the representations from the encoder are used instead of those obtained from the projection head. These representations can be used for downstream tasks like ImageNet classification. Using SimCLR for downstream tasks

Objective Results

SimCLR outperformed previous self-supervised methods on ImageNet. The image below shows the top-1 accuracy of linear classifiers trained on representations learned with different self-supervised methods on ImageNet. The gray cross is a supervised ResNet-50, and SimCLR is shown in bold. Performance of SimCLR on ImageNet

Source: SimCLR paper

  • On ImageNet ILSVRC-2012, it achieves 76.5% top-1 accuracy, a 7% improvement over the previous state-of-the-art self-supervised method (Contrastive Predictive Coding) and on par with a supervised ResNet-50.
  • When trained on only 1% of the labels, it achieves 85.8% top-5 accuracy, outperforming AlexNet while using 100x fewer labels.

Conclusion

Thus, SimCLR provides a strong framework for further research in this direction and for improving the state of self-supervised learning for computer vision.

Citation Info (BibTex)

If you found this blog post useful, please consider citing it as:

@misc{chaudhary2020simclr,
  title   = {The Illustrated SimCLR Framework},
  author  = {Amit Chaudhary},
  year    = 2020,
  note    = {\url{https://amitness.com/2020/03/illustrated-simclr}}
}
