## Staggered Attention Mechanism - Long Input Transformers

Transformers are at the heart of modern NLP. They continue to make great strides, producing state-of-the-art results in domains ranging from Computer Vision to Graph Neural Networks.

In this post, we will dive into the details of the Staggered Attention Mechanism introduced in the paper Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao, and Peter J. Liu, researchers at Google Brain.

## Huggingface Accelerate to train on multiple GPUs

PyTorch is a simple and stable framework for building deep learning solutions. Its simplicity offers freedom and control over every part of your code.

While using PyTorch is fun 😁, sometimes we have to do a lot of things manually, like moving tensors between devices and writing the boilerplate for distributed training. Accelerate takes over much of this for us, as the sketch below shows.
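
A minimal sketch of the Accelerate workflow (the tiny model and random data are placeholders): `prepare()` handles device placement and distributed wrapping, and `accelerator.backward()` replaces `loss.backward()`. Run it across GPUs with `accelerate launch train.py`.

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=8,
)

# prepare() moves everything to the right device(s) and wraps the model
# for distributed training when launched across multiple GPUs.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```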

## DeBERTa is the new King!

The state of NLP changed completely in 2018, when researchers from Google open-sourced BERT (Bidirectional Encoder Representations from Transformers).

## Huggingface 🤗 is all you need for NLP and beyond

Natural Language Processing is one of the fastest-growing fields in Deep Learning. NLP has completely changed since the inception of Transformers. Later on, variants of the Transformer architecture wherein only the encoder part was used (BERT) cracked the transfer-learning game in NLP. Now you can download a pre-trained model that has already been trained on huge amounts of data and carries knowledge of the language, and use it for your downstream tasks with a bit of fine-tuning.
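
That workflow takes only a few lines with the transformers library. A minimal sketch; the checkpoint and the two-label setup (e.g. binary sentiment) are illustrative:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Download a pre-trained checkpoint and attach a fresh classification head
# for a downstream task; the head is then fine-tuned on your data.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("Transformers make transfer learning easy!", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```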

Did you know it is super easy to deploy a Gradio app on Jarvislabs.ai? 🤔 In this post, we will explore how to deploy a Marvel character classifier app on Jarvislabs.ai.
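
To show how little code a Gradio app needs, here is a minimal sketch; `classify_image` and its fixed scores are hypothetical placeholders for the real trained model:

```python
import gradio as gr

# Placeholder prediction function standing in for the Marvel classifier.
def classify_image(image):
    return {"Iron Man": 0.9, "Thor": 0.1}

demo = gr.Interface(fn=classify_image, inputs=gr.Image(), outputs=gr.Label())
demo.launch(server_name="0.0.0.0")  # bind to all interfaces so the cloud instance can serve it
```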

## Understanding PyTorch Module

Have you ever wondered 🤔 how PyTorch's nn.Module works? I was always curious to understand how the internals work too. Recently I was reading chapter 19 of fast.ai's Deep Learning for Coders book, where we learn how to build minimal versions of PyTorch and fastai building blocks.
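
To give a flavour of what such a minimal version looks like, here is a simplified sketch (the real nn.Module also tracks buffers, submodules, and hooks) of the core trick: registering any Parameter assigned as an attribute so it can later be handed to an optimizer.

```python
import torch

class MiniModule:
    def __init__(self):
        self._params = {}

    def __setattr__(self, name, value):
        # Register any nn.Parameter assigned as an attribute.
        if isinstance(value, torch.nn.Parameter):
            self._params[name] = value
        super().__setattr__(name, value)

    def parameters(self):
        return list(self._params.values())

class MiniLinear(MiniModule):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(n_in, n_out))
        self.bias = torch.nn.Parameter(torch.zeros(n_out))

    def __call__(self, x):
        return x @ self.weight + self.bias

lin = MiniLinear(4, 2)
print(len(lin.parameters()))          # 2 -> weight and bias were auto-registered
print(lin(torch.randn(3, 4)).shape)   # torch.Size([3, 2])
```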

## Managing and Tracking ML experiments

One of the least taught skills in machine learning is how to manage and track machine learning experiments effectively. Once you get out of the shell of beginner-level projects and get into some serious projects or research, experiment tracking and management becomes one of the most crucial parts of your work.

## PyTorch Lazy modules

While designing DL modules like a classification head, we need to calculate the number of input features. PyTorch's lazy modules come to the rescue by automating this calculation.
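
For example, nn.LazyLinear infers its in_features from the first batch it sees, so we never have to compute the flattened size of a convolutional output by hand:

```python
import torch
import torch.nn as nn

# nn.LazyLinear materializes its weight matrix on the first forward pass,
# inferring in_features from the incoming tensor.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(10),  # in_features inferred automatically
)

x = torch.randn(1, 3, 32, 32)
out = model(x)        # first call fixes the lazy layer's shape
print(out.shape)      # torch.Size([1, 10])
```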

## Jarvislabs.ai — Updates and features

Time flies. It's been more than a year since we launched. Thanks to all our early adopters for trusting and supporting us. Without your love, feedback, and patience we would not have come this far.

## Longformer — The Long Document Transformer

Transformer-based models have become the go-to models for almost every NLP task since their inception, but when it comes to long documents they suffer from a limited input length. They are unable to process long sequences because their self-attention scales quadratically with the sequence length.
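
Longformer replaces full self-attention with a sparse pattern that scales linearly, which is why its public checkpoint accepts 4,096 tokens instead of the usual 512. A minimal sketch with the Hugging Face allenai/longformer-base-4096 checkpoint (the repeated toy text simply stands in for a long document):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = AutoModel.from_pretrained("allenai/longformer-base-4096")

# A stand-in long document; real inputs can be thousands of tokens.
long_text = "Long documents need long context. " * 300
inputs = tokenizer(long_text, truncation=True, max_length=4096, return_tensors="pt")

outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```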

## Training Large NLP Models Efficiently with DeepSpeed and Hugging Face

With the recent advancements in NLP, we are moving towards solving more and more sophisticated problems like Open-Domain Question Answering, empathy in dialogue systems, and multi-modal problems. With this, the parameters associated with the models have also been rising, reaching the scale of billions, and even trillions in the very largest models such as those in the Megatron family.
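
The Hugging Face Trainer enables DeepSpeed through a single argument. A minimal sketch, assuming a prepared ds_config.json (ZeRO stage, offloading, etc.) and an illustrative checkpoint and toy dataset; launch it with the deepspeed CLI (e.g. `deepspeed train.py`):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Tiny toy dataset standing in for real training data.
data = Dataset.from_dict({"text": ["a tiny example"], "label": [0]})
data = data.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length"))

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    deepspeed="ds_config.json",  # hands optimizer/parameter sharding to DeepSpeed
)
Trainer(model=model, args=args, train_dataset=data).train()
```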

## Introduction

The EfficientDet model series was introduced by the Google Brain team in 2020, and it outperforms almost every detection model of similar size on the majority of tasks. It utilizes several optimizations, and many tweaks were introduced in the architecture, including a Bi-directional Feature Pyramid Network (BiFPN) and a compound scaling method, which result in better fusion of features.
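
The compound scaling rules from the paper are simple enough to write down directly. The sketch below reproduces them; note that the paper rounds widths to hardware-friendly sizes, and D7/D7x deviate from the resolution formula:

```python
# EfficientDet compound scaling (from the paper): one coefficient phi
# grows the BiFPN width/depth, the head depth, and the input resolution.
def efficientdet_scaling(phi: int) -> dict:
    return {
        "bifpn_width": round(64 * (1.35 ** phi)),  # rounded in the paper
        "bifpn_depth": 3 + phi,
        "box_class_depth": 3 + phi // 3,
        "input_resolution": 512 + 128 * phi,       # D7/D7x use larger values
    }

for phi in range(7):  # EfficientDet-D0 ... D6
    print(f"D{phi}: {efficientdet_scaling(phi)}")
```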

## Introduction

In this article, we will walk through a baseline model for the Jigsaw Rate Severity of Toxic Comments competition on Kaggle. The goal of the competition is to rank comments by their relative toxicity.
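
One common way to train for relative ranking (an illustration of the task setup, not necessarily this post's baseline) is a pairwise objective such as PyTorch's MarginRankingLoss:

```python
import torch
import torch.nn as nn

# MarginRankingLoss penalizes pairs where the "more toxic" score does not
# exceed the "less toxic" score by at least the margin.
loss_fn = nn.MarginRankingLoss(margin=0.5)

scores_more = torch.tensor([0.8, 0.3])  # model score for the more toxic comment
scores_less = torch.tensor([0.2, 0.4])  # model score for the less toxic comment
target = torch.ones(2)                  # +1 means the first input should rank higher

print(loss_fn(scores_more, scores_less, target).item())
```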

## Introduction

In this competition, we will be predicting engagement with a shelter pet's profile based on its profile photograph. Along with the image of each pet, we are also provided metadata consisting of features like focus, eyes, etc. We aim to utilize both the images and the tabular data in the best possible way to minimize the error.
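
One straightforward way to combine the two modalities (a minimal sketch with illustrative layer sizes, assuming the competition's 12 binary metadata columns) is to concatenate CNN image features with the tabular features before the regression head:

```python
import torch
import torch.nn as nn

class ImageTabularModel(nn.Module):
    def __init__(self, num_tabular_features: int):
        super().__init__()
        self.backbone = nn.Sequential(  # tiny stand-in for a pretrained CNN
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + num_tabular_features, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted engagement score
        )

    def forward(self, image, tabular):
        feats = self.backbone(image)
        return self.head(torch.cat([feats, tabular], dim=1))

model = ImageTabularModel(num_tabular_features=12)
out = model(torch.randn(4, 3, 224, 224), torch.randn(4, 12))
print(out.shape)  # torch.Size([4, 1])
```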

## Introduction

In this competition, we will be predicting answers to questions in Hindi and Tamil, where the answers are drawn directly from a limited context given for each sample. Focusing on Multilingual Natural Language Understanding (NLU), the competition is diverse and unique compared to others currently held on Kaggle, which makes it difficult and exciting to work with. Hence, the task is to build a robust model that extracts answers to questions about the Hindi/Tamil Wikipedia articles provided.
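
Since answers are spans of the given context, this is extractive question answering. A minimal sketch with a multilingual checkpoint (deepset/xlm-roberta-base-squad2 is an illustrative choice, not the competition baseline), using an English toy example for readability:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

result = qa(
    question="Where is the Taj Mahal located?",
    context="The Taj Mahal is a marble mausoleum located in Agra, India.",
)
print(result["answer"])  # a span copied from the context, e.g. "Agra, India"
```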

## Introduction

Most of the modern architectures proposed in Computer Vision use the ResNet architecture to benchmark their results. These novel architectures are trained with improved training strategies, data augmentation, and optimizers.

## Train using multiple GPUs in fastai

Fastai makes training deep learning models on multiple GPUs a lot easier. In this blog, let's look at different approaches to train a model using multiple GPUs.
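
As a first taste, fastai's distrib_ctx context manager shards batches across all visible GPUs. A minimal sketch on the Pets dataset, launched with `python -m fastai.launch train.py`:

```python
from fastai.vision.all import *
from fastai.distributed import *

path = untar_data(URLs.PETS)

def is_cat(fname):
    return fname[0].isupper()  # in the Pets dataset, cat files are capitalized

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path / "images"), valid_pct=0.2,
    label_func=is_cat, item_tfms=Resize(224),
)
learn = vision_learner(dls, resnet34, metrics=accuracy)

with learn.distrib_ctx():  # distributes training across all visible GPUs
    learn.fit_one_cycle(3)
```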

## Why is it important to understand ResNet?

ResNets are the backbone behind most modern computer vision architectures. For a lot of common problems in computer vision, the go-to architecture is ResNet-34. Most modern CNN architectures, like ResNeXt and DenseNet, are variants of the original ResNet architecture. In subfields of computer vision like object detection and image segmentation, ResNet plays an important role as a pre-trained backbone.
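
Using ResNet-34 as a backbone takes a couple of lines in torchvision: load the pretrained network and drop the pooling and classification layers to keep the feature maps.

```python
import torch
import torchvision.models as models

# Load a ResNet-34 pretrained on ImageNet and strip the classifier
# to use it as a feature-extraction backbone.
resnet = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool + fc

x = torch.randn(1, 3, 224, 224)
features = backbone(x)
print(features.shape)  # torch.Size([1, 512, 7, 7])
```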

## Identifying people without Masks - Using Deep Learning/AI

In the last several weeks, I have seen a lot of posts showcasing demos and products that use AI algorithms to recognize people who are not wearing masks. In this post, I will take you through a very simple approach to show how you can build one yourself, and end by asking a few questions that can help in building a product fit for production. We will be using PyTorch for this task, but the steps would remain almost the same in another framework like TensorFlow, Keras, or MXNet.

To build any AI algorithm, the most common approach is to:

1. Prepare a labeled dataset.
2. Choose an architecture that suits your needs, preferably one with pre-trained weights for your use case.
3. Train the model and test it (a minimal sketch of steps 2 and 3 follows the list).
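
Here is that sketch: fine-tuning a pretrained ResNet for two classes (mask / no mask) with the backbone frozen. Dataset loading is assumed to happen elsewhere; the hyperparameters and dummy batch are illustrative.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# A pretrained ResNet-34 with a new two-class head (mask / no mask).
# The backbone is frozen so only the head is trained.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # trainable classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for a real DataLoader over labeled face crops.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```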

## Understanding Autoencoders and Variational Autoencoders

In the last few years, computer vision algorithms have become able to do many things. One amazing, and also dangerous, thing they can do is generate new images, faces, voices, and more. Whether this evolution in what these algorithms can do is a good thing is a debate for another day.
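
Before the variational flavour, the plain autoencoder: an encoder compresses the input to a small latent vector and a decoder reconstructs the input from it. A minimal sketch for 28x28 images (layer sizes are illustrative); a VAE additionally makes the latent probabilistic, predicting a mean and variance and adding a KL term to the loss.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(   # compress image -> latent vector
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(   # reconstruct image from latent vector
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z).view(-1, 1, 28, 28)

x = torch.rand(8, 1, 28, 28)
print(Autoencoder()(x).shape)  # torch.Size([8, 1, 28, 28])
```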

## PyTorch transfer learning

Transfer learning has become a key component of modern deep learning, in both CV and NLP. In this post, we will look at how to apply transfer learning to a Computer Vision classification problem. Along the way, I will show you how to tweak your neural network to achieve better results. We are using PyTorch, but the techniques we learn can be applied in other frameworks too.
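
One such tweak is discriminative learning rates: updating the pretrained backbone more gently than the freshly initialized head. A minimal sketch (the checkpoint and the 10-class head are illustrative):

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for 10 target classes

# Two parameter groups: gentle updates for pretrained weights,
# larger steps for the fresh classification head.
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc")]
optimizer = torch.optim.AdamW([
    {"params": backbone_params, "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```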