Weekly Research Newsletter
7th February, 2021
We are excited to share this week’s picks for the research newsletter, and we hope you enjoy reading them over the weekend. We are also refreshing the newsletter’s look a bit using Revue; we hope you like it.

Learning Transferable Visual Models From Natural Language Supervision
By Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh et al.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on.
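The authors have released code and pretrained weights for CLIP. As a rough illustration of the zero-shot transfer described above, here is a minimal sketch using the open-source `openai/CLIP` Python package; the image path and candidate labels are placeholders, and the prompt template is one simple choice among many:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image and labels; swap in your own.
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
labels = ["a dog", "a cat", "a car"]
text = clip.tokenize([f"a photo of {label}" for label in labels]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each label prompt,
    # scaled and softmaxed into per-label probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```

Because classes are specified in natural language at inference time, changing the task is just a matter of editing the label list, with no retraining involved.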
A Statistician Teaches Deep Learning
By G. Jogesh Babu, David Banks, Hyunsoon Cho, David Han, Hailin Sang, Shouyi Wang
Deep learning (DL) has gained much attention and become increasingly popular in modern data science. Computer scientists led the way in developing deep learning techniques, so the ideas and perspectives can seem alien to statisticians. Nonetheless, it is important that statisticians become involved – many of our students need this expertise for their careers. In this paper, developed as part of a program on DL held at the Statistical and Applied Mathematical Sciences Institute, we address this culture gap and provide tips on how to teach deep learning to statistics graduate students. After some background, we list ways in which DL and statistical perspectives differ, provide a recommended syllabus that evolved from teaching two iterations of a DL graduate course, offer examples of suggested homework assignments, give an annotated list of teaching resources, and discuss DL in the context of two research areas.
Priyansi, Junaid Rahim and Biswaroop Bhattacharjee

An opt-in weekly newsletter for undergraduate research enthusiasts at KIIT. We intend to share interesting research articles and start conversations about the latest ideas in artificial intelligence, computer science and mathematics.

Every Friday, all subscribers will receive a selection of research articles straight to their inbox. The papers will usually be a mix of that week’s popular research articles, review articles and seminal papers from the fields mentioned above.
