Weekly Research Newsletter - KIIT

By Priyansi, Junaid Rahim and Biswaroop Bhattacharjee

Weekly Research Newsletter - KIIT - Issue #6
21st March, 2021
We are excited to share this week’s picks for the research newsletter. We hope you’ll enjoy reading them over the weekend.

Requirement Engineering Challenges for AI-intense Systems Development
By Hans-Martin Heyn, Eric Knauss, Amna Pir Muhammad, Olof Eriksson, Jennifer Linder, Padmini Subbiah, Shameer Kumar Pradhan, Sagar Tungal
Availability of powerful computation and communication technology as well as advances in artificial intelligence enable a new generation of complex, AI-intense systems and applications. Such systems and applications promise exciting improvements on a societal level, yet they also bring with them new challenges for their development. In this paper we argue that significant challenges relate to defining and ensuring behaviour and quality attributes of such systems and applications. We specifically derive four challenge areas from relevant use cases of complex, AI-intense systems and applications related to industry, transportation, and home automation: understanding, determining, and specifying (i) contextual definitions and requirements, (ii) data attributes and requirements, (iii) performance definition and monitoring, and (iv) the impact of human factors on system acceptance and success. Solving these challenges will imply process support that integrates new requirements engineering methods into development approaches for complex, AI-intense systems and applications. We present these challenges in detail and propose a research roadmap.
Perceiver: General Perception with Iterative Attention
Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture performs competitively or beyond strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 on ImageNet without convolutions and by directly attending to 50,000 pixels. It also surpasses state-of-the-art results for all modalities in AudioSet.
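The core mechanism described above, a small latent array that repeatedly cross-attends to a much larger input array, can be sketched in a few lines of NumPy. This is an illustrative toy under assumed shapes and random weights, not the paper's implementation; note how the attention cost grows linearly with the number of inputs rather than quadratically:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, inputs, Wq, Wk, Wv):
    """One asymmetric attention step: N latents query M inputs, with N << M."""
    q = latents @ Wq                            # (N, d) queries from the latent array
    k = inputs @ Wk                             # (M, d) keys from the raw inputs
    v = inputs @ Wv                             # (M, d) values from the raw inputs
    scores = q @ k.T / np.sqrt(q.shape[-1])    # (N, M): linear in M, not M x M
    return softmax(scores) @ v                  # (N, d): inputs distilled into latents

rng = np.random.default_rng(0)
M, C = 50_000, 3     # e.g. 50k pixels with 3 channels, attended to directly
N, d = 64, 32        # tight latent bottleneck, N << M
inputs = rng.normal(size=(M, C))
latents = rng.normal(size=(N, d))
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(C, d))
Wv = rng.normal(size=(C, d))

# Iteratively distill the large input array into the small latent array
for _ in range(3):
    latents = cross_attention(latents, inputs, Wq, Wk, Wv)

print(latents.shape)
```

The latent array stays at a fixed, small size (64 x 32 here) no matter how many inputs are attended to, which is what lets the architecture scale to hundreds of thousands of inputs without domain-specific assumptions like a 2D grid.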
Priyansi, Junaid Rahim and Biswaroop Bhattacharjee

An opt-in weekly newsletter for undergraduate research enthusiasts at KIIT. We intend to share interesting research articles and start conversations about the latest ideas in artificial intelligence, computer science and mathematics.

Every Friday, all subscribers will receive some research articles straight to their inbox. The papers will usually be a mix of that week's popular research articles, review articles and some seminal papers in the various fields mentioned above.

In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue