Weekly Research Newsletter - KIIT

28th February 2021
We are excited to share this week’s picks for the research newsletter. We hope you’ll enjoy reading them over the weekend.

TransGAN: Two Transformers Can Make One Strong GAN
By Yifan Jiang, Shiyu Chang, Zhangyang Wang
The recent explosive interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks such as classification, detection, and segmentation. But how much further can transformers go: are they ready to tackle some of the more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs)? Driven by that curiosity, we conduct the first pilot study in building a GAN completely free of convolutions, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed TransGAN, consists of a memory-friendly transformer-based generator that progressively increases feature resolution while decreasing embedding dimension, and a patch-level discriminator that is also transformer-based. We then demonstrate that TransGAN notably benefits from data augmentations (more than standard GANs do), a multi-task co-training strategy for the generator, and a locally initialized self-attention that emphasizes the neighborhood smoothness of natural images. Equipped with those findings, TransGAN can effectively scale up with bigger models and high-resolution image datasets. Specifically, our best architecture achieves highly competitive performance compared with current state-of-the-art GANs based on convolutional backbones: TransGAN sets a new state-of-the-art IS score of 10.10 and FID score of 25.32 on STL-10. It also reaches a competitive IS score of 8.64 and FID score of 11.89 on CIFAR-10, and an FID score of 12.23 on CelebA 64×64. We also conclude with a discussion of the current limitations and future potential of TransGAN. The code is available here.
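The generator described above grows the image stage by stage: a token grid doubles its spatial resolution while its embedding dimension shrinks. A minimal NumPy sketch of that idea, using a pixel-shuffle-style reshape, is below; the stage count, initial 8×8 resolution, and 1024-dimensional embedding are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def pixel_shuffle_tokens(tokens, h, w):
    """Upsample an (h*w, c) token grid to (2h*2w, c//4) by folding
    channel groups into a 2x2 spatial block (pixel shuffle)."""
    c = tokens.shape[1]
    assert c % 4 == 0
    grid = tokens.reshape(h, w, 2, 2, c // 4)   # split channels into a 2x2 block
    grid = grid.transpose(0, 2, 1, 3, 4)        # interleave -> (h, 2, w, 2, c//4)
    return grid.reshape(2 * h * 2 * w, c // 4)

h = w = 8
x = np.random.randn(h * w, 1024)                # hypothetical initial token grid
for _ in range(3):                              # 8x8 -> 16x16 -> 32x32 -> 64x64
    x = pixel_shuffle_tokens(x, h, w)
    h, w = 2 * h, 2 * w
print(x.shape)                                  # (4096, 16): a 64x64 grid, dim 16
```

In the actual model, transformer blocks process the tokens between upsampling stages; the sketch only shows how resolution can increase while embedding dimension decreases without adding parameters.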
A Deep-Learning Approach For Direct Whole-Heart Mesh Reconstruction
By Fanwei Kong, Nathan Wilson, Shawn C. Shadden
Automated construction of surface geometries of cardiac structures from volumetric medical images is important for a number of clinical applications. While deep-learning-based approaches have demonstrated promising reconstruction precision, these approaches have mostly focused on voxel-wise segmentation followed by surface reconstruction and post-processing techniques. However, such approaches suffer from a number of limitations, including disconnected regions or incorrect surface topology due to erroneous segmentation, and staircase artifacts due to limited segmentation resolution. We propose a novel deep-learning-based approach that directly predicts whole-heart surface meshes from volumetric CT and MR image data. Our approach leverages a graph convolutional neural network to predict deformations of mesh vertices from a pre-defined mesh template to reconstruct multiple anatomical structures in a 3D image volume. Our method demonstrates promising performance in generating high-resolution and high-quality whole-heart reconstructions, and it outperformed prior deep-learning-based methods on both CT and MR data in terms of precision and surface quality. Furthermore, our method can more efficiently produce temporally consistent and feature-corresponding surface mesh predictions for heart motion from CT or MR cine sequences, and therefore can potentially be applied to efficiently constructing 4D whole-heart dynamics.
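The core idea, predicting per-vertex displacements of a template mesh with graph convolutions, can be sketched in a few lines of NumPy. Everything here is a toy stand-in: a 4-vertex "mesh", a random weight matrix in place of learned parameters, and a single mean-aggregation graph-convolution step rather than the paper's full network:

```python
import numpy as np

def graph_conv(features, adj, weight):
    """One graph-convolution step: average each vertex's neighbourhood
    (self-loops included in adj), then apply a linear projection."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ features) / deg @ weight

# toy template "mesh": 4 vertices of a square, connected in a cycle,
# adjacency with self-loops so each vertex attends to itself
template = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
adj = np.array([[1, 1, 0, 1],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 0, 1, 1]], dtype=float)

rng = np.random.default_rng(0)
weight = rng.normal(scale=0.01, size=(3, 3))   # stand-in for learned weights

# the network predicts a displacement per vertex; adding it to the
# template coordinates deforms the template toward the target surface
displacement = graph_conv(template, adj, weight)
deformed = template + displacement
print(deformed.shape)  # (4, 3): same connectivity, new vertex positions
```

Because the output is a deformation of a fixed template, the mesh topology (vertex count and connectivity) is preserved by construction, which is what makes the temporally consistent, feature-corresponding predictions mentioned in the abstract possible.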
Did you enjoy this issue?
Priyansi, Junaid Rahim and Biswaroop Bhattacharjee

An opt-in weekly newsletter for undergraduate research enthusiasts at KIIT. We intend to share interesting research articles and start conversations about the latest ideas in artificial intelligence, computer science and mathematics.

Every Friday, all subscribers will receive some research articles straight to their inbox. The papers will usually be a mix of that week's popular research articles, review articles and some seminal papers in the various fields mentioned above.

In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.