Publications
|
|
Data Instance Prior for Transfer Learning in GANs
Puneet Mangla*, Nupur Kumari*, Mayank Singh*, Vineeth N Balasubramanian, Balaji Krishnamurthy
We propose a novel transfer learning method for GANs in the limited-data regime, leveraging an informative data prior derived from networks pre-trained (self-supervised or supervised) on a diverse source domain. We demonstrate that the proposed method effectively transfers knowledge to domains with few target images (a hedged sketch of the general idea follows below).
arXiv preprint
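Illustrative sketch only, not the paper's exact formulation: a frozen encoder pre-trained on a diverse source domain supplies a feature-space prior for the generator, expressed here as a simple feature-matching term. The encoder choice (ResNet-18) and the MSE objective are placeholder assumptions for illustration.

    # Hypothetical PyTorch sketch: a frozen pre-trained encoder as a data prior.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    encoder = models.resnet18(pretrained=True).eval()   # frozen source-domain prior
    for p in encoder.parameters():
        p.requires_grad_(False)

    def prior_loss(fake_images, real_images):
        """Match frozen-encoder features of generated and real target images."""
        return F.mse_loss(encoder(fake_images), encoder(real_images).detach())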
|
|
LT-GAN: Self-Supervised GAN with Latent Transformation Detection
Parth Patel*, Nupur Kumari*, Mayank Singh*, Balaji Krishnamurthy
We propose LT-GAN, a self-supervised approach to GAN training that improves generation quality and image diversity by estimating GAN-induced transformations, i.e., transformations induced in the generated images by perturbing the latent space of the generator (see the sketch below).
Work accepted at WACV, 2021.
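A minimal sketch of the self-supervised objective, assuming a PyTorch generator G and an auxiliary estimator E that takes an image pair and regresses the latent perturbation; the Gaussian perturbation and MSE loss are illustrative choices, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def lt_gan_aux_loss(G, E, batch_size, z_dim, sigma=0.1, device="cpu"):
        z = torch.randn(batch_size, z_dim, device=device)
        eps = sigma * torch.randn_like(z)      # perturbation of the latent code
        x, x_t = G(z), G(z + eps)              # image and its GAN-induced transform
        eps_hat = E(x, x_t)                    # estimate the induced transformation
        return F.mse_loss(eps_hat, eps)        # self-supervised regression loss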
|
|
Attributional Robustness Training using Input-Gradient Spatial Alignment
Nupur Kumari*, Mayank Singh*, Puneet Mangla, Abhishek Sinha, Vineeth N Balasubramanian, Balaji Krishnamurthy
We propose ART, a robust attribution training methodology that maximizes the alignment between an input and its attribution map. ART achieves state-of-the-art performance in attributional robustness and weakly supervised object localization on the CUB dataset. It also induces immunity to adversarial and common perturbations on standard vision datasets (the alignment term is sketched below).
Work accepted at ECCV, 2020.
[Paper]
[Webpage]
[Code]
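A minimal sketch of the alignment term, assuming the attribution map is the input gradient of the true-class logit and alignment is measured by cosine similarity; the paper's full objective and training loop differ in details.

    import torch
    import torch.nn.functional as F

    def alignment_loss(model, x, y):
        x = x.clone().requires_grad_(True)
        logits = model(x)
        score = logits.gather(1, y.unsqueeze(1)).sum()
        grad, = torch.autograd.grad(score, x, create_graph=True)
        # Maximize cosine similarity between each image and its input-gradient map.
        cos = F.cosine_similarity(grad.flatten(1), x.flatten(1), dim=1)
        return -cos.mean()   # added to the classification loss during training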
|
|
Charting the Right Manifold: Manifold Mixup for Few-shot Learning
Puneet Mangla*, Nupur Kumari*, Abhishek Sinha*, Mayank Singh*, Vineeth N Balasubramanian, Balaji Krishnamurthy
We use self-supervision techniques (rotation prediction and exemplar training) followed by manifold mixup for few-shot classification tasks. The proposed approach beats the current state-of-the-art accuracy on the mini-ImageNet, CUB, and CIFAR-FS datasets by 3-8% (both ingredients are sketched below).
Work accepted at WACV, 2020.
[Paper]
[Code]
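Minimal sketches of the two ingredients, assuming a feature-extracting backbone, a 4-way rotation head, and a linear classifier (all illustrative names); exemplar training and the full training schedule are omitted, and the paper mixes at a randomly chosen hidden layer rather than a fixed one.

    import torch
    import torch.nn.functional as F

    def rotation_loss(backbone, rot_head, x):
        """4-way rotation prediction (0/90/180/270 degrees) as self-supervision."""
        xs, ys = [], []
        for k in range(4):
            xs.append(torch.rot90(x, k, dims=(2, 3)))
            ys.append(torch.full((x.size(0),), k, dtype=torch.long, device=x.device))
        xs, ys = torch.cat(xs), torch.cat(ys)
        return F.cross_entropy(rot_head(backbone(xs)), ys)

    def manifold_mixup_loss(features, classifier, y, alpha=2.0):
        """Mix hidden representations of shuffled pairs; interpolate the loss."""
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        idx = torch.randperm(features.size(0), device=features.device)
        logits = classifier(lam * features + (1 - lam) * features[idx])
        return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[idx])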
|
|
On the Benefits of Models with Perceptually-Aligned Gradients
Gunjan Aggarwal*, Abhishek Sinha*, Nupur Kumari*, Mayank Singh*
In this paper, we leverage models with perceptually-aligned gradients and show that adversarial training with a low max-perturbation bound can improve the performance of models on zero-shot transfer learning tasks and weakly supervised object localization (a minimal low-epsilon training step is sketched below).
Work accepted at ICLR workshop Towards Trustworthy ML, 2020.
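For reference, a minimal PGD attack step for adversarial training with a small perturbation bound, the kind of low max-perturbation robust training the paper builds on; the epsilon, step size, and step count below are illustrative, not the paper's settings.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=2/255, alpha=0.5/255, steps=7):
        """Return an adversarial batch; train on F.cross_entropy(model(x_adv), y)."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        return (x + delta).detach()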
|
|
ShapeVis: High-dimensional Data Visualization at Scale
Nupur Kumari*, Siddarth Ramesh*, Akash Rupela*, Piyush Gupta*, Balaji Krishnamurthy
In this paper, we propose ShapeVis, a scalable visualization technique for point-cloud data inspired by topological data analysis. Our method captures the underlying geometric and topological structure of the data in a compressed graphical representation (the Mapper-style pipeline it builds on is sketched below).
Work accepted at WWW, 2020.
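For context, a bare-bones Mapper-style pipeline (1-D filter, overlapping cover, per-interval clustering, nerve graph) of the kind ShapeVis scales up; ShapeVis's landmark-based compression is not shown, and the DBSCAN clustering and uniform cover below are illustrative choices.

    import numpy as np
    import networkx as nx
    from sklearn.cluster import DBSCAN

    def mapper_graph(X, filt, n_intervals=10, overlap=0.3):
        """Build a Mapper graph of point cloud X under a 1-D filter function."""
        f = filt(X)                                   # filter (lens) values
        lo, width = f.min(), (f.max() - f.min()) / n_intervals
        G, members = nx.Graph(), {}
        for i in range(n_intervals):                  # overlapping intervals
            a = lo + (i - overlap) * width
            b = lo + (i + 1 + overlap) * width
            idx = np.where((f >= a) & (f <= b))[0]
            if len(idx) == 0:
                continue
            labels = DBSCAN(eps=0.5).fit_predict(X[idx])
            for c in set(labels) - {-1}:              # one node per cluster
                members[(i, c)] = set(idx[labels == c])
                G.add_node((i, c))
        nodes = list(members)
        for u in nodes:                               # edge = shared points
            for v in nodes:
                if u < v and members[u] & members[v]:
                    G.add_edge(u, v)
        return G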
|
|
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
Nupur Kumari*, Mayank Singh*, Abhishek Sinha*, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N Balasubramanian
We analyze robust models for vulnerability to adversarial perturbations at the latent layers and propose a fine-tuning algorithm that adversarially trains the latent layers of the network (sketched below). The algorithm achieves state-of-the-art adversarial accuracy against strong adversarial attacks.
Work accepted at IJCAI, 2019.
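A minimal sketch of adversarially training a latent layer, assuming the network is split into a head (up to the chosen layer) and a tail; the bound, step size, and schedule are illustrative, and the paper's fine-tuning algorithm differs in details.

    import torch
    import torch.nn.functional as F

    def latent_adv_loss(head, tail, x, y, eps=0.1, alpha=0.02, steps=5):
        """Perturb the latent activation adversarially, then train the tail on it."""
        h = head(x).detach()                          # clean latent activation
        delta = torch.zeros_like(h, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(tail(h + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        return F.cross_entropy(tail(h + delta), y)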
|
|
Multimapper: Data Density Sensitive Topological Visualization
Bishal Deb, Ankita Sarkar, Nupur Kumari, Akash Rupela, Piyush Gupta, Balaji Krishnamurthy
We propose an improvement over Mapper (a topological data visualization algorithm) that reduces the obscuration of topological information by selecting the cover in a data-density-sensitive manner (one such cover is sketched below).
Work accepted at ICDM Workshop 2018.
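Illustrative only: one simple density-sensitive cover, placing interval boundaries at quantiles of the filter values so dense regions receive narrower intervals; Multimapper's actual cover-selection procedure differs.

    import numpy as np

    def quantile_cover(f, n_intervals=10, overlap=0.3):
        """Overlapping intervals over the filter range with density-sensitive widths."""
        edges = np.quantile(f, np.linspace(0, 1, n_intervals + 1))
        intervals = []
        for a, b in zip(edges[:-1], edges[1:]):
            pad = overlap * (b - a)                   # widen each interval to overlap
            intervals.append((a - pad, b + pad))
        return intervals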
|
|
Understanding Adversarial Space through the lens of Attribution
Nupur Kumari*, Mayank Singh*, Abhishek Sinha*, Balaji Krishnamurthy
We use the attribution maps of images to train an auxiliary model that detects adversarial examples of the original image classification network (sketched below).
Work accepted at Nemesis ECML Workshop 2018.
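A minimal sketch of the detection idea, with gradient-times-input standing in for the paper's attribution method and a hypothetical binary detector network scoring attribution maps as clean vs. adversarial.

    import torch

    def attribution_map(model, x):
        """Gradient-times-input attribution of the predicted-class score."""
        x = x.clone().requires_grad_(True)
        score = model(x).max(dim=1).values.sum()
        grad, = torch.autograd.grad(score, x)
        return (grad * x).detach()

    def detect(model, detector, x, threshold=0.5):
        """Flag inputs whose attribution maps the detector scores as adversarial."""
        scores = torch.sigmoid(detector(attribution_map(model, x)))
        return scores.squeeze(1) > threshold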
|
* denotes equal contribution