Huge thanks to my amazing collaborators Imant Daunhawer and Amartya Sanyal. This is a project two years in the making and very close to each of our hearts. Generalisation is such an important problem in ML and there is much more to do – watch this space! (8/8)
The proof is very much in the pudding: there are many more details in the paper, including our selection of metrics and methods, as well as insights into why unsupervised learning performs well under extreme distribution shift. Check it out! (7/n)
23.1.2023 14:22The proof is very much in the pudding: many more details to check out in the paper, including selection of metrics, methods, as well as...One more thing: Compared to supervised learning, unsupervised learning methods consistently suffers from less performance drop going from in-distribution to out-of-distribution evaluation! 🤯🤯 More generalisation wins for not using labels. (6/n)
23.1.2023 14:22One more thing: Compared to supervised learning, unsupervised learning methods consistently suffers from less performance drop going from...We verify this by subsampling realistic DG datasets to create extreme distribution shift between train and test splits, and find unsupervised learning to perform much better on these extreme versions of realistic DG datasets! (5/n)
So why the discrepancy in performance on synthetic vs. realistic datasets? We observe that the former feature more extreme distribution shifts, while the latter feature subtle, nuanced ones.
Are unsupervised learning methods better at handling extreme distribution shift?
(4/n)
On realistic domain generalisation datasets, supervised learning gains more momentum: on Camelyon17 it is second to SSL, but on FMoW it is the best-performing method.
Still, considering that SSL requires no labels during training, it's amazing how competitive it is! (3/n)
We find that unsupervised learning methods significantly outperform supervised ones on synthetic domain generalisation datasets 🤯 (e.g. CdSprites, MNIST-CIFAR)
The curse of simplicity bias observed in supervised learning is successfully avoided by SSL/Autoencoders! (2/n)
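For background: MNIST-CIFAR-style data pairs a "simple" feature (the digit) with a "complex" one (the CIFAR object), correlated at train time and decorrelated at test time, so a network that latches onto the digit fails under shift. A rough sketch of the collage construction (shapes and padding are assumptions, not the paper's exact code):

    import numpy as np

    def make_collage(mnist_img, cifar_img):
        """Stack a (28, 28) MNIST digit on top of a (32, 32, 3) CIFAR image."""
        digit = np.pad(mnist_img, ((2, 2), (2, 2)))            # 28x28 -> 32x32
        digit_rgb = np.repeat(digit[..., None], 3, axis=-1)    # grayscale -> RGB
        return np.concatenate([digit_rgb, cifar_img], axis=0)  # (64, 32, 3)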
How robust are unsupervised representation learning methods (e.g. SSL) to distribution shift compared to supervised learning?
𝐒𝐡𝐨𝐫𝐭 𝐚𝐧𝐬𝐰𝐞𝐫: Quite!
𝐋𝐨𝐧𝐠 𝐚𝐧𝐬𝐰𝐞𝐫: Our #ICLR2023 paper http://arxiv.org/pdf/2206.08871.pdf
Joint work with Imant Daunhawer & Amartya Sanyal @amartya
RT @mrlworkshop@twitter.com
Is having multiple modalities a blessing or a curse? What is a good representation? Let's find out together!
We are proud to announce the 1st hybrid workshop on Multimodal Representation Learning at ICLR2023 🚀
More info: https://mrl-workshop.github.io/iclr-2023/
Organisers: Miguel Vasco, Adrian Javaloy, Imant Daunhawer, Petra Poklukar, Isabel Valera, Danica Kragic, Yuge Shi
#introduction Hi old and new friends :) I am a -1 month PhD student at the University of Oxford, and a -1 week intern at Google Brain. I work on unsupervised representation learning for images and I like bananas