Selected work

A complete list of my publications is available on Google Scholar.

Plex: Towards Reliability Using Pretrained Large Model Extensions

Do one-layer changes that improve model robustness still help when we have 1 billion parameters?

Yes, and one-layer changes can be stacked to provide benefits across large vision and language models.

Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, Balaji Lakshminarayanan.

Bay Area Machine Learning Symposium, 2022 (Spotlight Talk)

ICML Pre-training Workshop, 2022 (Contributed Talk, 5.9% of accepted papers)

ICML Principles of Distribution Shift Workshop, 2022

paper | code | bibtex | Google AI blog

Reliability benchmarks for image segmentation

Do one-layer changes that improve model robustness still help in tasks beyond classification?

Yes. For tasks such as image segmentation, one-layer changes can be stacked to provide benefits even when multiple distribution shifts occur at the same time.

E. Kelly Buchanan, Michael W. Dusenberry, Jie Ren, Kevin Patrick Murphy, Balaji Lakshminarayanan, Dustin Tran

NeurIPS 2022 Workshop on Distribution Shifts 

paper | code | bibtex | Google AI blog

Deep Ensembles Work, But Are They Necessary?

Are the gains from deep ensembles unique to deep ensembles?

No. Ensembles are usually compared against their component models; in this work, we instead compare ensembles against single models with similar in-distribution (InD) performance.

We show that ensembling gains are not unique to ensembles.

Taiga Abe*, E. Kelly Buchanan*, Geoff Pleiss, Rich Zemel, John Cunningham

NeurIPS 2022 | code | bibtex

Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders

Some behavioral features, such as chewing or grimacing, cannot be easily tracked. In this work, we disentangle such behaviors in video data by incorporating the features that can be easily tracked.

Matthew R Whiteway, Dan Biderman, Yoni Friedman, Mario Dipoppa, E. Kelly Buchanan, Anqi Wu, John Zhou, Jean-Paul R Noel, John Cunningham, Liam Paninski

PLOS Computational Biology 2021 | code | bibtex

Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking

Can we track poses using less data?

Yes, by exploiting the spatial and temporal structure inherent in videos.

Anqi Wu*, E. Kelly Buchanan*, Matthew Whiteway, Michael Schartner, Guido Meijer, Jean-Paul Noel, Erica Rodriguez, Claire Everett, Amy Norovich, Evan Schaffer, Neeli Mishra, C. Daniel Salzman, Dora Angelaki, Andrés Bendesky, The International Brain Laboratory, John Cunningham, Liam Paninski

NeurIPS 2020 | code | bibtex

Penalized matrix decomposition for denoising, compression, and improved demixing of functional imaging data

This paper shows that we can efficiently separate activity from different neurons through structure-aware, rank-one penalized matrix decompositions.

E. Kelly Buchanan*, Ian Kinsella*, Ding Zhou*, Rong Zhu, Pengcheng Zhou, Felipe Gerhard, John Ferrante, Ying Ma, Sharon Kim, Mohammed Shaik, Yajie Liang, Rongwen Lu, Jacob Reimer, Paul Fahey, Taliah Muhammad, Graham Dempsey, Elizabeth Hillman, Na Ji, Andreas Tolias, Liam Paninski

arXiv 2019 | code | bibtex | press

Quantifying the behavioral dynamics of C. elegans with autoregressive hidden Markov models

In this paper, we extract interpretable behavioral syllables of C. elegans using a class of switching linear dynamical systems.

E. Kelly Buchanan, Akiva Lipshitz, Scott Linderman, Liam Paninski

NeurIPS 2017 WNIP Workshop (Spotlight Presentation) | code | bibtex


Some ideas that didn't work can be found here.