Danica J. Sutherland she/her
dsuth[a t]cs.ubc.ca;
CV;
orcid;
github;
crossvalidated;
twitter
I'm an Assistant Professor in UBC Computer Science.
If you're interested in talking or collaborating, get in touch!
Prospective students, though, should simply apply to the department and indicate their interest in working with me in the application.
I previously published under a different name; not all older papers have been corrected yet, though they should be. If you're citing my papers (and please do), please use the name "Danica J. Sutherland." If you get pushback from editors or the like, "D.J. Sutherland" is also acceptable.
My current research interests include:
- Learning and testing on sets and distributions: two-sample tests, evaluating and training implicit generative models (e.g. GANs), density estimation, distribution regression.
- Learning “deep kernels”, and representation learning more broadly.
- Statistical learning theory in general.
I was previously a Research Assistant Professor at TTIC, and before that a postdoc with Arthur Gretton at the Gatsby Computational Neuroscience Unit, University College London. I did my Ph.D. at Carnegie Mellon University with Jeff Schneider.
Publications and selected talks are listed below.
Publications
Below, ** denotes equal contribution.
Also available as a .bib file, and most of these are on Google Scholar (but see above).
Coauthor filters:
- Michael Arbel (3)
- Mikołaj Bińkowski (2)
- Seth Flaxman (3)
- Roman Garnett (2)
- Arthur Gretton (6)
- Ho Chung Leon Law (2)
- Yifei Ma (3)
- Michelle Ntampaka (3)
- Junier B. Oliva (4)
- Barnabás Póczos (9)
- Jeff Schneider (11)
- Dino Sejdinovic (2)
- Nathan Srebro (2)
- Heiko Strathmann (3)
- Hy Trac (3)
- Liang Xiong (2)
Preprints
Does Invariant Risk Minimization Capture Invariance?
Pritish Kamath,
Akilesh Tangella,
Danica J. Sutherland, and
Nathan Srebro.
Preprint
2021.
Journal and Low-Acceptance-Rate Conference Papers
On the Error of Random Fourier Features.
Danica J. Sutherland and Jeff Schneider.
Uncertainty in Artificial Intelligence
(UAI)
2015.
Chapter 3 / Section 4.1 of my thesis supersedes this paper, fixing a few errors in constants and providing more results.
Active learning and search on low-rank matrices.
Danica J. Sutherland,
Barnabás Póczos, and
Jeff Schneider.
Knowledge Discovery and Data Mining
(KDD)
2013.
Selected for oral presentation.
Dissertations
Integrating Human Knowledge into a Relational Learning System.
Danica J. Sutherland.
Computer Science Department, Swarthmore College. B.A. thesis,
2011.
Technical Reports, Posters, etc.
Unbiased estimators for the variance of MMD estimators.
Danica J. Sutherland.
Technical report
2019.
The Role of Machine Learning in the Next Decade of Cosmology.
Michelle Ntampaka,
Camille Avestruz,
Steven Boada,
João Caldeira,
Jessi Cisewski-Kehe,
Rosanne Di Stefano,
Cora Dvorkin,
August E. Evrard,
Arya Farahi,
Doug Finkbeiner,
Shy Genel,
Alyssa Goodman,
Andy Goulding,
Shirley Ho,
Arthur Kosowsky,
Paul La Plante,
François Lanusse,
Michelle Lochner,
Rachel Mandelbaum,
Daisuke Nagai,
Jeffrey A. Newman,
Brian Nord,
J. E. G. Peek,
Austin Peel,
Barnabás Póczos,
Markus Michael Rau,
Aneta Siemiginowska,
Danica J. Sutherland,
Hy Trac, and
Benjamin Wandelt.
White paper
2019.
Fixing an error in Caponnetto and de Vito (2007).
Danica J. Sutherland.
Technical report
2017.
Understanding the 2016 US Presidential Election using ecological inference and distribution regression with census microdata.
Seth Flaxman,
Danica J. Sutherland,
Yu-Xiang Wang, and
Yee Whye Teh.
Technical report
2016.
List Mode Regression for Low Count Detection.
Jay Jin,
Kyle Miller,
Danica J. Sutherland,
Simon Labov,
Karl Nelson, and
Artur Dubrawski.
IEEE Nuclear Science Symposium
(IEEE NSS/MIC)
2016.
Grounding Conceptual Knowledge with Spatio-Temporal Multi-Dimensional Relational Framework Trees.
Matthew Bodenhamer,
Thomas Palmer,
Danica J. Sutherland, and
Andrew H. Fagg.
Technical report
2012.
Invited talks
Slides for conference and workshop talks associated with a specific paper are linked next to that paper above.
Introduction to Generative Adversarial Networks.
June 2019.
Machine Learning Crash Course
(MLCC).