

Timezone: America/New_York

Registration Desk: Registration Check-in Desk Sat 23 Jul 07:00 a.m.  

The Registration Check-in Desk is open for badge pickup and closes at 6 p.m.


Workshop: AI for Agent-Based Modelling (AI4ABM) Sat 23 Jul 08:30 a.m.  

Christian Schroeder · Yang Zhang · Anisoara Calinescu · Dylan Radovic · Prateek Gupta · Jakob Foerster

Many of the world's most pressing issues, such as climate change, pandemics, financial market stability, and fake news, are emergent phenomena that result from the interactions of large numbers of strategic or learning agents. Understanding these systems is thus a crucial frontier for scientific and technological development, with the potential to permanently improve the safety and living standards of humanity. Agent-Based Modelling (ABM), also known as individual-based modelling, is an approach to simulating such complex systems by explicitly modelling the actions and interactions of the individual agents they contain. However, current methodologies for calibrating and validating ABMs rely on human expert domain knowledge and hand-coded behaviours for individual agents and environment dynamics. Recent progress in AI has the potential to offer exciting new approaches to learning, calibrating, validating, analysing, and accelerating ABMs. This interdisciplinary workshop aims to bring together practitioners and theorists to boost ABM method development in AI and to stimulate novel applications across disciplinary boundaries, making ICML the ideal venue.

Our inaugural workshop will be organised along two axes. First, we seek to provide a venue where ABM researchers from a variety of domains can introduce AI researchers to their respective domain problems. To this end, we are inviting a number of high-profile speakers across various application domains. Second, we seek to stimulate research into AI methods that can scale to large agent-based models, with the potential to redefine our capabilities for creating, calibrating, and validating such models. These methods include, but are not limited to, simulation-based inference, multi-agent learning, causal inference and discovery, program synthesis, and the development of domain-specific languages and tools that allow for tight integration of ABMs and AI approaches.
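To make the core idea concrete, here is a minimal, hypothetical agent-based model in Python: an SIR-style epidemic on a line of agents, where the population-level epidemic curve emerges purely from local interaction rules. All parameters and rules are illustrative assumptions, not taken from any workshop material.

```python
# A minimal, hypothetical agent-based model: an SIR-style epidemic on a line
# of agents, where infection spreads only through local neighbour contacts.
# All parameters are illustrative; real ABMs model far richer behaviours.
import random

random.seed(0)
N_AGENTS, N_STEPS, P_TRANSMIT, P_RECOVER = 100, 50, 0.3, 0.1
agents = ["I" if i == 0 else "S" for i in range(N_AGENTS)]  # one initial case

for _ in range(N_STEPS):
    nxt = agents[:]
    for i, state in enumerate(agents):
        if state == "S":
            neighbours = [agents[j] for j in (i - 1, i + 1) if 0 <= j < N_AGENTS]
            if "I" in neighbours and random.random() < P_TRANSMIT:
                nxt[i] = "I"  # susceptible agents catch it from infected neighbours
        elif state == "I" and random.random() < P_RECOVER:
            nxt[i] = "R"  # infected agents recover and become immune
    agents = nxt

# The population-level epidemic curve emerges from these local rules alone.
print({s: agents.count(s) for s in "SIR"})
```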


Workshop: Hardware-aware efficient training (HAET) Sat 23 Jul 08:45 a.m.  

Gonçalo Mordido · Yoshua Bengio · Ghouthi BOUKLI HACENE · Vincent Gripon · François Leduc-Primeau · Vahid Partovi Nia · Julie Grollier

To reach top-tier performance, deep learning models usually require a large number of parameters and operations, consuming considerable power and memory. Several methods have been proposed to tackle this problem, leveraging parameter quantization, pruning, parameter clustering, decomposition of convolutions, or distillation. However, most of these works focus on improving efficiency at inference time, disregarding the cost of training. In practice, most of the energy footprint of deep learning results from training. Hence, this workshop focuses on reducing the training complexity of deep neural networks. Our aim is to encourage submissions specifically concerning the reduction in energy, time, or memory usage at training time. Topics of interest include but are not limited to: (i) compression methods for memory and complexity reduction during training, (ii) energy-efficient hardware architectures, (iii) energy-efficient training algorithms, (iv) novel energy models or energy-efficiency training benchmarks, (v) practical applications of low-energy training.
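As one illustration of compression applied during training (topic (i) above), here is a minimal sketch of magnitude pruning interleaved with SGD updates. The model, data, pruning fraction, and schedule are all illustrative assumptions, not a method endorsed by the workshop.

```python
# A minimal sketch of compression during training (assumed setup, toy data):
# interleave SGD steps with magnitude pruning of the weight matrices.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(256, 64), torch.randint(0, 10, (256,))

for step in range(100):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    if step % 10 == 9:  # every 10 steps, sparsify the network
        with torch.no_grad():
            for p in model.parameters():
                if p.dim() > 1:  # prune weight matrices, leave biases intact
                    k = int(0.2 * p.numel())  # zero the smallest 20% by magnitude
                    thresh = p.abs().flatten().kthvalue(k).values
                    p.mul_((p.abs() > thresh).float())
```

In this toy version, pruned weights may regrow between pruning rounds; schemes that keep masks fixed save memory and compute for the remainder of training.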


Workshop: Complex feedback in online learning Sat 23 Jul 08:45 a.m.  

Rémy Degenne · Pierre Gaillard · Wouter Koolen · Aadirupa Saha

While online learning has become one of the most successful and studied approaches in machine learning, in particular with reinforcement learning, online learning algorithms still interact with their environments in a very simple way. The complexity and diversity of the feedback coming from the environment in real applications are often reduced to the observation of a scalar reward. More and more researchers now seek to fully exploit the available feedback to allow faster and more human-like learning. This workshop aims to present a broad overview of the feedback types being actively researched, highlight recent advances, and provide a networking forum for researchers and practitioners.


The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward Sat 23 Jul 08:50 a.m.  

Huaxiu Yao · Hugo Larochelle · Percy Liang · Colin Raffel · Jian Tang · Ying WEI · Saining Xie · Eric Xing · Chelsea Finn

The past five years have seen rapid progress in large-scale pre-trained models across a variety of domains, such as computer vision, natural language processing, robotics, and bioinformatics. Leveraging a huge number of parameters, large-scale pre-trained models are capable of encoding rich knowledge from labeled and/or unlabeled examples. Supervised and self-supervised pre-training have been the two most representative paradigms, through which pre-trained models have demonstrated large benefits on a wide spectrum of downstream tasks. There are also other pre-training paradigms, e.g., meta-learning for few-shot learning, where pre-trained models are trained to quickly adapt to new tasks. However, many challenges and new opportunities remain. In this workshop, we propose two foci: (1) Which pre-training methods transfer across different applications/domains, which ones don't, and why? (2) In what settings should we expect pre-training to be effective, compared to learning from scratch?


Workshop on Human-Machine Collaboration and Teaming Sat 23 Jul 08:55 a.m.  

Umang Bhatt · Katie Collins · Maria De-Arteaga · Bradley Love · Adrian Weller

Machine learning (ML) approaches can support decision-making in key societal settings including healthcare and criminal justice, empower creative discovery in mathematics and the arts, and guide educational interventions. However, deploying such human-machine teams in practice raises critical questions: how can a learning algorithm know when to defer to a human teammate, and, more broadly, when and which tasks should be dynamically allocated to a human versus a machine based on their complementary strengths, while avoiding dangerous automation bias? Effective synergistic teaming necessitates a prudent eye towards explainability, and offers exciting potential for personalisation in interaction with human teammates while accounting for real-world distribution shifts. In light of these opportunities, our workshop offers a forum to focus and inspire core algorithmic developments from the ICML community towards efficacious human-machine teaming, and an open environment to advance critical discussions around the issues raised by human-AI collaboration in practice.
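As a toy illustration of the deferral question above, one simple baseline (a sketch, not a method proposed by the workshop) routes an input to the human whenever the model's confidence falls below a threshold; the rule, threshold, and data here are hypothetical:

```python
# A toy sketch of confidence-based deferral (hypothetical rule and data):
# the model answers only when its top-class probability clears a threshold,
# otherwise the input is routed to a human teammate.
import numpy as np

def predict_or_defer(probs: np.ndarray, threshold: float = 0.75):
    """probs: (n_samples, n_classes) predicted class probabilities."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    # None marks inputs deferred to the human due to low model confidence
    return [int(p) if c >= threshold else None for p, c in zip(preds, conf)]

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.20, 0.80]])
print(predict_or_defer(probs))  # -> [0, None, 1]
```

Learning-to-defer methods studied in the literature go further, training the deferral rule jointly with the classifier against a model of the human's accuracy.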


Workshop: Updatable Machine Learning Sat 23 Jul 08:55 a.m.  

Ayush Sekhari · Gautam Kamath · Jayadev Acharya

In modern ML domains, state-of-the-art performance is attained by highly overparameterized models that are expensive to train, costing weeks of time and millions of dollars. At the same time, after deploying the model, the learner may realize issues such as leakage of private data or vulnerability to adversarial examples. The learner may also wish to impose additional constraints post-deployment, for example, to ensure fairness for different subgroups. Retraining the model from scratch to incorporate additional desiderata would be expensive. As a consequence, one would instead prefer to update the model, which can yield significant savings of resources such as time, computation, and memory over retraining from scratch. Some instances of this principle in action include the emerging field of machine unlearning, and the celebrated paradigm of fine-tuning pretrained models. The goal of our workshop is to provide a platform to stimulate discussion about both the state-of-the-art in updatable ML and future challenges in the field.
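A familiar instance of updating rather than retraining is fine-tuning: freeze a pretrained backbone and train only a small head. The sketch below is illustrative (the architecture, data, and optimiser are assumptions), meant only to show why updates are far cheaper than training from scratch:

```python
# A minimal sketch (assumed architecture and data): freeze a "pretrained"
# backbone and update only a small head, instead of retraining everything.
import torch
import torch.nn as nn

torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # stands in for a pretrained model
head = nn.Linear(64, 2)                                 # the only part we update

for p in backbone.parameters():
    p.requires_grad_(False)  # frozen: no gradient computation or storage

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(128, 32), torch.randint(0, 2, (128,))
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(backbone(x)), y)
    loss.backward()
    opt.step()
print(loss.item())  # training loss after the cheap update
```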


Workshop: AI for Science Sat 23 Jul 09:00 a.m.  

Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Hanchen Wang · Connor Coley · Le Song · Linfeng Zhang · Marinka Zitnik

Machine learning (ML) has revolutionized a wide array of scientific disciplines, including chemistry, biology, physics, materials science, neuroscience, earth science, cosmology, electronics, and mechanical science. It has cracked scientific challenges that long resisted solution, e.g., predicting 3D protein structures, imaging black holes, and automating drug discovery. Despite this promise, several critical gaps stifle algorithmic and scientific innovation in AI for Science: (1) under-explored theoretical analysis, (2) unrealistic methodological assumptions or directions, (3) overlooked scientific questions, (4) limited exploration at the intersections of multiple disciplines, (5) the science of science, (6) responsible use and development of AI for science. Very little work has been done to bridge these gaps, mainly because of the missing link between distinct scientific communities. While many workshops focus on AI for specific scientific disciplines, they are all concerned with methodological advances within a single discipline (e.g., biology) and are thus unable to examine the crucial questions mentioned above. This workshop will fulfill this unmet need and facilitate community building; with hundreds of ML researchers beginning projects in this area, the workshop will bring them together to consolidate the fast-growing area of AI for Science into a recognized field.


Workshop: Principles of Distribution Shift (PODS) Sat 23 Jul 09:00 a.m.  

Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski

The importance of robust predictions continues to grow as machine learning models are increasingly relied upon in high-stakes settings. Ensuring reliability in real-world applications remains an enormous challenge, particularly because data in the wild frequently differs substantially from the data on which models were trained. This phenomenon, broadly known as “distribution shift”, has become a major recent focus of the research community.

With the growing interest in addressing this problem has come growing awareness of the multitude of possible meanings of “distribution shift” and the importance of understanding the distinctions between them: which types of shift occur in the real world, and under which of these is generalization feasible? Negative results seem just as common as positive ones; where provable generalization is possible, it often depends on strong structural assumptions whose likelihood of holding in reality is questionable. Existing approaches often lack rigor and clarity with regard to the precise problem they are trying to solve. Some work has been done to precisely define distribution shift and to produce benchmarks that properly reflect real-world distribution shift, but overall there seems to be little communication between the communities tackling foundations and applications respectively. Recent strides have been made to move beyond tinkering, bringing much-needed rigor to the field, and we hope to encourage this effort by opening a dialogue to share ideas between these communities.


Workshop: Responsible Decision Making in Dynamic Environments Sat 23 Jul 09:00 a.m.  

Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier

Algorithmic decision-making systems are increasingly used in sensitive applications such as advertising, resume reviewing, employment, credit lending, policing, criminal justice, and beyond. The long-term promise of these approaches is to automate, augment, and/or eventually improve on human decisions, which can be biased or unfair, by leveraging the potential of machine learning to make decisions supported by historical data. Unfortunately, there is a growing body of evidence showing that current machine learning technology is vulnerable to privacy or security attacks, lacks interpretability, or reproduces (and even exacerbates) historical biases or discriminatory behaviors against certain social groups.

Most of the literature on building socially responsible algorithmic decision-making systems focuses on a static scenario where algorithmic decisions do not change the data distribution. However, real-world applications involve nonstationarities and feedback loops that must be taken into account to measure and mitigate unfairness in the long term. These feedback loops involve the learning process, which may be biased because of insufficient exploration, or changes in the environment's dynamics due to strategic responses of the various stakeholders. From a machine learning perspective, these sequential processes are primarily studied through counterfactual analysis and reinforcement learning.

The purpose of this workshop is to bring together researchers from both industry and academia working on the full spectrum of responsible decision-making in dynamic environments, from theory to practice. In particular, we encourage submissions on the following topics: fairness, privacy and security, robustness, conservative and safe algorithms, explainability and interpretability.


Workshop: Continuous Time Perspectives in Machine Learning Sat 23 Jul 09:00 a.m.  

Mihaela Rosca · Chongli Qin · Julien Mairal · Marc Deisenroth

In machine learning, discrete-time approaches such as gradient descent algorithms and discrete building blocks for neural architectures have traditionally dominated. Recently, we have seen that by bridging these discrete systems with their continuous counterparts we can not only develop new insights but also construct novel and competitive ML approaches. By leveraging time, we can tap into centuries of research on dynamical systems, numerical integration, and differential equations, and continue enhancing what is possible in ML.

The workshop aims to disseminate knowledge about the use of continuous-time methods in ML; to create a discussion forum and a vibrant community around the topic; to provide a preview of what dynamical-systems methods might further bring to ML; to identify the biggest hurdles in using continuous-time systems in ML and steps to alleviate them; and to showcase how continuous-time methods can enable ML to have large impact in certain application domains, such as climate prediction and the physical sciences.

Recent work has shown that continuous-time approaches can be useful in ML, but their applicability can be extended by increasing the visibility of these methods, fostering collaboration, and taking an interdisciplinary approach to ensure their long-lasting impact. We thus encourage submissions on a varied set of topics: the intersection of machine learning and continuous-time methods; the incorporation of knowledge of continuous systems to analyse and improve on discrete approaches; the exploration of approaches from dynamical systems and related fields to machine learning; and software tools from the numerical analysis community.

We have a diverse set of confirmed speakers and panellists with expertise in architectures, optimisation, RL, generative models, numerical analysis, gradient flows, and climate. We hope this will foster an interdisciplinary and collaborative environment conducive to the development of new research ideas.
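To make the discrete/continuous bridge concrete, note the standard observation (stated here only for illustration) that gradient descent is the forward-Euler discretization of the gradient-flow ODE:

```latex
% Gradient flow (continuous time):
\dot{\theta}(t) = -\nabla L(\theta(t))
% A forward-Euler step of size \eta recovers gradient descent:
\theta_{k+1} = \theta_k - \eta \, \nabla L(\theta_k)
```

Viewing the small-step limit as a continuous trajectory is what lets tools from numerical integration and dynamical systems transfer to the analysis of optimisers.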


The ICML Expressive Vocalizations (ExVo) Workshop and Competition 2022 Sat 23 Jul 09:00 a.m.  

Alice Baird · Panagiotis Tzirakis · Kory Mathewson · Gauthier Gidel · Eilif Muller · Bjoern Schuller · Erik Cambria · Dacher Keltner · Alan Cowen

The ICML Expressive Vocalizations (ExVo) Workshop and Competition 2022 introduces, for the first time in a competition setting, the machine learning problem of understanding and generating vocal bursts – a wide range of emotional non-linguistic utterances. Participants of ExVo are presented with three tasks that utilize a single dataset. The dataset and three tasks draw attention to new innovations in emotion science and capture 10 dimensions of emotion reliably perceived in distinct vocal bursts: Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness and Surprise. Of particular interest to the ICML community, these tasks highlight the need for advanced machine learning techniques for multi-task learning, audio generation, and personalized few-shot learning of nonverbal expressive style.

With studies of vocal emotional expression often relying on significantly smaller datasets, insufficient for applying the latest machine learning innovations, the ExVo competition and workshop provide an unprecedented platform for the development and discussion of novel strategies for understanding vocal bursts, and will enable unique forms of collaboration among leading researchers from diverse disciplines. Organized by leading researchers in emotion science and machine learning, the following three tasks are proposed: the Multi-task High-Dimensional Emotion, Age & Country Task (ExVo Multi-Task); the Generative Emotional Vocal Burst Task (ExVo Generate); and the Few-Shot Emotion Recognition Task (ExVo Few-Shot).
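For intuition on the multi-task setup, here is a minimal sketch of a shared encoder with per-task heads in the spirit of ExVo Multi-Task. The feature dimension, layer sizes, and number of countries are hypothetical assumptions, not the official baseline:

```python
# A minimal multi-task sketch (hypothetical sizes, not the ExVo baseline):
# one shared encoder over per-clip audio features, with separate heads for
# the 10 emotion intensities, age, and country.
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    def __init__(self, feat_dim=88, n_emotions=10, n_countries=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.emotion = nn.Linear(128, n_emotions)   # emotion intensities (regression)
        self.age = nn.Linear(128, 1)                # age (regression)
        self.country = nn.Linear(128, n_countries)  # country (classification logits)

    def forward(self, x):
        h = self.encoder(x)  # shared representation feeds every task head
        return self.emotion(h), self.age(h), self.country(h)

model = MultiTaskSketch()
x = torch.randn(8, 88)  # a batch of hypothetical acoustic feature vectors
emotion, age, country = model(x)
print(emotion.shape, age.shape, country.shape)  # (8, 10) (8, 1) (8, 4)
```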

Important dates (AoE)
- Challenge Opening (data available): April 1, 2022.
- Baselines and paper released: April 8, 2022.
- ExVo MultiTask submission deadline: May 12, 2022.
- ExVo Few-Shot (test-labels): May 13, 2022.
- Workshop paper submission: June 6, 2022 (extended from May 20, 2022).

For those interested in submitting research to the ExVo workshop outside of the competition, we encourage contributions covering the following topics:
- Detecting and Understanding Vocal Emotional Behavior
- Multi-Task Learning in Affective Computing
- Generating Nonverbal Vocalizations or Speech Prosody
- Personalized Machine Learning for Affective Computing
- Other topics related to Affective Verbal and Nonverbal Vocalization


Workshop: Disinformation Countermeasures and Machine Learning (DisCoML) Sat 23 Jul 09:00 a.m.  

George Cybenko · Ludmilla Huntsman · Steve Huntsman · Paul Vines

The Disinformation Countermeasures and Machine Learning (DisCoML) workshop at ICML 2022 in Baltimore will address machine learning techniques to counter disinformation. Today, disinformation is an important challenge that all governments and their citizens face, affecting politics, public health, financial markets, and elections. Specific examples such as lynchings catalyzed by disinformation spread over social media highlight that the threat it poses crosses social scales and boundaries. This threat even extends into the realm of military combat, as a recent NATO StratCom experiment highlighted. Machine learning plays a central role in the production and propagation of disinformation: bad actors scale disinformation operations by using ML-enabled bots, deepfakes, cloned websites, and forgeries. The situation is exacerbated by the proprietary algorithms of search engines and social media platforms, driven by advertising models, which can effectively isolate internet users from alternative information and viewpoints. In fact, social media's business model, with its behavioral tracking algorithms, is arguably optimized for launching a global pandemic of cognitive hacking. Machine learning is also essential for identifying and inhibiting the spread of disinformation at internet speed and scale, but DisCoML welcomes approaches that contribute to countering disinformation in a broad sense. While the "cybersecurity paradox" (increased technology spending has not equated to an improved security posture) also applies to disinformation and indicates the need to address human behavior, there is an arms-race quality to both problems. This suggests that technology, and ML in particular, will play a central role in countering disinformation well into the future. DisCoML will provide a forum for bringing leading researchers together and enabling stakeholders and policymakers to get up to date on the latest developments in the field.


2nd Workshop on Interpretable Machine Learning in Healthcare (IMLH) Sat 23 Jul 09:15 a.m.  

Ramin Zabih · S. Kevin Zhou · Weina Jin · Yuyin Zhou · Ipek Oguz · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang

Applying machine learning (ML) in healthcare is rapidly gaining momentum. However, the black-box characteristics of existing ML approaches inevitably lead to less interpretability and verifiability in clinical predictions. To enhance the interpretability of medical intelligence, it becomes critical to develop methodologies for explaining predictions as these systems are pervasively introduced into the healthcare domain, which requires a higher level of safety and security. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. In addition, it is essential to develop more interpretable and transparent ML systems. For instance, by exploiting structured knowledge or prior clinical information, one can design models that learn representations more aligned with clinical reasoning. This may also help mitigate biases in the learning process, or identify variables more relevant to medical decisions. In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, public health, computational biology, biomedical informatics, and clinical fields to facilitate discussion of related challenges, definitions, formalisms, and evaluation protocols for interpretable medical machine intelligence. The workshop appeals to ICML audiences, as interpretability is a major challenge for deploying ML in critical domains such as healthcare. By providing a platform that fosters potential collaborations and discussions between attendees, we hope the workshop offers a step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.


Queer in AI @ ICML 2022 Affinity Workshop Sat 23 Jul 09:15 a.m.  

Huan Zhang · Arjun Subramonian · Sharvani Jha · William Agnew · Krunoslav Lehman Pavasovic

Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences and in their work environments, the main reasons being a lack of queer community and role models. Over the past years, Queer in AI has worked towards these goals, yet we have observed that the voices of underrepresented queer communities, especially transgender and non-binary folks and queer BIPOC folks, have been neglected. The purpose of this workshop is to highlight issues that these communities face by featuring talks and panel discussions on the inclusiveness of non-Western non-binary identities and of Black, Indigenous, and Pacific Islander non-cis folks. The workshop will also address making virtual/hybrid conferences more inclusive of queer folks.


Workshop on Distribution-Free Uncertainty Quantification Sat 23 Jul 09:20 a.m.  

Anastasios Angelopoulos · Stephen Bates · Sharon Li · Ryan Tibshirani · Aaditya Ramdas

While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. A recent line of work we call distribution-free predictive inference (i.e., conformal prediction and related methods) has developed a set of methods that give finite-sample statistical guarantees for any (possibly incorrectly specified) predictive model and any (unknown) underlying distribution of the data, ensuring reliable uncertainty quantification (UQ) for many prediction tasks. This line of work represents a promising new approach to UQ with complex prediction systems but is relatively unknown in the applied machine learning community. Moreover, much remains to be done to integrate distribution-free methods with existing approaches to modern machine learning in computer vision, natural language, reinforcement learning, and so on; little work has been done to bridge these two worlds. To facilitate the emerging work on distribution-free methods, this workshop has two goals. First, to bring together researchers in distribution-free methods with researchers specializing in applications of machine learning, to catalyze work at this interface. Second, to bring together the existing community of distribution-free uncertainty quantification research, as no other workshop like this exists at a major conference. Given the recent emphasis on the reliable real-world performance of ML models, we believe a large fraction of ICML attendees will find this workshop highly relevant.
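For readers new to the area, here is a minimal sketch of split conformal prediction, the canonical distribution-free method: calibrate a quantile of nonconformity scores on held-out data, then form prediction intervals with finite-sample coverage for any black-box model. The model and data below are illustrative stand-ins:

```python
# A minimal sketch of split conformal prediction (toy model and data):
# calibrate a quantile of nonconformity scores on held-out data, then form
# prediction intervals that are valid for any black-box regressor.
import numpy as np  # requires numpy >= 1.22 for the `method` keyword

rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)       # held-out calibration labels
pred_cal = np.zeros_like(y_cal)    # black-box predictions on the calibration set
scores = np.abs(y_cal - pred_cal)  # nonconformity scores: absolute residuals

alpha = 0.1                        # target 90% coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

pred_test = 0.0                    # black-box prediction at a new point
interval = (pred_test - q, pred_test + q)
print(interval)                    # covers the true label w.p. >= 1 - alpha
```

The guarantee requires only exchangeability of the calibration and test points, not a correctly specified model, which is exactly the "distribution-free" property the abstract describes.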