Commit 36e7d39

feat(uai-2023): add keynote and oral session videos
- Introduce new video entries for UAI 2023, including:
  - Keynote talks by Victor Chernozhukov, Caroline Uhler, and Alexandra Chouldechova.
  - Oral sessions covering various topics in causal ML.
- Update existing video metadata for consistency.
1 parent 51e8563 commit 36e7d39

File tree

32 files changed: +1169 -1 lines changed

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
{
  "description": "Keynote talk 3: Caroline Uhler. Causal Representation Learning & Optimal Intervention Design. (session chair: Daniel Malinsky)",
  "duration": 4105,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    }
  ],
  "speakers": [
    "Caroline Uhler"
  ],
  "tags": [
    "Keynote"
  ],
  "thumbnail_url": "https://i.ytimg.com/vi/fKc509CrfUY/maxresdefault.jpg",
  "title": "Causal Representation Learning & Optimal Intervention Design",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=fKc509CrfUY"
    }
  ]
}
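
Every file added in this commit uses the same flat JSON layout. As a quick sanity check on that layout, here is a minimal Python loader sketch; the required-field list is inferred from the diffs in this commit, not from a published schema, and the function name is illustrative.

import json

# Fields every entry in this commit carries; inferred from the diffs in
# this commit, not from an official schema -- an illustrative check only.
REQUIRED_FIELDS = {
    "description", "duration", "language", "recorded",
    "related_urls", "speakers", "tags", "thumbnail_url", "title", "videos",
}

def load_video_entry(path):
    """Load one video JSON file and check it has the expected fields."""
    with open(path, encoding="utf-8") as f:
        entry = json.load(f)
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"{path}: missing fields {sorted(missing)}")
    # Every video in this commit is a dict like {"type": "youtube", "url": ...}.
    for video in entry["videos"]:
        assert video["type"] == "youtube" and video["url"].startswith("https://")
    return entry
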
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
{
  "description": "Keynote talk 2: Victor Chernozhukov. Long Story Short: Omitted Variable Bias in Causal Machine Learning. (session chair: Ilya Shpitser)",
  "duration": 3356,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    }
  ],
  "speakers": [
    "Victor Chernozhukov"
  ],
  "tags": [
    "Keynote"
  ],
  "thumbnail_url": "https://i.ytimg.com/vi/IPcfCDESP5E/maxresdefault.jpg",
  "title": "Long Story Short: Omitted Variable Bias in Causal ML",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=IPcfCDESP5E"
    }
  ]
}
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
{
  "description": "Introduction by Kun Zhang and opening remarks by Richard Scheines and Peter Spirtes\n\nKeynote talk 1: Alexandra Chouldechova. Algorithms in Unjust Systems. (session chair: Peter Spirtes)",
  "duration": 3957,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    }
  ],
  "speakers": [
    "Alexandra Chouldechova"
  ],
  "tags": [
    "Keynote"
  ],
  "thumbnail_url": "https://i.ytimg.com/vi_webp/8fmfTjcC1Ug/maxresdefault.webp",
  "title": "Algorithms in Unjust Systems",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=8fmfTjcC1Ug"
    }
  ]
}
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
{
  "description": "\"An Improved Variational Approximate Posterior for the Deep Wishart Process\" \nSebastian W. Ober, Ben Anson, Edward Milsom, Laurence Aitchison\n(https://proceedings.mlr.press/v216/ober23a.html)\n\nAbstract\nDeep kernel processes are a recently introduced class of deep Bayesian models that have the flexibility of neural networks, but work entirely with Gram matrices. They operate by alternately sampling a Gram matrix from a distribution over positive semi-definite matrices, and applying a deterministic transformation. When the distribution is chosen to be Wishart, the model is called a deep Wishart process (DWP). This particular model is of interest because its prior is equivalent to a deep Gaussian process (DGP) prior, but at the same time it is invariant to rotational symmetries, leading to a simpler posterior distribution. Practical inference in the DWP was made possible in recent work (\u201cA variational approximate posterior for the deep Wishart process\u201d Ober and Aitchison, 2021a) where the authors used a generalisation of the Bartlett decomposition of the Wishart distribution as the variational approximate posterior. However, predictive performance in that paper was less impressive than one might expect, with the DWP only beating a DGP on a few of the UCI datasets used for comparison. In this paper, we show that further generalising their distribution to allow linear combinations of rows and columns in the Bartlett decomposition results in better predictive performance, while incurring negligible additional computation cost.\n\nSlides: https://www.auai.org/uai2023/oral_slides/402-oral-slides.pdf",
  "duration": 1457,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://proceedings.mlr.press/v216/ober23a.html",
      "url": "https://proceedings.mlr.press/v216/ober23a.html"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/402-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/402-oral-slides.pdf"
    }
  ],
  "speakers": [
    "Sebastian W. Ober",
    "Ben Anson",
    "Edward Milsom",
    "Laurence Aitchison"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/xmsLlaIWoUI/maxresdefault.jpg",
  "title": "An Improved Variational Approximate Posterior for the Deep Wishart Process",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=xmsLlaIWoUI"
    }
  ]
}
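
The abstract above builds on the Bartlett decomposition of the Wishart distribution. For orientation, here is a minimal NumPy sketch of the classical Bartlett sampler; the paper's actual contribution (allowing linear combinations of rows and columns in the decomposition) is not reproduced, and the function name is illustrative.

import numpy as np

def sample_wishart_bartlett(nu, Sigma, rng):
    """Draw W ~ Wishart(nu, Sigma) via the classical Bartlett decomposition.

    This is the construction that Ober & Aitchison (2021) generalise for
    their variational posterior; the generalisation itself is not shown.
    """
    p = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    A = np.zeros((p, p))
    # Diagonal: square roots of chi-squared draws with decreasing dof.
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(nu - i))
    # Strict lower triangle: independent standard normals.
    rows, cols = np.tril_indices(p, k=-1)
    A[rows, cols] = rng.standard_normal(len(rows))
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(0)
W = sample_wishart_bartlett(nu=10, Sigma=np.eye(3), rng=rng)  # 3x3 PSD sample
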
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
{
  "description": "\"MixupE: Understanding and Improving Mixup from Directional Derivative Perspective\" \nYingtian Zou, Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi\n(https://proceedings.mlr.press/v216/zou23a.html)\n\nAbstract\nMixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve the generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. Based on this new insight, we propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures, for instance, exhibiting an improvement over Mixup by 0.8% in ImageNet top-1 accuracy.\n\nSlides: https://www.auai.org/uai2023/oral_slides/129-oral-slides.pdf",
  "duration": 1658,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/129-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/129-oral-slides.pdf"
    },
    {
      "label": "https://proceedings.mlr.press/v216/zou23a.html",
      "url": "https://proceedings.mlr.press/v216/zou23a.html"
    }
  ],
  "speakers": [
    "Yingtian Zou",
    "Vikas Verma",
    "Sarthak Mittal",
    "Wai Hoh Tang",
    "Hieu Pham",
    "Juho Kannala",
    "Yoshua Bengio",
    "Arno Solin",
    "Kenji Kawaguchi"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/Fnwi35bNZbo/maxresdefault.jpg",
  "title": "MixupE: Understanding and Improving Mixup from Directional Derivative Perspective",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=Fnwi35bNZbo"
    }
  ]
}
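
For context on the baseline the abstract refers to: vanilla mixup interpolates pairs of inputs and labels with a Beta-distributed coefficient. A minimal NumPy sketch follows; MixupE's directional-derivative regulariser itself is not shown.

import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Vanilla mixup: convex-combine a batch with a shuffled copy of itself.

    This is only the baseline augmentation the paper starts from, not the
    MixupE variant it proposes.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))        # pair each sample with a random partner
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
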
Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
{
  "description": "\"Neural Probabilistic Logic Programming in Discrete-Continuous Domains\" \nLennert De Smet, Pedro Zuidberg Dos Martires, Robin Manhaeve, Giuseppe Marra, Angelika Kimmig, Luc De Raedt\n(https://proceedings.mlr.press/v216/de-smet23a.html)\n\nAbstract\nNeural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic. It has been shown to aid learning in the limited data regime and to facilitate inference on out-of-distribution data. Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory, which additionally allows learning under uncertainty. A major limitation of current probabilistic NeSy systems, such as DeepProbLog, is their restriction to finite probability distributions, i.e., discrete random variables. In contrast, deep probabilistic programming (DPP) excels in modelling and optimising continuous probability distributions. Hence, we introduce DeepSeaProbLog, a neural probabilistic logic programming language that incorporates DPP techniques into NeSy. Doing so results in the support of inference and learning of both discrete and continuous probability distributions under logical constraints. Our main contributions are 1) the semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a proven asymptotically unbiased learning algorithm, and 3) a series of experiments that illustrate the versatility of our approach.\n\nSlides: https://www.auai.org/uai2023/oral_slides/233-oral-slides.pdf",
  "duration": 1312,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://proceedings.mlr.press/v216/de-smet23a.html",
      "url": "https://proceedings.mlr.press/v216/de-smet23a.html"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/233-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/233-oral-slides.pdf"
    }
  ],
  "speakers": [
    "Lennert De Smet",
    "Pedro Zuidberg Dos Martires",
    "Robin Manhaeve",
    "Giuseppe Marra",
    "Angelika Kimmig",
    "Luc De Raedt"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/eqWOMycQOk4/maxresdefault.jpg",
  "title": "Neural Probabilistic Logic Programming in Discrete-Continuous Domains",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=eqWOMycQOk4"
    }
  ]
}
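
As orientation for the discrete-continuous setting the abstract describes, here is a toy Monte Carlo estimate of a logical query mixing a Bernoulli and a Gaussian variable. This illustrates only the inference problem, not DeepSeaProbLog's algorithm; the query, distributions, and numbers are made up.

import numpy as np

# Toy query in the spirit of   alarm :- burglary, sensor_reading > 2.0
# with burglary ~ Bernoulli(0.1) and sensor_reading ~ Normal(3, 1).
rng = np.random.default_rng(0)
n = 100_000
burglary = rng.random(n) < 0.1
sensor = rng.normal(loc=3.0, scale=1.0, size=n)
alarm = burglary & (sensor > 2.0)
print(f"P(alarm) ~= {alarm.mean():.4f}")   # Monte Carlo estimate, ~0.084
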
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
{
  "description": "\"On Minimizing the Impact of Dataset Shifts on Actionable Explanations\" \nAnna P. Meyer, Dan Ley, Suraj Srinivas, Himabindu Lakkaraju\n(https://proceedings.mlr.press/v216/meyer23a.html)\n\nAbstract\nThe Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models are periodically retrained to handle dataset shifts. This process may invalidate some of the previously prescribed explanations, thus rendering them unactionable. But, it is unclear if and when such invalidations occur, and what factors determine explanation stability i.e., if an explanation remains unchanged amidst model retraining due to dataset shifts. In this paper, we address the aforementioned gaps and provide one of the first theoretical and empirical characterizations of the factors influencing explanation stability. To this end, we conduct rigorous theoretical analysis to demonstrate that model curvature, weight decay parameters while training, and the magnitude of the dataset shift are key factors that determine the extent of explanation (in)stability. Extensive experimentation with real-world datasets not only validates our theoretical results, but also demonstrates that the aforementioned factors dramatically impact the stability of explanations produced by various state-of-the-art methods.\n\nSlides: https://www.auai.org/uai2023/oral_slides/517-oral-slides.pdf",
  "duration": 1537,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://proceedings.mlr.press/v216/meyer23a.html",
      "url": "https://proceedings.mlr.press/v216/meyer23a.html"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/517-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/517-oral-slides.pdf"
    }
  ],
  "speakers": [
    "Anna P. Meyer",
    "Dan Ley",
    "Suraj Srinivas",
    "Himabindu Lakkaraju"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/TTGw8145RD0/maxresdefault.jpg",
  "title": "On Minimizing the Impact of Dataset Shifts on Actionable Explanations",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=TTGw8145RD0"
    }
  ]
}
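
A toy sketch of the phenomenon the abstract studies: retraining under a covariate shift moves a simple gradient explanation (here, a logistic regression coefficient vector), and stronger weight decay damps the drift. The data, shift size, and the use of sklearn's C as the weight-decay knob are illustrative assumptions, not the paper's experimental setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(size=500) > 0)

for C in (100.0, 0.1):                                  # weak vs strong weight decay
    before = LogisticRegression(C=C).fit(X, y)
    X_shift = X + rng.normal(scale=0.3, size=X.shape)   # synthetic covariate shift
    after = LogisticRegression(C=C).fit(X_shift, y)
    drift = np.linalg.norm(after.coef_ - before.coef_)  # explanation instability proxy
    print(f"C={C}: explanation drift = {drift:.3f}")
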
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
{
  "description": "\"Human-in-the-Loop Mixup\" \nKatherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley C. Love, Adrian Weller\n(https://proceedings.mlr.press/v216/collins23a.html)\n\nAbstract\nAligning model representations to humans has been found to improve robustness and generalization. However, such methods often focus on standard observational data. Synthetic data is proliferating and powering many advances in machine learning; yet, it is not always clear whether synthetic labels are perceptually aligned to humans \u2013 rendering it likely model representations are not human aligned. We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration. We design a comprehensive series of elicitation interfaces, which we release as HILL MixE Suite, and recruit 159 participants to provide perceptual judgments along with their uncertainties, over mixup examples. We find that human perceptions do not consistently align with the labels traditionally used for synthetic points, and begin to demonstrate the applicability of these findings to potentially increase the reliability of downstream models, particularly when incorporating human uncertainty. We release all elicited judgments in a new data hub we call H-Mix.\n\nSlides: https://www.auai.org/uai2023/oral_slides/256-oral-slides.pdf",
  "duration": 1572,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://proceedings.mlr.press/v216/collins23a.html",
      "url": "https://proceedings.mlr.press/v216/collins23a.html"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/256-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/256-oral-slides.pdf"
    }
  ],
  "speakers": [
    "Katherine M. Collins",
    "Umang Bhatt",
    "Weiyang Liu",
    "Vihari Piratla",
    "Ilia Sucholutsky",
    "Bradley C. Love",
    "Adrian Weller"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/Ar9sYfl7gQA/maxresdefault.jpg",
  "title": "Human-in-the-Loop Mixup",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=Ar9sYfl7gQA"
    }
  ]
}
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
{
  "description": "\"Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?\" \nYusuf Sale, Michele Caprio, Eyke H\u00fcllermeier \n(https://proceedings.mlr.press/v216/sale23a.html)\n\nAbstract\nAdequate uncertainty representation and quantification have become imperative in various scientific disciplines, especially in machine learning and artificial intelligence. As an alternative to representing uncertainty via one single probability measure, we consider credal sets (convex sets of probability measures). The geometric representation of credal sets as d-dimensional polytopes implies a geometric intuition about (epistemic) uncertainty. In this paper, we show that the volume of the geometric representation of a credal set is a meaningful measure of epistemic uncertainty in the case of binary classification, but less so for multi-class classification. Our theoretical findings highlight the crucial role of specifying and employing uncertainty measures in machine learning in an appropriate way, and for being aware of possible pitfalls.\n\nSlides: https://www.auai.org/uai2023/oral_slides/482-oral-slides.pdf",
  "duration": 1521,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/482-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/482-oral-slides.pdf"
    },
    {
      "label": "https://proceedings.mlr.press/v216/sale23a.html",
      "url": "https://proceedings.mlr.press/v216/sale23a.html"
    }
  ],
  "speakers": [
    "Yusuf Sale",
    "Michele Caprio",
    "Eyke Hüllermeier"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/em-IA8DCdMk/hqdefault.jpg?sqp=-oaymwEmCOADEOgC8quKqQMa8AEB-AH-CYAC0AWKAgwIABABGGUgZShlMA8=&rs=AOn4CLDi8Zi1_5qfFBIe_TGHVOVkTKLYXA",
  "title": "Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=em-IA8DCdMk"
    }
  ]
}
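
To make the abstract's central quantity concrete: a credal set given by finitely many extreme probability vectors is a polytope, and its volume can be computed directly. A small sketch with made-up extreme points; in the binary case the "volume" reduces to an interval length, as the paper notes.

import numpy as np
from scipy.spatial import ConvexHull

# Binary classification: the credal set lives on a line segment, so its
# volume is just the length of the interval of P(class 1).
probs_binary = np.array([0.2, 0.35, 0.6])           # extreme points, P(class 1)
print("binary volume:", probs_binary.max() - probs_binary.min())

# For 3 classes the simplex is 2-dimensional; project onto the first two
# coordinates (the third is determined) and take the 2-D hull volume (area).
extremes = np.array([[0.6, 0.2, 0.2],
                     [0.2, 0.6, 0.2],
                     [0.2, 0.2, 0.6],
                     [0.4, 0.4, 0.2]])
hull = ConvexHull(extremes[:, :2])
print("3-class volume:", hull.volume)
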
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
{
  "description": "\"Parity Calibration\" \nYoungseog Chung, Aaron Rumack, Chirag Gupta \n(https://proceedings.mlr.press/v216/chung23a.html)\n\nAbstract\nIn a sequential regression setting, a decision-maker may be primarily concerned with whether the future observation will increase or decrease compared to the current one, rather than the actual value of the future observation. In this context, we introduce the notion of parity calibration, which captures the goal of calibrated forecasting for the increase-decrease (or \u201cparity\") event in a timeseries. Parity probabilities can be extracted from a forecasted distribution for the output, but we show that such a strategy leads to theoretical unpredictability and poor practical performance. We then observe that although the original task was regression, parity calibration can be expressed as binary calibration. Drawing on this connection, we use an online binary calibration method to achieve parity calibration. We demonstrate the effectiveness of our approach on real-world case studies in epidemiology, weather forecasting, and model-based control in nuclear fusion.\n\nSlides: https://www.auai.org/uai2023/oral_slides/631-oral-slides.pdf",
  "duration": 1326,
  "language": "eng",
  "recorded": "2023-07-31",
  "related_urls": [
    {
      "label": "Conference Website",
      "url": "https://www.auai.org/uai2023/"
    },
    {
      "label": "https://proceedings.mlr.press/v216/chung23a.html",
      "url": "https://proceedings.mlr.press/v216/chung23a.html"
    },
    {
      "label": "https://www.auai.org/uai2023/oral_slides/631-oral-slides.pdf",
      "url": "https://www.auai.org/uai2023/oral_slides/631-oral-slides.pdf"
    }
  ],
  "speakers": [
    "Youngseog Chung",
    "Aaron Rumack",
    "Chirag Gupta"
  ],
  "tags": [],
  "thumbnail_url": "https://i.ytimg.com/vi/lDvIIYPJT6Q/maxresdefault.jpg",
  "title": "Parity Calibration",
  "videos": [
    {
      "type": "youtube",
      "url": "https://www.youtube.com/watch?v=lDvIIYPJT6Q"
    }
  ]
}
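
The parity probability the abstract starts from can be read off a forecast distribution directly. A small sketch assuming a Gaussian forecast; the normality assumption and all numbers are illustrative, and the paper's online binary calibration method is not shown.

import numpy as np
from scipy import stats

def parity_probability(forecast_mean, forecast_std, current_value):
    """P(y_{t+1} > y_t) under a Normal(mean, std) forecast for y_{t+1}."""
    return 1.0 - stats.norm.cdf(current_value, loc=forecast_mean, scale=forecast_std)

p_up = parity_probability(forecast_mean=5.2, forecast_std=1.0, current_value=5.0)
print(f"P(increase) = {p_up:.3f}")   # ~0.579: slightly more likely to rise
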
