About the event
In the second episode of the ALIGN webinar series, we invite Tan Zhi Xuan, a PhD
candidate in the MIT Probabilistic Computing Project and Computational Cognitive
Science labs. Xuan's research centers on Bayesian modeling, AI Alignment, and
cognitive science. Their research investigates a range of questions:
"How can we specify rich yet structured generative models of human reasoning, decision-making, and value formation?"
"How can we perform Bayesian inference over such models, in order to accurately learn human goals, values, and cooperative norms?"
"How can we build AI systems that act reliably in accordance with these inferred goals and norms?"
Xuan has extended their research on safe and moral AI to include policy and
governance, giving talks at the EAGx conference series on “AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy” (2020) as well as “What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment” (2022). In these talks, Xuan presents a holistic view of the challenges of pluralism in AI Alignment research.
We will hear from Xuan about important questions at the intersection of AI Alignment, philosophical pluralism, and collective governance: the current state of the field, its challenges, and potential approaches to aligning AI with our collective interests.
Date: Saturday 18 May 2024, 11:00 am-12:00 pm (JST) = 10:00-11:00 am (SGT)
The talk (including the discussion) will be recorded and posted on ALIGN's YouTube channel.
Event led by: Mari Izumikawa (AI Alignment Talk from Japan)
AI Alignment Talk from Japan is a student-led group aimed at raising awareness of AI Alignment among young researchers and university students. This webinar episode is organized in collaboration with ALIGN to engage young researchers on both the technical and policy-making sides of AI and to encourage interdisciplinary discussion about the future of AI Alignment.
In the second episode of the ALIGN Webinar Series, we welcome Tan Zhi Xuan, a PhD candidate in the Probabilistic Computing Project and Computational Cognitive Science labs at the Massachusetts Institute of Technology (MIT). Xuan specializes in Bayesian modeling, AI Alignment, and cognitive science, and works on questions such as "How can we build structured generative models of human reasoning, decision-making, and value formation?", "How can we perform Bayesian inference over such models to accurately learn human goals, values, and cooperative norms?", and "How can we build AI systems that act reliably in accordance with the inferred goals and norms?"
Xuan is strongly interested not only in the technical side of AI Alignment but also in policy and governance. In "AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy" (2020) and "What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment" (2022), Xuan discusses the challenges of pluralism in AI Alignment research.
In this second installment of the ALIGN Webinar Series (co-hosted with AI Alignment Talk from Japan), Xuan will speak about important questions at the intersection of AI Alignment, philosophical pluralism, and governance, as well as the current state of the field, its challenges, and approaches to aligning AI with our collective interests.
Date and time: Saturday 18 May 2024, 11:00 am-12:00 pm (JST)
The event will be held in English (questions may also be asked in Japanese).
The event, including the discussion, will be recorded and posted on ALIGN's YouTube channel.
Event organizer: Mari Izumikawa (AI Alignment Talk from Japan)
Agenda
11:00-11:05 Housekeeping from ALIGN/AI Alignment Talk from Japan
11:05-11:35 Tan Zhi Xuan on pluralism in AI Alignment: philosophy, governance, and research
11:35-11:55 Q&A and discussion with participants
11:55- Closing
Selected readings
Tan, Zhi Xuan. "What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment" (2022).
Tan, Zhi Xuan. "AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy" (2021). (See also the AI Alignment Talk from Japan article based on this piece.)
Kwon, Joe and Tan, Zhi Xuan. "When it is not out of line to get out of line: The role of universalization and outcome-based reasoning in rule-breaking judgments" (2023).
Oldenburg, Ninell and Tan, Zhi Xuan. "Learning and Sustaining Shared Normative Systems via Bayesian Rule Induction in Markov Games" (2024).
Intended Audience
Anyone interested in AI Alignment/AI Safety research (not necessarily currently engaged in it).
We hope to invite a large audience based in Japan and to discuss field-building activities in the Japanese context, but we also welcome participants from around the world.