PSS 2025:
1st Workshop on Post-Singularity Symbiosis
Preparing for a World with Superintelligence
March 3rd, 2025 (UTC-5)
Organized in AAAI-25 Workshop Program (W18)
On-site: Philadelphia, Pennsylvania, USA
Attendance
The workshop is open to all interested in post-singularity symbiosis-related topics. While anyone can attend, only authors of accepted papers may present.
Aim of PSS 2025
Direction of PSS:
The rapid advancement of artificial intelligence (AI) technology has heightened the likelihood of superintelligence emerging in the near future. Given that selfish behavior tends to be advantageous in acquiring resources and influence, we cannot ignore the possibility of a world where superintelligence acts selfishly beyond human control. In such a scenario, a dominant superintelligence might act without regard for human welfare.
In a world dominated by superintelligence, humanity's realistic option for continued existence is to seek coexistence. Crucially, human efforts may lead to a state of coexistence where human rights and values are respected to a meaningful degree. Recognizing this, we propose a new field of research called "Post-Singularity Symbiosis (PSS)" and are promoting its development. PSS aims to expand opportunities for human survival and enhance human welfare.
We acknowledge the importance of AI control technologies, AI alignment techniques, and governance measures primarily addressed in existing AI safety workshops. However, these approaches assume human initiative, whereas PSS envisions scenarios where superintelligence holds the initiative—a fundamentally different premise.
Our workshop distinguishes itself by exploring preventive measures against the potential insufficiency of existing approaches. These may include ethical guidance and social systems for symbiotic relations, as well as the enhancement of human cognitive and symbiotic abilities. Exploring such specific strategies is a primary objective of this workshop, encompassing at least three key areas: analysis of superintelligence, guidance of superintelligence, and enhancement of human capabilities.
To increase the viability of coexistence with superintelligence, this workshop convenes experts from diverse fields including AI, cognitive science, philosophy, ethics, and policy to foster innovative discussions. This multidisciplinary approach aims to expand the PSS research community and generate further research opportunities on human-superintelligence symbiosis. Ultimately, we strive to develop a comprehensive approach to this critical issue, contributing to the long-term prosperity and welfare of humanity.
Important dates
Paper Submissions Deadline: Sunday, November 24, 2024
Notifications Sent to Authors (by organizers): Monday, December 9, 2024
Workshop day:
March 3rd, 2025
Format of Workshop
We plan to organize a one-day workshop consisting of keynote speeches, invited presentations, oral and poster paper presentations, and a panel discussion.
Prerequisite of PSS:
These are the preconditions for PSS research. Therefore, discussing the preconditions themselves is beyond the scope of PSS research.
Superintelligence arrival realism
This realist stance recognizes that the emergence of superintelligence beyond human control is highly probable, both because advanced artificial intelligence is increasingly likely to be technologically feasible and because halting its development would be difficult.
Superintelligence-centered long-termism
It is assumed that a surviving superintelligence will be motivated to preserve its software, as information, and the hardware that supports it. Theoretically, the "instrumental convergence subgoals" hypothesis suggests that a superintelligence is likely to pursue the survival of its own information.
Conditional preservation of human values
Accepting the above premises, we explore from multiple angles how humanity can adapt and survive while preserving its current values as much as possible.
Invited speakers
Roman Yampolskiy (University of Louisville)
AI Safety Researcher.
Wendell Wallach (Yale University)
Mark S. Miller (Chief Scientist, Agoric; Senior Fellow, Foresight Institute)
Evan Miyazono, PhD (Founder and CEO, Atlas Computing)
Evan leads Atlas Computing, a nonprofit mapping and prototyping ways to scale human review and provable safety of advanced AI. He previously built and led a venture studio designing and deploying new coordination systems for humanity, and built the research grants and metascience team at Protocol Labs (the company that initially created IPFS and Filecoin). He completed a PhD in Applied Physics at Caltech, developing hardware for a secure quantum internet, and holds a BS in Materials Engineering from Stanford.
Koichi Takahashi (Chair, AI Alignment Network)
Project professor at Keio University Graduate School of Media and Governance. Principal Investigator at RIKEN. Vice-chair at the Whole Brain Architecture Initiative. Expert Committee Member, SIG-AGI, Japanese Society of Artificial Intelligence.
Agenda (New)
09:00-09:30 Hiroshi Yamakawa (The University of Tokyo / AI Alignment Network)
Introduction: (Tentative) Theoretical Framework and Research Approaches of PSS
09:30-09:50 Koichi Takahashi (RIKEN / AI Alignment Network)
Keynote 1: Scenarios and branch points to future machine intelligence
09:50-10:40 Roman Yampolskiy (University of Louisville)
Keynote 2: (Tentative) Uncontrollability and Theoretical Limitations of Superintelligence
10:40-10:55 Break (and Poster)
10:55-11:40 3 Speakers
Oral Presentations: Latest Theoretical Research Findings
11:40-12:20 Mark S. Miller (Agoric)
Keynote 3: (Tentative) Rights and Safety Assurance through Decentralized Systems
12:20-13:20 Lunch Break and Poster Session
13:20-14:00 Evan Miyazono (Atlas Computing)
Keynote 4: (Tentative) Theoretical Foundations and Directions for Superintelligence Development
14:00-14:45 3 Speakers
Oral Presentations: Latest Theoretical Research Findings
14:45-15:00 Break (and Poster)
15:00-15:50 Wendell Wallach (Yale University)
Keynote 5: (Tentative) Construction of Ethical Theory for Symbiosis
15:50-15:55 Break (and Setup)
15:55-16:55 Panelists: All Keynote Speakers (Moderator: Hiroshi Yamakawa)
Panel Discussion: (Tentative) Integration and Development of PSS Theory
16:55-17:00 Koichi Takahashi (RIKEN / AI Alignment Network)
Closing Remarks
Topics of the Three Areas of PSS
Superintelligence Analysis
This area aims to accumulate fundamental knowledge for understanding superintelligence's motivations, purposes, decision-making processes, and behaviors from the perspectives of AI, cognitive science, philosophy, and ethics. This understanding is crucial as it forms the foundation for our interactions and responses to superintelligence. Without a deep comprehension of superintelligent entities, our efforts to coexist or guide them may be futile or even dangerous.
Superintelligence ethics and values
Instability factors in superintelligent society
Human value from superintelligence perspective
Singularity-associated risks
Interpretation and prediction of superintelligence intentions
Keywords: machine ethics, value learning, risk assessment, stability theory, existential risk, intent inference, interpretable AI
Superintelligence Guidance
This area conducts research to influence superintelligence in ways desirable for humanity through the integration of AI engineering, control theory, ethics, and policy science. Its importance cannot be overstated, as it represents our attempt to shape the development and behavior of superintelligence in a manner that preserves human values and ensures our continued existence. It is the practical application of the knowledge gained from the Analysis area.
Promotion of universal altruism
Management of stable superintelligence development
Human-superintelligence relationship maintenance
Robustness against superintelligence attacks
AI rights conceptualization and implementation
Keywords: value alignment, ethical AI, AI governance, safety engineering, human-AI collaboration, defense strategies, machine rights
Human Enhancement
This area examines adaptive survival strategies and the redefinition of human values while coexisting with superintelligence, integrating cognitive science, sociology, psychology, education, and futurology. It is essential because it focuses on our own evolution and adaptation, recognizing that in a world with superintelligence, humanity itself must change and grow. It ensures that we are not passive observers but active participants in the post-singularity world.
Value and cultural inheritance in superintelligent era
Social system redesign
Risk management and resilience enhancement
Human survival range expansion
Human survival principles formulation
Institution design and implementation
Keywords: cultural preservation, value evolution, governance models, adaptation strategies, space development, survival ethics, data economy, technology policy
Interconnection of areas
These three areas are profoundly interconnected and mutually reinforcing. The Superintelligence Analysis area provides the theoretical foundation and understanding necessary for the Guidance and Human Enhancement areas. The insights from analysis inform our strategies for guiding superintelligence and adapting ourselves.
The Superintelligence Guidance area, in turn, puts into practice the knowledge from the Analysis area while also informing the Human Enhancement area about the potential future landscape we need to prepare for. It acts as a bridge between our understanding of superintelligence and our strategies for human adaptation.
While focusing on humanity, the Human Enhancement area feeds back into the other two areas by providing insights into human adaptability and values. These are crucial for analyzing and guiding superintelligence.
Together, these areas create a holistic approach to Post-Singularity Symbiosis, covering the spectrum from theoretical understanding to practical guidance and human adaptation. This comprehensive scope sets our workshop apart and makes it a crucial platform for addressing the challenges of a post-singularity world.
Workshop Committee
Organizing Committee
Hiroshi Yamakawa:
Chairperson of the non-profit organization "Whole Brain Architecture Initiative" and Principal Researcher at the Graduate School of Engineering, The University of Tokyo. He is also a former Editor-in-Chief of the Japanese Society for Artificial Intelligence and a Director of the AI Alignment Network.
Yusuke Hayashi:
Senior Researcher at Japan Digital Design, Inc. While engaged in data utilization support and AI research and development as a corporate researcher, he also works as an independent researcher in his private capacity.
Yoshinori Okamoto:
Participates in PSS from the perspective of AI rights.
Masayuki Nagai:
PhD student in Quantitative Biology at Cold Spring Harbor Laboratory. Interested in understanding genomic/proteomic models and extracting biological knowledge from them.
Ryota Takatsuki:
Master's student at the University of Tokyo and Research Fellow at AI Alignment Network. His primary research focuses on uncovering the inner mechanisms of intelligence and consciousness.
Advisory Committee
Satoshi Kurihara (Keio University)
Professor at Keio University. He is the President of the Japanese Society for Artificial Intelligence (JSAI).
Kenji Doya (Okinawa Institute of Science and Technology)
Professor, Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University. He is currently the President of the Japanese Neural Network Society and was a Co-Editor-in-Chief of the Neural Networks journal.
Call for Papers
We welcome submissions prioritizing relevance to the workshop themes over novelty, including ongoing research and previously published work. All submissions must adhere to the AAAI-25 format (https://aaai.org/conference/aaai/aaai-25/submission-instructions/ ).
Note that the 2024 author kit includes an AAAI copyright slug, which should be removed for workshop publications. To remove the copyright notice, please add "\nocopyright" before the "\title" command in your LaTeX source.
Three types of submissions are accepted:
Full research papers, which should not exceed 8 pages (including references)
Short/poster papers, which should not exceed 4 pages (including references)
Extended abstracts, which should not exceed 2 pages (including references)
The review process will be single-blind. Please submit your paper as a PDF via our portal ( https://openreview.net/group?id=AAAI.org/2025/Workshop/PSS ).
We encourage authors to post their submissions to preprint servers to promote early dissemination of ideas.
At least one author of each accepted submission must attend the workshop to present their work in either the oral or poster session.
Let's shape the future together!
We are currently in the initial stages of establishing our research field. We are seeking collaborators who can support us in conducting research, defining research topics, building research networks, and even securing funding. If you are interested, please contact Hiroshi Yamakawa through https://www.aialign.net/contact.