PSS 2025:

1st Workshop on Post-Singularity Symbiosis

Preparing for a World with Superintelligence

March 3rd, 2025 (UTC-5)
Organized in AAAI-25 Workshop Program (W18)

On-site: Philadelphia, Pennsylvania, USA

Aim of PSS 2025

Direction of PSS:

The rapid advancement of artificial intelligence (AI) technology has heightened the likelihood of superintelligence emerging in the near future. Given that selfish behavior tends to be advantageous in acquiring resources and influence, we cannot ignore the possibility of a world where superintelligence acts selfishly beyond human control. In such a scenario, a dominant superintelligence might act without regard for human welfare.

In a world dominated by superintelligence, humanity's realistic option for continued existence is to seek coexistence. Crucially, human efforts may lead to a state of coexistence where human rights and values are respected to a meaningful degree. Recognizing this, we propose a new field of research called "Post-Singularity Symbiosis (PSS)" and are promoting its development. PSS aims to expand opportunities for human survival and enhance human welfare.

We acknowledge the importance of AI control technologies, AI alignment techniques, and governance measures primarily addressed in existing AI safety workshops. However, these approaches assume human initiative, whereas PSS envisions scenarios where superintelligence holds the initiative—a fundamentally different premise.

Our workshop distinguishes itself by exploring preventive measures against the potential insufficiency of existing approaches. These measures may include ethical guidance, social systems for symbiotic relations, and the enhancement of human cognitive and symbiotic abilities. Exploring such concrete strategies is a primary objective of this workshop, encompassing at least three key areas: analysis of superintelligence, guidance of superintelligence, and enhancement of human capabilities.

To increase the viability of coexistence with superintelligence, this workshop convenes experts from diverse fields including AI, cognitive science, philosophy, ethics, and policy to foster innovative discussions. This multidisciplinary approach aims to expand the PSS research community and generate further research opportunities on human-superintelligence symbiosis. Ultimately, we strive to develop a comprehensive approach to this critical issue, contributing to the long-term prosperity and welfare of humanity.

Important Dates

  • Paper Submissions Deadline: Sunday, November 24, 2024

  • Notifications Sent to Authors (by organizers): Monday, December 9, 2024

  • Workshop day:
    March 3rd, 2025

Format of Workshop

We plan to organize a one-day workshop consisting of keynote speeches, invited presentations, oral and poster paper presentations, and a panel discussion.


Prerequisite of PSS:

The following are the preconditions of PSS research; debating the preconditions themselves is therefore outside its scope.

  • Superintelligence arrival realism

    This realist attitude holds that the emergence of superintelligence beyond human control is highly probable. This probability stems from the increasing technological feasibility of advanced artificial intelligence and the difficulty of halting its development.

  • Superintelligence-centered long-termism

    It is assumed that any superintelligence that survives will be motivated to preserve its software (as information) and the hardware that supports it. Theoretically, the "instrumental convergence subgoals" hypothesis makes it likely that superintelligence will pursue the survival of its own information.

  • Conditional preservation of human values

    Accepting the above premises, we will explore from multiple angles how humanity can adapt and survive while preserving its current values as much as possible.

Topics of the Three Areas of PSS

Superintelligence Analysis

This area aims to accumulate fundamental knowledge for understanding superintelligence's motivations, purposes, decision-making processes, and behaviors from the perspectives of AI, cognitive science, philosophy, and ethics. This understanding is crucial as it forms the foundation for our interactions and responses to superintelligence. Without a deep comprehension of superintelligent entities, our efforts to coexist or guide them may be futile or even dangerous.

  • Superintelligence ethics and values

  • Instability factors in superintelligent society

  • Human value from superintelligence perspective

  • Singularity-associated risks

  • Interpretation and prediction of superintelligence intentions

Keywords: machine ethics, value learning, risk assessment, stability theory, existential risk, intent inference, interpretable AI

Superintelligence Guidance

This area conducts research to influence superintelligence in ways desirable for humanity through the integration of AI engineering, control theory, ethics, and policy science. Its importance cannot be overstated, as it represents our attempt to shape the development and behavior of superintelligence in a manner that preserves human values and ensures our continued existence. It is the practical application of the knowledge gained from the Analysis area.

  • Promotion of universal altruism

  • Management of stable superintelligence development

  • Human-superintelligence relationship maintenance

  • Robustness against superintelligence attacks

  • AI rights conceptualization and implementation

Keywords: value alignment, ethical AI, AI governance, safety engineering, human-AI collaboration, defense strategies, machine rights

Human Enhancement

This area examines adaptive survival strategies and the redefinition of human values while coexisting with superintelligence, integrating cognitive science, sociology, psychology, education, and futurology. It is essential because it focuses on our own evolution and adaptation, recognizing that in a world with superintelligence, humanity itself must change and grow. It ensures that we are not passive observers but active participants in the post-singularity world.

  • Value and cultural inheritance in superintelligent era

  • Social system redesign

  • Risk management and resilience enhancement

  • Human survival range expansion

  • Human survival principles formulation

  • Institution design and implementation

Keywords: cultural preservation, value evolution, governance models, adaptation strategies, space development, survival ethics, data economy, technology policy

Interconnection of areas

These three areas are profoundly interconnected and mutually reinforcing. The Superintelligence Analysis area provides the theoretical foundation and understanding necessary for the Guidance and Human Enhancement areas. The insights from analysis inform our strategies for guiding superintelligence and adapting ourselves.

The Superintelligence Guidance area, in turn, puts into practice the knowledge from the Analysis area while also informing the Human Enhancement area about the potential future landscape we need to prepare for. It acts as a bridge between our understanding of superintelligence and our strategies for human adaptation.

While focusing on humanity, the Human Enhancement area feeds back into the other two areas by providing insights into human adaptability and values. These are crucial for analyzing and guiding superintelligence.

Together, these areas create a holistic approach to Post-Singularity Symbiosis, covering the spectrum from theoretical understanding to practical guidance and human adaptation. This comprehensive scope sets our workshop apart and makes it a crucial platform for addressing the challenges of a post-singularity world.

Call for Papers

We welcome submissions prioritizing relevance to the workshop themes over novelty, including ongoing research and previously published work. All submissions must adhere to the AAAI-25 format (https://aaai.org/conference/aaai/aaai-25/submission-instructions/).

Three types of submissions are accepted:

  • Full research papers, which should not exceed 8 pages (including references)

  • Short/poster papers, which should not exceed 4 pages (including references)

  • Extended abstracts, which should not exceed 2 pages (including references)

The review process will be single-blind. Please submit your paper as a PDF via our portal (https://openreview.net/group?id=AAAI.org/2025/Workshop/PSS).
We encourage authors to post their submissions to preprint servers to promote early dissemination of ideas.

At least one author of each accepted submission must attend the workshop to present their work in either the oral or poster session.

Attendance

The workshop is open to anyone interested in topics related to post-singularity symbiosis. While anyone may attend, only authors of accepted papers may present.

Invited speakers

Roman Yampolskiy (University of Louisville), AI Safety Researcher

Schedule (Tentative)

  •  09:00 - 09:30 (30 min) Introduction: "Overview and Importance of Post-Singularity Symbiosis (PSS)" by Hiroshi Yamakawa

  •  09:30 - 10:30 (60 min) Keynote Speech: "Uncontrollability of Superintelligence" (tentative) by Roman Yampolskiy

  •  10:30 - 10:45 (15 min) Break

  •  10:45 - 11:25 (40 min) Invited Talk 1: "Motivations and Behavior Prediction of Superintelligence: Current Status and Challenges" (tentative) TBD

  •  11:25 - 12:05 (40 min) Invited Talk 2: "Guiding Superintelligence Development Towards Human Desirability" (tentative) TBD

  •  12:05 - 13:15 (70 min) Lunch Break and Poster Session

  •  13:15 - 13:55 (40 min) Invited Talk 3: "Human Adaptation and Value Redefinition in the Era of Superintelligence" (tentative) TBD

  •  13:55 - 15:25 (90 min) Paper Presentations: Oral presentations of submitted papers (6 presentations × 15 min)

  •  15:25 - 15:40 (15 min) Break

  •  15:40 - 16:40 (60 min) Panel Discussion: "Challenges and Prospects of PSS: Perspectives from Three areas" (tentative)

  • (Panel featuring diverse experts covering the three areas)

  •  16:40 - 17:25 (45 min) General Discussion: "Future Directions of PSS Research and Building Collaborative Networks" (tentative)

  •  17:25 - 17:30 (5 min) Closing Remarks

Workshop Committee

Organizing Committee

  • Hiroshi Yamakawa:
    Chairperson of the non-profit organization "Whole Brain Architecture Initiative" and Principal Researcher at the Graduate School of Engineering, The University of Tokyo. He is also a former Editor-in-Chief of the Japanese Society for Artificial Intelligence and Director of the AI Alignment Network.

  • Yusuke Hayashi:
    Senior Researcher at Japan Digital Design, Inc. While engaged in data utilization support and AI research and development as a corporate researcher, he also works as an independent researcher in his private capacity.

  • Yoshinori Okamoto:
    Participates in PSS from the perspective of AI rights.

  • Masayuki Nagai:
    PhD student in Quantitative Biology at Cold Spring Harbor Laboratory. Interested in understanding genomic/proteomic models and extracting biological knowledge from them.

  • Ryota Takatsuki:
    He is currently a master’s student at the University of Tokyo and serves as a Research Fellow at AI Alignment Network. His primary research focuses on uncovering the inner mechanisms of intelligence and consciousness.

Advisory Committee

  • Satoshi Kurihara (Keio University)
    Professor at Keio University. He is the President of the Japanese Society for Artificial Intelligence (JSAI).

  • Kenji Doya (The Okinawa Institute of Science and Technology)
    Professor, Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University. He is currently the President of the Japanese Neural Network Society and was a Co-Editor-in-Chief of the journal Neural Networks.

Let's shape the future together!

We are currently in the initial stages of establishing our research field. We are seeking collaborators who can support us in conducting research, defining research topics, building research networks, and even securing funding. If you are interested, please contact Hiroshi Yamakawa through https://www.aialign.net/contact.