The pace of Artificial Intelligence development has reached a fever pitch. The makers of tools such as GPT-4, Gemini, and Claude all claim their systems will soon transform every facet of society, from healthcare and education to finance and entertainment. This rapid evolution raises ever more critical questions about AI’s trajectory: the technology’s benefits, yes, but also (mostly!) the potential risks it poses to us all.

Under these circumstances, listening to, understanding, and heeding experts’ perspectives becomes crucial. A recent survey titled “Thousands of AI Authors on the Future of AI” represents the most extensive effort yet to gauge the opinions of such specialists on AI’s potential. Conducted by Katja Grace and her team at AI Impacts, in collaboration with researchers from the University of Bonn and the University of Oxford, the study surveyed 2,778 researchers, seeking their predictions on AI progress and its social impacts. Everyone contacted had previously published peer-reviewed papers in top-tier AI venues.

Key takeaways from the Future of AI survey

In short, the survey highlights the sheer complexity and breadth of expectations and concerns among AI researchers regarding the technology’s future… and its societal impacts. Aggregate forecasts gave a 50% chance of machines outperforming humans on every possible task by 2047, thirteen years earlier than the previous edition of the survey estimated, while between 38% and 51% of respondents put at least a 10% probability on outcomes as bad as human extinction.

What do we do with that information?

The way forward is pretty clear: governments the world over need to increase funding for AI safety research and develop robust mechanisms for ensuring AI systems align with current and future human values and interests.

The UK government recently announced more than £50 million in funding for a range of artificial intelligence-related projects, including £30 million for the creation of a new responsible AI ecosystem. The idea is to build tools and programs that ensure responsible and trustworthy applications of AI capabilities.

Meanwhile, the Biden-Harris Administration announced in early 2024 the formation of the U.S. AI Safety Institute Consortium (AISIC), bringing together more than 200 AI stakeholders from industry, academia, and civil society. This consortium aims to support the development and deployment of safe and trustworthy AI by creating guidelines for red-teaming, capability evaluations, risk management, and other critical safety measures.

These initiatives are a start, but they remain stubbornly national in scope.

Governments can’t just look at their own backyards today. We also need INTERNATIONAL regulations to guide the ethical development and deployment of AI technologies, ensuring transparency and accountability. This means fostering interdisciplinary and international collaborations among AI researchers, ethicists, and policymakers. I’ll feel safer in the world when I see such agreements rolled out to strengthen and improve existing human rights frameworks.

Too soon to draw conclusions

It’s maybe a little early to fall prey to doomerism. While the survey provides valuable insights, it has limitations, including potential self-selection bias among participants and the (obvious!) difficulty of accurately forecasting technological advances. Further research should aim to broaden the diversity of perspectives and explore the implications of AI development in specific sectors.

In the end, and regardless of how accurate these predictions turn out to be, we need more than words. AI is a source of both unprecedented opportunities and significant challenges. Through open dialogue among researchers, policymakers, and the public, we must create rules that safeguard us from AI’s dangers and steer us towards a better future for all.

The world is very big, and we are very small. Good luck out there.