Podcast #1: The AI Alignment Problem and Human Extinction

I’m excited to announce a new weekly podcast, OpenAI Changes Everything, devoted to all topics related to OpenAI. You can subscribe to the podcast on YouTube and Apple Podcasts, and (coming soon) on Spotify, Amazon Music, Audible, and TuneIn.

The first episode is devoted to the AI Alignment Problem, and it is available to watch or listen to right now:

The AI Alignment Problem and Human Extinction

Mako Yass

I was fortunate to have Mako Yass as my first podcast guest. Mako was the winner of the Future of Life Institute’s World Building Competition. He won the competition for “designing visions of a plausible, aspirational future that includes strong artificial intelligence”. You can view Mako’s entry here.

Given the existential risk that the AI Alignment Problem poses for humanity, you might find it surprising that the Future of Life Institute would promote a competition that explores embracing Artificial General Intelligence. After all, we wouldn’t want a competition that celebrates a happy human future in which humans coexist peacefully with dangerous technologies like mustard gas, engineered killer viruses, or nuclear warheads.

However, I encourage you to listen to the podcast. Mako is cautious in his optimism (in fact, as you can hear in the podcast, I struggled to get him to admit to being an optimist about the future of AI). And, to be fair, the Future of Life Institute famously called for a pause on all giant AI experiments, a call that was unfortunately ignored.

Over the course of our conversation, Mako convinced me that the competition has a worthwhile goal: We can’t hope to reach a positive future unless we can imagine it. While Artificial General Intelligence presents serious risks to humanity, it also has the potential to dramatically increase human happiness. The invention of fire, despite its dangers, has had a profoundly positive impact on human life, and the ability to cook food might have even enabled the development of the human brain.

Stories

Mako is both a philosopher and a fiction writer and he included stories about life in the ideal AI future as part of his competition entry. These stories help to make the future that Mako envisions more concrete. I’ve included links to his stories here:

You can read additional stories that Mako wrote here:

Not all of Mako’s stories promote embracing Artificial General Intelligence as quickly as possible. I recommend that you read his story Being Patient with Challenging Reading for an exploration of a more measured, deliberate approach to adopting new technology (the characters in his story happily wait 1,269 years to cautiously adopt human immortality).

Conclusion

I would love to get your feedback on the podcast in the comments below. This is the first of six episodes that I will be releasing over the next few weeks. Future episodes are devoted to topics such as AI Writing Detection, Responsible AI, and AI in Education. Don’t forget to subscribe!
