The Wide Boundary Impacts of AI with Daniel Schmachtenberger | TGS 132

Published 2024-07-17
(Conversation recorded on June 27th, 2024)

Show Summary:

Artificial intelligence has been advancing at a breakneck pace. Accompanying this is an almost frenzied optimism that AI will fix our most pressing global problems, particularly when it comes to the hype surrounding climate solutions.

In this episode, Daniel Schmachtenberger joins Nate to take a wide-boundary look at the true environmental risks embedded within the current promises of artificial intelligence. He demonstrates that the current trajectory of AI’s impact is headed towards ecological destruction, rather than restoration… an important narrative currently missing from the discourse surrounding AI at large.

What are the environmental implications of a tool with unbound computational capabilities aimed towards goals of relentless growth and extraction? How could artificial intelligence play into the themes of power and greed, intensifying inequalities and accelerating the fragmentation of society? What role could AI play under a different set of values and expectations for the future that are in service to the betterment of life?

We encourage you to explore the resources and research from The Civilization Research Institute on artificial intelligence compiled in this document: static1.squarespace.com/static/61d5bc2bb737636144d…

About Daniel Schmachtenberger:

Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.

The throughline of his interests has to do with ways of improving the health and development of individuals and society, with a virtuous relationship between the two as a goal.

Towards these ends, Daniel has a particular interest in catastrophic and existential risk, with focuses on civilization collapse and institutional decay. His work also includes an analysis of progress narratives, collective action problems, and social organization theories. These themes are all connected through close study of the relevant domains in philosophy and science.

Read the Development in Progress Paper:
consilienceproject.org/development-in-progress/

For Show Notes and More visit:
www.thegreatsimplification.com/episode/132-daniel-…

00:00:00 - Introduction
00:04:01 - AI's Potential Benefits for the Environment
00:09:53 - Public Perception of AI and the Environment
00:11:24 - AI in Defense and Surveillance
00:17:09 - Existential Risks and Motivations
00:18:51 - AI and the Environment: A Broader Impact
00:29:55 - Environmental Concern in the AI Space
00:36:46 - Can This Be Stopped?
00:46:32 - Energy and Material Demands of AI
00:57:14 - Tech Solutions vs. Systemic Changes
01:13:48 - AI and Energy Limits
01:21:12 - AI and the Superorganism
01:27:35 - Targeted AI Campaigns
01:33:27 - Precautionary Principle
01:40:21 - Closing Thoughts

To support ISEOF visit: www.thegreatsimplification.com/support

All Comments (21)
  • @bradbear
    Schmach talks are my fav! Our inability to acknowledge our dark side and overcome it, combined with the lack of imagination to predict unintended consequences, seems to be at the root of so many of society's big problems. It seems like we may be in a spiritual crisis but the materialists keep doubling down.
  • @citris1
    I grew up watching television. I watched it for years and never saw anyone of this level of intelligence appear on it. Thank God for the Internet.
  • @treefrog3349
    While listening to Daniel Schmachtenberger's assessment of our current AI environment I found myself recalling Mickey Mouse and the "Sorcerer's Apprentice" cartoon. A little bit of knowledge, coupled with a smattering of innocent naiveté, and a huge amount of hubris led to a shocking outcome for poor Mickey. I sometimes wonder if homo sapiens are a morality-tale-in-the-making that no one will ever hear?
  • @TennesseeJed
    Daniel always makes my thinking machine work hard.
  • @mattvm00
    The Daniel episodes are dharmic nourishment for my soul...to understand the world and act and love more deeply.
  • @anthonytroia1
    I've gathered Daniel is not a Luddite or a transhumanist. I'm also aware that his primary niche in the world is forecasting x-risk; therefore, by default he must converse primarily about shit he doesn't want to happen (risks). So I am left wondering: what does Daniel want? What does Schmachtenberger hold as a best-case scenario? What does the perfect constellation of responses portend? This is a vital question to the survival of life on Earth. Premodern civilizations were coordinated by an array of highly galvanizing narratives (Abrahamic religions, Hinduism, Buddhism, etc.) with clear objectives. Modernity had its unique galvanizing narratives (science, capitalism, consumerism, communism, etc.) with clear objectives. While we still look towards these narratives for guidance, their coordinating capacity has been largely outstripped. Our stories have atrophied. We need a new coordinating narrative NOW. In the wake of articulating what is most important, a new narrative may emerge. Hence, asking folks as intelligent and wise as Daniel "What do we want to want? What do we want to become?" is paramount. This is a global conversation whose time has come.
  • @xj8713
    Daniel's scenario where AGI doesn't face natural limits and it goes poorly is thoroughly examined in Charles Stross' 2001 novel Accelerando, which he made freely available on his site a few years ago. His other fiction is also pretty good.
  • @mistercohaagen
    "Jevons paradox" seems to be a side effect of Capitalism; where every new person needs to "earn a living" by contriving up with some kind of "bullshit job". As long as everyone's survival needs are convoluted and abstracted through this needlessly competitive market system, it will generate this constant overhead of waste that wouldn't otherwise be necessary if we were simply permitted to meet our survival requirements directly.
  • @therealdesidaru
    Google pulled 13% more power (when it promised to REDUCE its energy consumption) because of its AI development. That's also how much more our oil production went up.
  • @ajay4319
    My favourite duo with a new episode!
  • "Not a science fiction novel but a Steven King novel' 😂😂 Funniest thing you've said yet Nate.
  • The hubris of the AI-inspired futures explored here reminds me of William Ophuls's 'Immoderate Greatness: Why Civilizations Fail', whereby the very arrogance that we can be God-like becomes our undoing.
  • @RickDelmonico
    "We are liquid crystals playing quantum jazz." Mae-Wan Ho
  • @nburns7274
    Last year, "The Verge" posted an article saying Microsoft wants to build next-generation nuclear reactors to power its data centers and AI ambitions. Is that their idea of safe, energy-saving technology?
  • "Ubiquitous Technological Surveillance" is mentioned in the video starting at 38:33. The "Visible Light Spectrum" should be called the "Human-Visible Light Spectrum" (380 to 700 nanometers). If humans could see microwaves they could see through walls. Microwave cameras do exist and they can see through walls. They convert the Microwave wavelengths into the very narrow band of wavelengths that humans can see. Humans are tool-using and tool-making generalists. Any material, or thought, that can be turned into a "useful" tool will be. This is not limited to tools that do good things. It includes tools that do bad things; or tools that society or government or slices of government decided are necessary, whether deemed good or bad depending on viewpoint. The millimeter wave RBIT (Remote Biometric Identification and Tracking) system developed by the Argonne National Laboratory, and similar systems, might be examples of this.
  • A deep bow to you both, Nate and Daniel--thank you for this hard-to-watch podcast. I had that same feeling in the pit of my stomach at about the same time as Nate did, and I have been exposed to X-Risk and this space for a long time. Indeed, dreaming about waterfalls and children playing; why else would we care and hold this space and take the hero's journey. The sci-fi book that this conversation reminded me of was Frank Herbert's Dune series and the Butlerian Jihad, where after decades-long battles with cyborgs humanity outlawed thinking machines. And yes Nate, the people of the Shire might just have a thing or two to contribute (that brought a tear to my eye), so thank you for staying true and helping us envision the world we all want to live in.
  • @emceegreen8864
    Dizzying and sickening. AI in service to Moloch is the ultimate dystopia and the destruction of a pro-social position. An AI optimized for the Living Planet and restoration requires an economy that supports it.
  • @menelikiii5004
    Kinda disappointed with the length of the podcast, but in all honesty, you guys spoiled us last time, great podcast.