
Around the time J. Robert Oppenheimer learned that Hiroshima had been struck (alongside everyone else in the world), he began to have profound regrets about his role in the creation of the bomb. At one point, when meeting President Truman, Oppenheimer wept and expressed that regret. Truman called him a crybaby and said he never wanted to see him again. And Christopher Nolan is hoping that when Silicon Valley audiences of his film Oppenheimer (out July 21) see his interpretation of those events, they’ll see something of themselves there too.

After a screening of Oppenheimer at the Whitby Hotel yesterday, Christopher Nolan joined a panel of scientists and Kai Bird, one of the authors of American Prometheus, the book the film is based on, to talk about it. The audience was filled mostly with scientists, who chuckled at jokes about the egos of physicists in the film, but there were a few reporters, including myself, there too.

We listened to all-too-brief debates about the success of nuclear deterrence, and Dr. Thom Mason, the current director of Los Alamos, talked about how many current lab employees had cameos in the film because so much of it was shot nearby. But towards the end of the conversation, the moderator, Chuck Todd of Meet the Press, asked Nolan what he hoped Silicon Valley might learn from the film. “I think what I would want them to take away is the concept of accountability,” he told Todd.

“Applied to AI? That’s a terrifying possibility. Terrifying.”

He then clarified, “When you innovate through technology, you have to make sure there is accountability.” He was referring to the wide variety of technological innovations Silicon Valley has embraced while the companies behind them refuse to acknowledge the harm those innovations have repeatedly engendered. “The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”

He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding it, programming it, putting AI into use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”

While Nolan didn’t refer to any specific company, it isn’t hard to know what he’s talking about. Companies like Google, Meta, and even Netflix are heavily dependent on algorithms to acquire and maintain audiences, and often there are unforeseen and frequently heinous outcomes to that reliance. Probably the most notable and truly awful is Meta’s contribution to the genocide in Myanmar.

“At least it serves as a cautionary tale.”

While an apology tour is virtually guaranteed nowadays after a company’s algorithm does something terrible, the algorithms themselves remain. Threads even just launched with an exclusively algorithmic feed. Occasionally companies might give you a tool, as Facebook did, to turn it off, but these black-box algorithms remain, with very little discussion of all the potential bad outcomes and plenty of discussion of the good ones.

“When I talk to the leading researchers in the field of AI they literally refer to this right now as their Oppenheimer moment,” Nolan said. “They’re looking to his story to say what are the responsibilities for scientists developing new technologies that may have unintended consequences.”

“Do you think Silicon Valley is thinking that right now?” Todd asked him.

“They say that they do,” Nolan replied. “And that’s,” he chuckled, “that’s helpful. That at least it’s in the conversation. And I hope that thought process will continue. I’m not saying Oppenheimer’s story offers any easy answers to these questions. But at least it serves as a cautionary tale.”

  • rm_dash_r_star@lemm.ee · 1 year ago

    Not the first or last time a call has gone out for accountability. Unfortunately, progress marches ahead with little consideration. The people who should be accountable are not required to be accountable, nor do they have any motivation to be. Visionaries are typically driven by obsession, with little consideration for the human cost.

    I don’t think the development of nuclear weapons has an exact parallel to the development of AI or technology in general, but there are some analogies.

    What would have happened if all the world’s scientists had been able to halt the project by saying, “Wait, we’re not moving ahead until we can be sure of what the future looks like for a world with nuclear weapons”? Turns out the Axis wasn’t anywhere near a working nuclear bomb. The USSR had moles in the Manhattan Project and stole the design verbatim, so they would not have had nuclear bombs either. American forces would have landed on Japanese soil at the cost of a million soldiers. The war would have dragged on, but a win for America would still have happened. No nukes in the world yet, but eventually some country would have found a way to crack the science. We’d be in the same place now. The only difference is it would have happened later.

    So proponents of AI can claim there’s accountability, but someone will surely develop the technology regardless. Once it’s done by one, it’s done by all.

    • atomicorange@lemmy.world · 1 year ago

      Our technology for destroying each other has outpaced our ability to morally cope. We used to be able to depend on murder being a relatively face-to-face thing. For a soldier to kill you, they had to get up close with a rifle or sword, at least close enough to watch you die. They needed some personal motivation for that, and people get sick of it quickly.

      Now it’s abstracted to the push of a button, depersonalized so you can target a car, or a building, or a city center, not just a particular person. You don’t even have to watch.

      If we let AI start making those choices for us, we don’t even have to push the button. It all just happens in the background. No moral conflict needed. No appealing to each other’s humanity. No burden, no guilt. Just death.

      I like Roger Fisher’s proposal for adding humanity back into the nuclear weapon equation: implant the launch codes in a volunteer. Require the president to murder someone up close and personal before he can choose to murder thousands (or more) from a distance.

      And keep AI the FUCK away from war.

      • rm_dash_r_star@lemm.ee · 1 year ago

        And keep AI the FUCK away from war.

        Like SkyNet, they’ll come to the conclusion it all has to go.