
How Peter Thiel’s relationship with Eliezer Yudkowsky initiated the AI revolution

Two members of that community, internet entrepreneurs Brian and Sabine Atkins, met on an Extropian mailing list in 1998 and married shortly afterward. Yudkowsky, then 21, moved to Atlanta and began drawing a salary of about $20,000 a year from the nonprofit to preach his message of benevolent superintelligence. “I thought anything superintelligent would automatically be good,” he said. But within eight months, he began to realize that he was wrong — catastrophically so. He came to believe that AI could be a disaster.

“I was taking someone else’s money, and I felt a deep sense of obligation to the people helping me,” Yudkowsky explained. “At some point, instead of thinking, ‘If a superintelligence can’t automatically determine what the right thing is and do it, then there is no real right or wrong, in which case, who cares?’ I was like, ‘Well, Brian Atkins would probably prefer not to be killed by a superintelligence.’” He thought Atkins might want a “backup plan,” but when he sat down and tried to work one out, he realized with horror that it was impossible.

The Atkinses were understanding, and the institute’s mission pivoted from making artificial intelligence to making friendly artificial intelligence. “The ‘solving the friendly AI problem’ part did mean we would need to bring AI researchers in-house, but we certainly didn’t have the funding for that,” Yudkowsky said. Instead, he devised a new intellectual framework he dubbed “rationalism.” (While on its face, rationalism is the belief that humankind has the power to use reason to come to correct answers, over time it came to describe a movement that, in the words of writer Ozy Brennan, includes “reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism.” Scott Alexander, Yudkowsky’s intellectual heir, jokes that the movement’s true distinguishing trait is the belief that “Eliezer Yudkowsky is the rightful caliph.”)

In his 2004 paper “Coherent Extrapolated Volition,” Yudkowsky argued that friendly AI should be developed based not on what we think we want AI to do now, but on what would actually be in our best interest. “The engineering goal is to ask what humankind ‘wants,’ or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.,” he wrote. In the paper, he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: if you program an AI to produce paper clips, and you aren’t careful, it could end up filling the solar system with paper clips.

In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank established in the 1980s to promote nanotechnology. (Many of its original members came from the L5 Society, a group dedicated to establishing a space colony hovering just beyond the Moon, which successfully lobbied to keep the United States from signing the 1979 UN Moon Agreement.) Yudkowsky, who didn’t know who Thiel was, walked up to him after dinner. “If your friends were a reliable signal of when an asset was going to fall, they would need to be doing some sort of cognition that beat the efficient market in order for their sentiment to reliably correlate with the stock going down,” Yudkowsky said, in effect reminding Thiel that he couldn’t both believe markets were efficient and treat his friends’ fears as a trading signal. Thiel was fascinated.
