Unveiling the Victory: AI Doomers' Defeat in the Battle of Perception

OpenAI's board has gone through four CEOs in the last five days. It began by accusing the first of them, Sam Altman, of dishonesty, then backed away from the charge and declined to explain what it had meant. 90% of the company's employees signed an open letter threatening to resign if the board did not act. Silicon Valley was astonished and enthralled in equal measure. By Wednesday Altman was back, two of the three external board members had been replaced, and everyone could finally get some sleep.

One could easily argue that this chaos showed that OpenAI's board, and its odd split non-profit/for-profit structure, were not up to the job. One could also argue that the outside board members lacked the experience or expertise to oversee a $90 billion company that has been laying the groundwork for a potentially transformative technology. Less polite things could be said too, and all of that may be true, but it wouldn't be the whole story.

As far as we can tell (and the fact that I have to hedge like that is itself part of the problem), the fundamental tension inside OpenAI has been referenced, and mocked, repeatedly over the past year. OpenAI was created to try to build artificial general intelligence, or "AGI": a machine with something like human intelligence. The premise was that this might be achievable in years rather than decades, and that it could be enormously beneficial but also profoundly dangerous, not just for mundane things like democracy or society but for humanity as a whole.

That risk is the rationale for the peculiar organizational structure: it was meant to keep the danger under control. Meanwhile, Altman has been building this thing as fast as possible while saying, loudly and often, that it is extremely dangerous and that governments should step in to regulate anyone trying to build it. So which is it?

Many in the tech industry see talking up these risks as a naked attempt at anti-competitive regulatory capture. This applies especially to the broader campaigns against open-source AI models, reflected in the executive order on AI that the White House issued last month. On this view, OpenAI is trying to persuade governments to outlaw its competition.

That may be true, but it seems to me that people who sincerely believe AGI is both close and dangerous face a real conflict in being the ones to build it. That looks like the best explanation of what happened at OpenAI: the people who wanted caution and a slower pace mounted a coup against those who wanted to speed up.

Part of the debate and difficulty around AGI comes from the fact that it is an abstract idea, a thought experiment, without a clear or well-understood theoretical model. The engineers on the Apollo Program knew how far away the moon was and how much thrust the rocket had; we don't know how far away AGI is, how close today's large language models are to it, or whether they can get there at all.

You can watch machine-learning experts debate this for weeks on end and conclude that they don't know either. ChatGPT might scale all the way to the Terminator in five years, or in five decades, or never. It might be like looking at a 1920s airplane and worrying that it could go into orbit. We just don't know.

As a result, most discussion of the potential risks of AI collapses into a hunt for analogies (it's "like" nuclear weapons, or a meteorite, or indeed the Apollo Program). People dust off half-remembered undergraduate philosophy (Pascal's wager, Plato's cave), or appeal to authority (Geoff Hinton is worried, but Yann LeCun is not!). In the end it comes down to your instinctive attitude to risk: if you cannot tell whether the danger is near or far, should you worry or not? There is no right answer.

Unfortunately for the "doomers," last week's events accelerated everything. One board member, who has since resigned, reportedly said that shutting OpenAI down entirely would be consistent with the organization's mission (better safe than sorry). But the hundreds of companies that were building on OpenAI's APIs are now scrambling for alternatives, both from its for-profit competitors and from the growing number of open-source projects that no one controls. AI development will now be less centralized, more distributed, and faster. Failed coups often accelerate the very thing they were meant to prevent.

Indeed, the premise that a few clever engineers and a powerful piece of software can remake the world is often criticized as yet another example of naive, unsophisticated tech utopianism that fails to understand power, complexity, and human systems. That failure to understand power is exactly what the doomers on the board just demonstrated.
 
