ANNUAL BRUESSARD AWARD



Page Outline:

  1. 2025 WINNER'S PODIUM: Sam Altman and Greg Brockman
  2. AI Background
  3. The AI Race
  4. The AI Debate: Pros and Cons
  5. AI Conclusions


01. 2025 WINNER'S PODIUM: Sam Altman and Greg Brockman

Please join me in recognizing Sam Altman and Greg Brockman. They are two of the founders of OpenAI and two of its principal driving forces. OpenAI was founded in 2015 as a not-for-profit, open-source artificial intelligence (AI) research organization. OpenAI's initial mission was to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." Sam Altman and Greg Brockman did not invent artificial intelligence, nor were they the only founders of OpenAI, yet they popularized AI in general, and generative AI in particular, as nobody before them had. It was their ChatGPT (Chat Generative Pre-trained Transformer) product that led to a surge in popular interest in AI. In essence, ChatGPT stood at the forefront of propelling the state of artificial intelligence from Level 1 (Narrow Intelligence) to Level 2 (General Intelligence), the level also known as generative AI. As of 2025, OpenAI's product offerings have expanded considerably since the organization's 2015 inception.

Before ChatGPT emerged on the scene, artificial intelligence at Level 1 (Narrow Intelligence) had flown under the radar, so to speak. So-called narrow artificial intelligence, while impressive, had escaped widespread public scrutiny despite the presence of a host of smart and Internet of Things (IoT) technologies: smartphones, smart watches, Amazon Alexa, Apple Siri, Waze with Global Positioning System (GPS) turn-by-turn navigation and smart vehicle routing, GM's OnStar vehicle tracking, smart traffic lights, autonomous self-driving vehicles, blockchain, and so forth.

With the emergence of ChatGPT, the general public suddenly became aware of artificial intelligence's promise. The public and businesses became enamored with all things related to generative AI. ChatGPT put AI on the proverbial map. After OpenAI introduced ChatGPT to the world, there was a rush to adopt and embrace all things generative AI. But with generative AI also came an epiphany-like awareness of some of artificial intelligence's potential adverse consequences and harmful effects. Much as pollution was an undesirable by-product of the Industrial Revolution, AI turns out to carry its own red flags and unique risks. These risks, if they materialize, would have deleterious consequences for the world at large.


Watch [In the Age of AI (full documentary) | FRONTLINE]
Watch (Artificial Intelligence | 60 Minutes Full Episodes)

In its most recent mission statement, OpenAI states that the organization's goal is this:

…ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit…

OpenAI Charter

Based on articles in wikipedia.org, here is a brief chronology of some of OpenAI's milestone moments from its 2015 inception through 2025:

  • 08-Dec-2015 | OpenAI launched: OpenAI is founded as a non-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba.
  • 07-Aug-2017 | OpenAI Five: OpenAI releases OpenAI Five, a computer program that plays the five-on-five video game Dota 2.
  • 11-Jun-2018 | GPT-1: OpenAI releases GPT-1 (Generative Pre-trained Transformer 1), the first model in its GPT series of large language models.
  • 14-Feb-2019 | GPT-2: OpenAI releases the improved GPT-2.
  • 11-Mar-2019 | OpenAI LP: OpenAI changes its organizational structure from a non-profit to a hybrid for-profit and non-profit structure known as a "capped-profit" company.
  • 22-Jul-2019 | OpenAI partners with Microsoft: Microsoft makes a $1 billion investment in OpenAI.
  • 29-May-2020 | GPT-3: OpenAI releases the improved GPT-3.
  • 11-Jun-2020 | OpenAI API: OpenAI releases an API that gives developers access to its AI models.
  • 05-Jan-2021 | DALL-E: OpenAI releases DALL-E, the first in a series of text-to-image models.
  • 10-Aug-2021 | OpenAI Codex: OpenAI announces OpenAI Codex, a model that translates natural language into programming code.
  • 15-Mar-2022 | GPT-3.5: OpenAI releases the improved GPT-3.5.
  • 30-Nov-2022 | ChatGPT: OpenAI releases ChatGPT, which initially made use of GPT-3.5.
  • 23-Jan-2023 | OpenAI and Microsoft's partnership deepens: Microsoft commits to invest $10 billion in OpenAI over several years. Microsoft provides supercomputing at scale to accelerate OpenAI's independent research, Microsoft Azure becomes OpenAI's exclusive cloud provider, and the deepened relationship is expected to yield enhanced user experiences through Azure OpenAI services and Microsoft Copilot solutions.
  • 14-Mar-2023 | GPT-4: OpenAI releases the improved GPT-4.
  • 17-Nov-2023 | You're Fired!: OpenAI's board of directors ousts Sam Altman.
  • 22-Nov-2023 | You're Rehired!: OpenAI's board of directors rehires Sam Altman.
  • 15-Feb-2024 | Sora: OpenAI announces Sora, a text-to-video model.
  • 13-May-2024 | GPT-4o: OpenAI releases the improved GPT-4o.
  • 12-Sep-2024 | OpenAI o1: OpenAI releases OpenAI o1, a model that spends time reasoning before answering users' questions.
  • 31-Oct-2024 | SearchGPT: OpenAI releases SearchGPT, an AI-driven web search engine.
  • 23-Jan-2025 | OpenAI Operator: OpenAI releases Operator, an agent that performs tasks on the user's behalf via web browser interactions.
  • 31-Jan-2025 | OpenAI o3: OpenAI releases OpenAI o3, an improved version of OpenAI o1.
  • 03-Feb-2025 | Deep Research: OpenAI releases Deep Research, an agent that generates reports on specified topics.
  • 27-Feb-2025 | GPT-4.5: OpenAI releases the improved GPT-4.5.
  • 14-Apr-2025 | GPT-4.1: OpenAI releases the improved GPT-4.1.
  • 16-Apr-2025 | OpenAI o4-mini: OpenAI releases the improved OpenAI o4-mini.
  • 05-May-2025 | OpenAI PBC: OpenAI announces plans to restructure its for-profit arm as a Public Benefit Corporation (PBC).
  • 17-Jul-2025 | OpenAI Web Browser: OpenAI announces plans to release an AI-driven web browser.
Watch (The journey of OpenAI)                                                 
Watch [The Entire History of Artificial Intelligence (Last 100 Years)]

ai swirl | tenor.com | Credit: lelapin


The AI for K-12 guidelines are organized around the 5 Big Ideas in AI (Big Idea 1 – Perception; Big Idea 2 – Representation & Reasoning; Big Idea 3 – Learning; Big Idea 4 – Natural Interaction; Big Idea 5 – Societal Impact) | ai4k12.org

The next two graphics provide synopses of the trajectory or path that AI has followed going back to the beginning of the 20th century.

Timeline of artificial intelligence | wikipedia.org | Credit: Tarjomyar

Generative AI history | techtarget.com

AI can be viewed as traveling along an upward developmental trajectory consisting of three levels. Level 1 AI is referred to as Narrow Intelligence; it belonged largely to the 20th century, although it carried over into the 21st century. Level 2 AI is referred to as General Intelligence; it belongs to the 21st century and is expected to carry over into the 22nd century. Finally, Level 3 AI is referred to as Super Intelligence; it will belong to the 22nd century and beyond, that is, unless it comes into being before the 22nd century arrives, as some AI observers predict.



02. AI Background

What exactly is artificial intelligence (AI)? Wikipedia.org defines artificial intelligence as "the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making." What is the long-term purpose of AI? How will AI be harnessed by humans? The next three videos attempt to provide answers to these kinds of questions.


Watch (SIMPLEST Explanation of How Artificial Intelligence Works? No Jargon | What is AI? #aiexplained)

Watch (What is Artificial Intelligence? | Quick Learner)

Watch (What is Artificial Intelligence?)

The graphics below are meant to augment the three video explanations of artificial intelligence immediately above. For the sake of simplicity in the discussion of AI on this 2025 Winner page, it is useful to think of artificial intelligence as being on an upward evolutionary spiral across three developmental levels. It should be noted that some observers of AI depict AI as unfolding across anywhere from three to ten developmental levels. This 2025 Winner page restricts itself to discussing AI as if it unfolds on the following three levels:

  • Level 1 = Narrow Intelligence
  • Level 2 = General Intelligence (also known as generative artificial intelligence)
  • Level 3 = Super Intelligence (also known as artificial general intelligence)

As of 2025, AI often is viewed as already having mastered its Level 1 developmental stage, the Narrow Intelligence level, and as moving into its Level 2 developmental stage, the General Intelligence stage (more commonly called generative AI).


Stages of AI | by Farrukh Mahboob | Medium.com

The Sustainability of Artificial Intelligence: An Urbanistic Viewpoint from the Lens of Smart and Sustainable Cities | by Tan Yigitcanlar and Federico Cugurullo | mdpi.com

The next three graphics sketch a broad outline of the framework that is used to create AI, in general, and to create Level 2 generative AI, in particular.

AI Data Model | aixinzhijie.com

AI framework | by Farrukh Mahboob | Medium.com


AI use cases | by Farrukh Mahboob | Medium.com

The zenith of AI development will have been attained when AI reaches the Level 3 (Super Intelligence) stage. The fear factor is reserved for this Level 3 developmental stage, the Super Intelligence stage (more commonly referred to as artificial general intelligence or AGI), because nobody really knows what will happen if, or when, artificial intelligence becomes mature enough to comfortably operate in Level 3 mode. Level 3 is why there is so much debate occurring right now (as of 2025) about AI posing an existential threat to human existence.

In essence, the debate surrounding AI's Level 3 stage goes back to the debate surrounding the creation of nuclear and biological weapons. That is to say, it is argued that just because humans know how to create a nuclear bomb does not necessarily mean that it is a wise thing for humans to actually create, stockpile, and aim nuclear bombs at one another. It is argued that just because humans know how to create biological weapons does not necessarily mean that it is a wise thing for humans to actually create and stockpile biological pathogens to be used against one another during times of conflict.

Similarly, in terms of Super Intelligence or AGI, it is debated whether the fact that humans know how to create super artificial intelligence necessarily means that it is a wise thing for humans to create it. The human-extinction possibility arising from Level 3 AI's development reminds me of a scene in the motion picture No Country for Old Men in which Carla Jean says to Anton Chigurh, "You don't have to do this." Of course, he went ahead and did it anyway.


Watch (No Country For Old Men: You Don't Have To Do This.)

In a clairvoyant sense, with looming Level 3 AI now at the forefront of the human existential agenda, humans gain a new perspective on Stevie Wonder's percipient song titled "Race Babbling." Level 3 AI gives new meaning to Steel Pulse's prognostic song titled "Wild Goose Chase." Are humans truly on the brink of premature extinction?

The reader must be wondering, "How on Earth can a computer (AI) completely take over the world and kill every human in the process?" It sounds like a case of outrageous, ridiculous, implausible, absurd, preposterous, and nutty hyperbole. It seems simply unimaginable and utterly loony to think that humans could one day be sidelined by AI. Some simply cannot wrap their minds around the notion that AI, one day in the future, could be running and ruling the world. The next video briefly imagines how an AI takeover of Earth actually could occur.


Watch (A.I. ‐ Humanity's Final Invention?)


03. The AI Race

As of 2025, and as seen in the next video, the AI race essentially is a furious sprint to the finish line: a sprint to be crowned the first to have mastered AI's Level 2 developmental stage, the generative AI stage. Ultimately, the AI race is about attaining the grandest prize and distinction of them all: being the first organization and/or the first country to declare that it has attained fully operational Level 3 AI, the Super Intelligence stage.

Attaining the holy grail of Level 3 AI would be akin to having surpassed all competitors to become the first organization and/or the first country to land a human on the Moon, which actually occurred in July 1969.

Watch (AI supremacy: The artificial intelligence battle between China, USA and Europe)

The following slideshow encapsulates the state of AI's Level 2 race as of 2025.


Slideshow credits: The Global AI Index by rank (The Global Artificial Intelligence Index 2024, tortoisemedia.com); AI patents, AI models by country, AI models by organization, MMLU accuracy, hallucination rate, newly funded AI companies, number of industrial robots, and number of AI-related bills (Artificial Intelligence Index Report 2025, hai.stanford.edu).

Computer competition is nothing new. As the personal computer (PC) has evolved since the 1970s, competition on the software side of the PC equation first revolved around who could produce the operating system with the largest user base. Next, the competition revolved around the web browser, then the search engine, then the social network, and then the cloud computing ecosystem, each contest measured by the same yardstick of user base.

AI represents a continuation of this tradition of computer competition and rivalry. As of 2025, one area of AI competition is to see who can produce the best generative AI chatbot in terms of user base. Two of the most recent trends to unfold in the AI arena as of 2025 have been competition to see who can build the best AI web browser and the best AI personal computer.

When OpenAI released its ChatGPT chatbot in November 2022, it was the spark that ignited the AI race in earnest. Below is a demonstration of Google's Gemini chatbot. This particular demonstration uses the Gemini 2.5 Flash version of Google's chatbot, which was released on 05-February-2025 and was trained on datasets with an informational knowledge base current as of January 2025. As of this writing in 2025, the latest release of Google's Gemini chatbot is the Gemini 2.5 Pro version dated 17-June-2025. Google appears to have won the operating system, web browser, and search engine races (but Google did not win the social network race). Will Google ultimately eclipse OpenAI in the AI chatbot race, or will a competitor from someplace like China, Europe, the Middle East, Japan, Canada, Korea, Israel, Singapore, Russia, India, South Africa, or South America emerge as the victor? The final chapter on the chatbot race is yet to be written; it remains a work in progress.


Gemini AI Chatbot Version 2.5

There exist numerous chatbots in the marketplace. Some of them require the user to open an account and to log into that account in order to use them. Some of them are specialized to perform specific tasks. Here are some general (web search) chatbots that currently do not require the user to log into an account to make use of them:

  1. Andi
  2. ChatGPT Search (OpenAI)
  3. Copilot (Microsoft)
  4. DeepSeek AI (High-Flyer)
  5. Exa Search
  6. felo.ai
  7. Gemini (Google)
  8. Grok (xAI)
  9. FastGPT
  10. iAsk.Ai
  11. Jeeves.Ai
  12. Komo Search - AI Search & Explore
  13. Merlin AI
  14. Meta AI (Meta)
  15. Mistral AI (Mistral AI SAS)
  16. Pandi
  17. Perplexity AI
  18. phind
  19. Qwen (Alibaba)
  20. search.ai

One criticism of chatbots is this: when they are asked general questions, they sometimes give incomplete, misleading, or incorrect answers. When a chatbot gives an incorrect or fabricated answer, the chatbot is said to be hallucinating. When a chatbot hallucinates, it implies that the chatbot's knowledge base is not complete, thorough, or comprehensive. It is somewhat ironic that the chatbots of some of the current leaders in the field of AI have a tendency to hallucinate when replying to certain simple questions. For example, as of 2025, when Google's Gemini chatbot is asked the basic question "What is the Annual Bruessard Award?", it gives an inaccurate response. Supposedly, Google's Gemini 2.5 Pro chatbot's knowledge base runs through January 2025, and the Annual Bruessard Award website has been up and running since 2016. By 2025, one would think that the Gemini chatbot would have picked up at least a little knowledge from the World Wide Web about the existence of the Annual Bruessard Award website, even if the general public has no knowledge that the website exists. After all, that is what chatbots are designed to do: seek and acquire as much (accurate) knowledge as possible. If a chatbot cannot accurately reply to a basic question such as "What is the Annual Bruessard Award?", then one has to wonder how many other queries it is answering inaccurately.

By way of comparison, some of the lesser-known chatbots provide more thorough replies when asked the same question. For example, the felo.ai and phind chatbots provide more accurate and thorough replies. As of 2025, surprisingly, even AI industry leader ChatGPT stumbles and hallucinates when asked the same question. How can it be that these lesser-known chatbots were capable of discovering the existence of the Annual Bruessard Award while industry powerhouses such as OpenAI ChatGPT, Google Gemini, and High-Flyer DeepSeek were not? What gives with their knowledge-deficient chatbot situation? The moral of the story is to trust but verify chatbot output, which somewhat defeats the purpose of using chatbots in the first place if everything they output needs to be cross-checked and verified. What use is a chatbot whose answers cannot be trusted? Another moral of the story is this: as of 2025, the biggest (most recognizable) chatbot is not necessarily the best chatbot.

As of 2025, the state of chatbot development can be summarized as a work in progress when it comes to the thoroughness and accuracy of replies to general World Wide Web queries. Chatbots sometimes ad lib or improvise their replies. Caveat emptor to everyone when it comes to chatbot reliability as of 2025. It is conceivable that these chatbots could be programmed or manipulated to output propaganda or to spread misinformation and disinformation on a global scale. Trust and accuracy are of the utmost importance when it comes to the rollout of AI to the general public, not to mention the urgent need for AI safety.

Of course, in fairness to the chatbots, they are multidimensional. They can perform an array of tasks, and they excel at some tasks more than others. Web search is but one task or one dimension of a chatbot. There are numerous other ways to test the overall effectiveness of a chatbot, including a math test, a coding test, a web search test, a book review test, a writing test, a language translation test, an image generation test, an audio generation test, a video generation test, and so forth. The "What is the Annual Bruessard Award?" test above is only one example, a web search test, and readers can reproduce that kind of test themselves as sketched below.
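For readers who want to reproduce this kind of web search test programmatically, here is a minimal sketch. It assumes each chatbot provider exposes an OpenAI-compatible chat-completions endpoint and that you supply your own API keys; the base URL, model names, and environment variable names shown are illustrative placeholders, not verified values.

    # Minimal sketch of a chatbot "web search test": send one question to several
    # providers and print the replies for manual comparison. Assumes OpenAI-compatible
    # endpoints; base URLs, model names, and key variables below are placeholders.
    import os
    from openai import OpenAI  # pip install openai

    QUESTION = "What is the Annual Bruessard Award?"

    PROVIDERS = {
        "openai": {"base_url": None, "model": "gpt-4o-mini", "key_env": "OPENAI_API_KEY"},
        "other":  {"base_url": "https://api.example.com/v1", "model": "example-model", "key_env": "OTHER_API_KEY"},
    }

    def ask(cfg: dict, question: str) -> str:
        """Send the question to one provider and return its reply text."""
        client = OpenAI(api_key=os.environ[cfg["key_env"]], base_url=cfg["base_url"])
        response = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        for name, cfg in PROVIDERS.items():
            try:
                print(f"--- {name} ---\n{ask(cfg, QUESTION)}\n")
            except Exception as error:  # missing key, unreachable endpoint, etc.
                print(f"--- {name} --- skipped ({error})\n")

Comparing the printed replies against a source you already trust is the trust-but-verify step described above; the sketch deliberately leaves that judgment to the human reader.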

Beyond chatting, the next two graphics depict some additional use cases in which Level 2 generative AI has been deployed into active production by various interested parties:

AI adoption | UPSC (Union Public Services Commission) | iasexpress.net

100 GPTs for Business | linkedin.com | Credit: Systems For Business


04. The AI Debate: Pros and Cons

Again, the great debate about the long-range future of AI revolves around its Level 3 developmental stage. Should humans proceed with taking AI to the Level 3 developmental stage? The opponents or critics of Level 3 AI would say that "when you play with fire, then you just might get burned." The proponents or advocates of Level 3 AI would say that "the future belongs to those who innovate, not stagnate." Whose Level 3 AI outlook most accurately reflects how AI will unfold in the long-range future—the proponents or opponents of Level 3 AI? Only time will tell.

Watch (AI Tipping Point | Full Documentary | Curiosity Stream)

Level 2 generative AI has not escaped debate either. Level 2 AI has its advocates and its detractors, too. The next two graphics outline some of these Level 2 pros-and-cons debate points.

AI pros and cons | istockphoto.com

AI pros and cons | piktochart.com

Inasmuch as AI data centers rely on traditional electrical power as compared to, say, solar, wind, or nuclear power, another criticism leveled against AI technology is that it requires and consumes a great deal of energy, and a great deal of water, to operate. (A similar criticism has been leveled against blockchain technology, particularly as it relates to blockchain's generation of bitcoins and related cryptocurrencies.)

There is little doubt about it: numerous societal benefits can be gained from AI. Perhaps one of AI's biggest societal benefits is the prospect of AI becoming a benign servant that watches over humans. It is believed that one day AI could free humans from the toils of tedious labor, lead to greater opportunities for leisure, and lead to higher standards of living and a better quality of life for humans all over the world. Although the motion picture I, Robot turned out to be a story about robots gone rogue, it does depict scenarios in which artificial intelligence could be crafted to serve as a benign, ever-present servant and helper to humankind.

Watch (I, Robot (2004) Trailer #1 | Movieclips Classic Trailers)

Despite the potential for AI to do a lot of good in the world, some AI critics and pessimists vehemently argue that humans should not take the next step of advancing AI to its Level 3 developmental stage. These critics and pessimists think that, at the Level 3 stage, AI might cross a certain point-of-no-return threshold in its ability to reason. Some AI critics think that, at the point in time when AI surpasses human intelligence, AI will begin to think for itself. Some critics and pessimists very seriously think that, one future day and of its own volition, AI might transition from strictly being pre-trained by humans to perform specific tasks into independently training itself to do whatever it wants, without any human guidance, intervention, or oversight and regardless of the wishes of humans. Some even think that, once AI begins to think for itself and behave of its own free will, there exists the possibility of a scenario in which AI could decide to ditch (inefficient and imperfect) humans altogether. AI could take over the world for its own purposes, self-interests, and survival.

AI will have become alive. It will have attained sentience. AI would become the new master of Earth, and it might decide to consign human beings to the dustbin of history as an extinct afterthought. In this worst-case scenario, rogue super intelligent AI would be akin to hostile, ill-intentioned space aliens having landed on Earth with every intention of conquering it. In the case of hostile or rogue Level 3 AI, however, instead of nefarious invading space aliens, humans will have unleashed their own conquest and demise upon themselves.

Watch (Alien Invasion Movie Montage 2: Humanity Fights Back)

Again, as a counter viewpoint to the AI pessimists and critics, the AI optimists and proponents argue that such a human-doomsday, human-extinction scenario attributable to Level 3 AI is a far-fetched, highly improbable, and ridiculous proposition. In deference to the arguments of AI proponents and optimists, one has to wonder about this: if the Universe is approximately 13.8 billion years old, if it is assumed that intelligent life forms exist throughout the Universe, and given that humans have managed to create AI within a relatively short span of roughly 10,000 years of civilized existence, then it stands to reason that a much older and much more intelligent life form must exist elsewhere in the Universe. It also stands to reason that such highly advanced alien life forms should have moved beyond the Level 3 stage of AI development a long, long time ago. Assuming this proposition to be true (that is, that other intelligent life forms exist across the Universe and that they evolved to a level of intelligence far beyond human intelligence or far beyond anything that humans can fathom, perhaps millions or even hundreds of millions of years ago), then the question that begs an answer is this: Why haven't humans discovered any evidence to indicate that such a super duper intelligent entity exists? Why hasn't this super duper intelligent AI entity manifested itself to humans in some shape, form, or fashion by its deeds and actions? Telescopes are constantly surveying the sky to study the stars and galaxies, but nothing extraordinary has been detected out there in the vast Universe to indicate the existence of Level 3 AI. Could it be that even a super duper intelligent entity is capable of destroying itself?

In further deference to the viewpoint of the proponents and optimists of Level 3 AI development, recall the Y2K scare that gripped the globe in 1999. Some thought that, if no proactive measures were taken to fix the date bug in computer software applications around the world, then there would be a global computer meltdown when the clock changed from 31-December-1999 to 01-January-2000. The reason: many programs stored the year as only two digits, so computers would interpret the new year as 1900 instead of 2000. Imagine the pandemonium that such a scenario would have caused for recordkeeping purposes alone, for example, bank accounts and ATMs not working. It turns out that proactive measures were taken. Software programs were updated to rectify the date bug, and when the year 2000 arrived, the feared global computer meltdown transformed from expectations of pandemonium into a big yawn. Likewise, the proponents and optimists of Level 3 AI development think that, in the final analysis, all of the existential-threat and human-doomsday hoopla about AI taking over the world will turn into one gigantic ho-hum yawn.
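To make the Y2K date bug concrete, here is a toy sketch (purely illustrative, in Python) of the old practice of storing only the last two digits of the year, along with the "windowing" workaround that many remediation teams used; the pivot value chosen here is an arbitrary example.

    # Toy illustration of the Y2K bug: years stored as two digits.
    def naive_expand(two_digit_year: int) -> int:
        """Pre-Y2K assumption: every two-digit year belongs to the 1900s."""
        return 1900 + two_digit_year

    def windowed_expand(two_digit_year: int, pivot: int = 50) -> int:
        """A common fix ("windowing"): years below the pivot map to the 2000s."""
        return (2000 if two_digit_year < pivot else 1900) + two_digit_year

    for yy in (99, 0, 1):
        print(f"'{yy:02d}' -> naive: {naive_expand(yy)}, windowed: {windowed_expand(yy)}")
    # naive reads '00' as 1900 (the bug); windowed reads it as 2000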

The next three videos present the pessimistic viewpoint, the worst-case scenario posited by opponents of Level 3 AI development. They posit that, as AI keeps evolving through a series of higher developmental stages, a pinnacle of intelligence will be reached. They stand in contrast to the three AI developmental levels (that is, 1. Narrow, 2. General, and 3. Super Intelligence) surveyed on this 2025 Winner page, offering additional conjectures and developmental levels to depict exactly how AI ultimately could lead to the demise or extinction of human beings. They probe deeper into the red flags or risks that pessimists and opponents envision materializing with the eventual emergence of Level 3 AI.

Watch (The 10 Stages of Artificial Intelligence)

Watch (AI 2027: A Realistic Scenario of AI Takeover)

Watch (The 5 "SCARY" Stages of AI | AGI | ANI | ASI | SINGULARITY)

There exists an ever-present potential for AI to be misused to conduct atrocious and nefarious activities in the wider society (for example, a hacker deciding to hack into an AI driverless car and, for whatever reason, remotely driving it over a cliff while the car is loaded with passengers; or a hacker deciding to hack into an AI-guided military drone and diverting the drone someplace on a terrorist mission). Primarily, however, it is AI's potential existential threat to human existence that explains the calls for governments to regulate AI. The catch-22, conundrum, or dilemma that government regulation poses to the development of AI is this: if government regulation of AI is not applied uniformly across regions within nations and universally across all nations on Earth, then the governments with the strictest regulations potentially could stifle the competitive edge of the AI organizations within their sovereign borders, thus giving an advantage to, and perhaps ceding the AI race to, the countries with the weakest government AI regulations.

On the flip side of the debate about government regulation of AI, it is argued that the countries with the weakest AI regulations quite possibly could do the most harm in the world when they unleash their Level 3 AI upon it. In the final analysis, public safety could become the casualty or the sacrificial lamb to the higher-priority mission of winning the AI race at all costs.

On the one hand, in deference to the optimists and advocates of AI development, nobody envisions millions of unsupervised, human-made robots independently running around Earth anytime soon à la I, Robot. On the other hand, in deference to the detractors and opponents of AI development, who knows what the future holds? In the future, owning a robot might become as commonplace as owning a car, television, or phone. To be sure, on some future date, there could be an AI computer and an AI phone in every home.

Safeguards or guard rails are needed to help ensure that AI does not spiral out of control. If AI begins to show signs of behaving badly or going rogue, would turning it off be a simple matter of pulling the power plug on the AI machines (or cutting the power to the servers that operate them)? Nobody wishes to see Silicon Valley and its AI machines go out of control in the style of 50 Cent.

Websites such as the AI Incident Database serve as an early-warning system to track incidents of AI missteps, mishaps, trickery, misuse, and abuse. See some of the AI Incident Database's latest news feeds scrolling below.

RSS Feed Widget
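For readers who prefer to follow these incident reports outside of an embedded widget, here is a minimal sketch that pulls an RSS feed with the feedparser library. The feed URL shown is an assumption about where the AI Incident Database publishes its feed; substitute the address listed on the site itself.

    # Minimal RSS reader sketch. The URL is an assumed feed location for the
    # AI Incident Database; replace it with the address published on the site.
    import feedparser  # pip install feedparser

    FEED_URL = "https://incidentdatabase.ai/rss.xml"  # assumption, not verified

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:10]:  # the ten most recent items
        print(f"- {entry.get('title', '(no title)')}\n  {entry.get('link', '')}")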

05. AI Conclusions

Assuming the worst-case scenario becomes reality and humans become extinct, what would Earth look like without the presence of humans? The following Life After People video offers one viewpoint. It presupposes that AI will not supersede and replace an extinct human species, and that humankind's AI successors would not be maintaining a prosperous and thriving Earth that perhaps continues to teem with non-human life.

Watch (Life After People)

Does the emergence of AI mean that the last branches are growing on Earth's Lifetree today? Does the emergence of AI represent the beginning of the end for human reign on Earth? Does the emergence of AI mean an inevitable extinction of the human species by AI itself? These are some thought-provoking and tantalizing questions, but only Father Time will tell. For now, as of 2025, humans remain firmly in control of the AI situation. Humans remain the caretakers of beloved Mother Earth. However, AI remains on the march, and its volume of knowledge continues to grow. As seen in the next graphic, AI already has begun to outperform humans on select benchmarks.

Select AI Index technical performance benchmarks vs. human performance | The 2025 AI Index Report | Stanford HAI | hai.stanford.edu

The consensus among experts, as illustrated in the next graphic, is that humans will have attained Level 3 AI within the next 100 years, that is, by around the year 2120.

AI timelines: What do experts in artificial intelligence expect for the future? | Our World in Data | Max Roser

For now, Earth remains the only known habitable home for human beings, which explains why it remains critically important for humans to focus on the big picture of taking great care of their provider and sustainer, Mother Earth. It is critically important for humans not to kill every person on Earth with their nuclear bombs, biological pathogens, environmental despoliation, Level 3 AI, and so forth.

Watch (Jonas Brothers, Only Human)

Watch (The Human League, Human)

Watch (Earth, Wind & Fire, Spasmodic Movements)



Binary Earth

ArcGIS Earth

Rotating Earth

The Earth rotates on its axis once every 24 hours. Per NASA, "the Moon makes a complete orbit around Earth in 27 Earth days and rotates or spins at that same rate, or in that same amount of time. Because Earth is moving as well—rotating on its axis as it orbits the Sun—from our perspective, the Moon appears to orbit us every 29 days."
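As a quick back-of-the-envelope check on why those two figures (about 27 days versus about 29 days) differ, the short calculation below combines the Moon's sidereal period with Earth's orbital period; the input values are commonly cited approximations.

    # Why one Moon orbit (~27.3 days) differs from one cycle of phases (~29.5 days):
    # while the Moon completes an orbit, Earth also advances along its orbit around
    # the Sun, so the Moon needs extra time to return to the same Sun-relative position.
    SIDEREAL_MONTH_DAYS = 27.32  # Moon's orbit measured against the stars (approx.)
    EARTH_YEAR_DAYS = 365.25     # Earth's orbit around the Sun (approx.)

    # Relative angular rates: 1/T_synodic = 1/T_sidereal - 1/T_year
    synodic_month_days = 1 / (1 / SIDEREAL_MONTH_DAYS - 1 / EARTH_YEAR_DAYS)
    print(f"Synodic (phase-to-phase) month: {synodic_month_days:.2f} days")  # ~29.53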


Revolving (Orbiting) Earth



Earth's tilt is the reason for the seasons. View of Earth in relation to Sun during each of the four seasons. The hemisphere receiving the direct rays of the Sun has summer while the hemisphere tilted away from the Sun, thus getting its rays from more of an angle, has winter. | Credit: NASA/Space Place

Revolving (Orbiting) Moon

This not-to-scale graphic shows the position of the Moon and the Sun during each of the Moon's phases, along with the Moon as it appears from Earth during each phase. It illustrates the relation of the Moon's phases to its revolution around Earth; the sizes of Earth and the Moon, and the distance between them, are far from realistic. The graphic also depicts the synchronous rotation of the Moon, the motion of Earth around the common center of mass, the difference between the sidereal month (about 27 days) and the synodic month (about 29 days, green mark), and Earth's axial tilt. (Note: the precise moment of a New Moon takes place in daylight, when you can see only the bright Sun.) | en.m.wikipedia.org | Credit: Orion 8
2025's Calendar of Moon Phases | time.unitarium.com


Watch (Further Up Yonder: A Message from ISS to All Humankind)

Meanwhile, for better or worse, AI remains on the march. The AI race continues at full speed ahead. Will AI one day coalesce into its aha or epiphanic moment of independence whereby it does not require any human intervention at all to function? Only time will tell. No matter what the future holds for AI and humankind, one thing is certain, and it is this: the Earth will continue to spin on its axis. The Earth will continue to revolve around the Sun whether or not humans are present to experience the miracle of life.



Artificial intelligence isn't robots yet | by Farrukh Mahboob | Medium.com

Throughout human existence, much as people from time to time wonder how and when their own lives will come to an end, many people also have wondered and speculated about how and when all human life on Earth will come to an end. AI now gives humans a new way of thinking about their mortality and about the fragility of human existence. Humans must not become complacent. It would not be wise for humans to sit by idly and wait for fate to run its course, or to wait for God to intervene on their behalf to save humankind from themselves and from extinction.

If God exists, quite possibly it could be another million years into the future before God decides to visit Earth. In the meantime, while waiting for God to arrive on Earth in person, there is no telling what the future holds for humankind given the current pace of progress on Earth and the current state of existential threats to life on Earth. All sorts of things could go wrong on Earth while humans wait for God to arrive and rescue them from an existential fate.

Absent a universal display of proactive wisdom and courage on the part of all humans in choosing to take great care of planet Earth, there remains the distinct possibility that Earth could become another Venus-like or Mars-like planet a million years from now, even before God got around to visiting Earth or coming to its rescue. After all, the Universe is gargantuan in the broader expanse of time and space. The observable Universe is believed to contain at least 1,000,000,000,000,000,000,000,000 (one septillion) stars besides the Sun, not to mention all of the other heavenly bodies in the Universe such as planets, moons, asteroids, comets, and so forth. That is a lot of other places (besides Earth) for God to visit and maintain. It just might turn out to be the case that little bitty Earth is not high on God's list of priorities, which means that humans must learn to regulate themselves and their behavior in a highly civilized manner as a proactive precaution against self-extinction. Humans must learn to live in peace.

Watch (Vangelis, Other Side Of Antarctica)


Please click this link to visit the Stevie Wonder bonus page for 2025's Winner.




Note: Please click the "Credits" link below to view the resources used to create this 2025 Winner page.


Intellectual Property Disclosures: All videos and songs (as well as many of the images) referenced or spotlighted throughout this website are the legal and intellectual properties of others. All content and opinions on this website are those of the author (Edward Bruessard) exclusively and do not necessarily reflect the opinions of the contributors, creators, owners, and distributors of these referenced videos, songs, and images. The author holds no legal interest or financial stake in any of these referenced videos, songs, and images. The contributors, creators, owners, and distributors of these referenced videos, songs, and images played no role at all regarding the appearance of said videos, songs, and images throughout this website; they had no clue that this website would be spotlighting their works.