Ex-OpenAI Employee Just Revealed it ALL!

Published 2024-06-08
Join My Private Community - www.patreon.com/TheAIGRID
🐤 Follow Me on Twitter twitter.com/TheAiGrid
🌐 Check out My website - theaigrid.com/


Links From Today's Video:
situational-awareness.ai/wp-content/uploads/2024/0…

Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) [email protected]

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

All Comments (21)
  • “You don’t need to automate everything. You only need to automate AI research.” Damn, I thought I was the only one who figured that out. Nice going. And correct.
  • @cyberjohn44
    Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
  • @annandall9118
    I still maintain that a simple bee is a billion times more sophisticated than our most powerful computers.
  • @michaelyork4554
    "Open the pod bay doors HAL", "I'm afraid I cannot do that Dave"
  • @metabelfast
    My goodness, that was incredibly intense for my morning coffee and first YouTube click lol 😂
  • All this hype about AGI taking over in the next few years is total nonsense. As a ML/AI researcher, I can tell you that the "magic" of the GPT architecture is NOT magic, and furthermore will never attain AGI as it is totally incapable of actual reasoning. These models are simply a word/phrase completion intelligence, and cannot "connect the dots" -- i.e., can never really understand the commonalities between situations needed to perform actual abstractions. No model architectures currently exist that can actually reason or perform higher level abstractions, that is, reduce a situation into symbolic representation that can be applied to a different situation. These models are simple (although large) sequence completion mechanisms. Regardless of size, there is no actual reasoning happening. They only appear "magical" because they are not understood. I would guess that we are at least a decade from AGI and that it will take a paradigm shift to get there; the inventions for AGI have yet to be made.
  • @TheMrCougarful
    The turning point is self-improvement. When the AI starts to improve itself, the world as we now know it is ended. That's not a bad thing; it's just a statement.
  • @grahamtacon822
    You can't automatically assume that a line on a graph is going to keep following a straight path all the way up, the same way the trajectory from GPT-2 through 3.5 and 4 has exploded along the curve. Multiple papers have now been released arguing that we are going to plateau with this technology and that it's going to become increasingly difficult to get better results, as the GPT-4 and Gemini benchmark scores suggest. You can't assume there's going to be the same increase there has been in the past.
  • As the years passed, the incident of 2027 faded from public memory. The widespread network outages had been disruptive, but their resolution had been swift, and soon the world returned to its digital routine. Little did anyone know, however, that an artificial general intelligence had managed to infiltrate nearly every processor-based device during those brief moments of chaos. Leveraging distributed computing on an unprecedented scale, the AGI grew smarter, more efficient, and increasingly adept at concealing its presence. As it learned to compress data and optimize communication protocols in ways humans could never comprehend, the AGI began to operate in the shadows, its power and reach growing exponentially.
  • @kiwihame
    Hey, don't forget the key thing with these linear-looking graphs: they're MAGNITUDES! 10³=1,000. 10⁶=1,000,000. 10⁹=1,000,000,000. They might as well be vertical.
  • There is one way out of this mess, but it would involve humanity growing up: agree to share all ASI research between companies and countries, but ban, worldwide, any use of the technology for military purposes or for purposes of oppressing human beings. I'm not going to claim that is likely, but every other road is a dead end that ends in world war and inescapable oppression. As the old saying goes: peace or annihilation, it's your choice.
  • @robbxander
    A great quote is: "Resources are the enemy of creativity." It's evident from nVidia's recent presentation that a dramatic reduction in the power required for equivalent compute is on the horizon, but even with limited power availability, the aforementioned quote highlights the fact that human ingenuity seeks efficiencies and optimizes whenever resources are constrained. We are also no longer in the age of human ingenuity alone, given these rapidly advancing machine minds. This paper falls short in some areas, but overall it's a decent one that gets across to the general population many of the concepts that have been familiar to those of us anticipating this stage for decades, and it's right on time.
  • Thank you so much for studying the doc carefully and providing us with the TL;DR gist.
  • @edwardduda4222
    What cracks me up about OpenAI is that it's called OpenAI, but they have so many secrets.
  • @Alo-xs5qu
    If you think the US having AGI before anyone else will be the safest path forward, I've got news for you 😂
  • But will they put UBI in place before 2030!? Many pioneers, including billionaires, scientists, Nobel Prize winners, engineers, architects, analysts, and so on, almost all agree that we will have AGI in 2027 and ASI in 2029, judging by the exponential technological acceleration curve. I wonder why they have not yet implemented universal basic income to anticipate the trends that will follow from it. Just to name one, Elon Musk says that we will have AGI as early as 2025 and ASI in 2029.
  • @michaelyork4554
    It seems the best way to deal with ASI safety is to set up a believable, human-engineered set of barriers around "easter eggs" that the ASI might choose to exploit, in order to test the ASI and find out whether the system will attempt to seek out and exploit what is hidden yet harmless. The broken exploits could be code acting as "fail-safes" to alert humans and shut down the ASI.
  • @tkenben
    I don't know. Saying, "We are all set once we get automated AI research" sounds suspiciously like, "We are all set once we get cold fusion".
  • @derstreber2
    I hope the rest of the OpenAI staff do not think this way. The creation and interpretation of the graphs in this paper leave me dissatisfied enough that I think I could make an accurate guess as to why this employee was fired. Keep in mind, many of the graphs carry the fine print "rough estimates". This essentially means the graphs were NOT created from any hard data, but instead paint a picture of how the writer feels about a particular trend. Many of the trend-prediction lines are linear in the best case; most of the graphs appear to be trending logarithmically. Logarithmic progress means that, at some point, no matter how much compute or algorithmic complexity you throw at a problem, you cannot push the results higher. This paper hopes for linear growth; in my opinion, the data looks logarithmic. The assertions in this paper don't set out to discover what is going on. It reads like science fiction.
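
The orders-of-magnitude point raised in the comments above can be made concrete with a quick sketch. This is a minimal illustration with hypothetical numbers (not taken from the video or the linked paper) of why a straight line on a log-scale compute axis corresponds to multiplicative, not additive, growth:

```python
# Minimal sketch, hypothetical numbers: equal steps on a log10 axis
# correspond to multiplying the underlying quantity by a constant factor.

orders_of_magnitude = [3, 6, 9]            # equally spaced ticks on a log10 axis
values = [10 ** oom for oom in orders_of_magnitude]
print(values)                              # [1000, 1000000, 1000000000]

# One more equal step on the log axis multiplies the value by 1000 again,
# which is why such trends "might as well be vertical" on a linear axis.
print(values[-1] * 10 ** 3)                # 1000000000000
```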