OpenAI’s GPT-4o: The Best AI Is Now Free!

Published 2024-05-14
❤️ Check out Microsoft Azure AI and try it out for free:
azure.microsoft.com/en-us/solutions/ai

Official link: openai.com/index/spring-update/
Try it out - if you don't see it on a free account, they may roll this out to you in the next few weeks: chatgpt.com/
Singing: openai.com/index/hello-gpt-4o/
(look for the "Two GPT-4os interacting and singing.")

📝 My paper on simulations that look almost like reality is available for free here:
rdcu.be/cWPfD

Or here is the original Nature Physics link with clickable citations:
www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu/
Károly Zsolnai-Fehér's research works: cg.tuwien.ac.at/~zsolnai/
Twitter: twitter.com/twominutepapers

#GPT4o

All Comments (21)
  • @TwoMinutePapers
    Her. Also: if you don't see it on a free account, they may roll this out to you in the next few weeks.
  • @napalmpig3772
    The aid for blind people is seriously life-changing. Eventually it'll be integrated into glasses or something, so you don't have to hold your phone up.
  • @hotlineoperator
    Free for 3 hours per day, and if you use image or file prompts, the limit is lower. It's a teaser, but a good one.
  • @brickie9816
    The last steps to build a HAL 9000 have just been completed. It can even sing now.
  • @halko1
    I've been using the text-prompt version of GPT-4o in a programming context, and I'm pretty impressed by the improvements over the previous version. A complex data-parsing scenario that was nearly impossible to get working before was a breeze with the -o. I love it. 🥰 (A sketch of this kind of use via the API appears after the comments.)
  • @aguvener
    One more step to help the visually impaired. What a time to be alive!
  • @pirincri
    They really ought to be forced into changing their name. "OpenAI" gives off a misleading impression to the general public regarding open source software.
  • @JacketAndTie
    I don't see an option to switch to 4o... it asks me to upgrade to 4 with the plus plan only
  • @DanielSeacrest
    Any-to-any multimodal models are something I've been waiting for! The ability to translate between text, image, and audio is a really cool idea, and I can't wait to get access to all the new multimodal features.
  • The only thing I don't like about it is that the AI voice sounds so forced, like that kind of pleasant talk-show tone. It just sounds so unnaturally happy, and that rubs me the wrong way. Then again, that does mean the AI voice is getting so good that it activates the uncanny valley in me! Closer and closer every day, what a time to be alive!
  • @Arkryal
    This may sound trivial considering all its potential, but I've been having fun letting it identify tree species. It's crazy good at that. I literally used a 200x200-pixel blurry image from Google Street View of my house, and it identified a linden tree from a fair distance. Of course I know what tree it is - I planted it. I know a lot about trees, and even I could not have identified it from an image of that low quality if I didn't already know what it was. You couldn't make out leaf shape, bark type or anything, just kind of a green blur, lol. But holy crap... It works on aerial images too (though it requires higher quality than Google Maps). Should be interesting for things like foraging. I've tried this previously, and I can say the results are much, much better in this version.

    I also asked it to identify the best fishing locations given a general map of a local creek. I've fished the creek before, so I know the best spots, and it identified them fairly well. I asked it to highlight them on the map, which it kinda succeeded at / kinda failed. It generated the outline overlay in Python, and the overlay would have been in the correct location, but it didn't actually produce the requested image. The code was correct, though. The pieces are there; the output just needs a little more polishing. (A sketch of what such an overlay script could look like appears after the comments.)

    What's impressive is the logic it used. It could see color in the water to estimate the depth in various parts; it located a bend in the creek where the water flow would be slower, just upstream of a weir and thus more attractive to fish, and an area with cover for the fish; and it even considered land accessibility, since a creek that size would likely be fished from shore. It was able to analyze the image and use knowledge of freshwater fish habitat and fishing practices to pin down the ideal location. I tested that because I seriously doubt anyone at OpenAI has considered that use case, but the results were absolutely correct.

    Imagine a lengthier custom prompt and uploading some local fishing guides, and actually telling it the region, time of year or fish species, the local fishing regulations, etc. Fishing guides, you are on notice - you could be obsolete by this afternoon, lol. Imagine what this could do for the commercial fishing industry as well. Even if it offers a 1% improvement in yields, that's massive at scale.
  • @bhuvan1036
    Did they just murder the already dead Rabbit R1 and Humane Pin? 💀 💀
  • @mrdoublea99
    With every new version OpenAI releases I more and more get the feeling that soon J.A.R.V.I.S. and F.R.I.D.A.Y. won't be just fictional AIs from a movie anymore. Wow.
  • @orgy025
    When it went to read the bedtime story, the AI kind of sounded sarcastically enthusiastic, like it knows that Barry or whatever his name is doesn't want to hear a bedtime story about robots, and it's the dumbest thing it's done in a hot minute.
  • @seto007
    Not to downplay this, since it truly is an incredible innovation, but something odd I've noticed with the speech synthesis is that the voice usually starts off sounding incredibly artificial, often with an odd sound, and then very quickly starts to sound quite human in both inflection and tone, and this seems to be rather consistent. Is there a good explanation for this phenomenon? I haven't noticed it with any other AI voice synthesizers.
  • @erikbranmarino
    Italian PhD researcher here, and the results look amazing! Of course, the Italian voice could be improved, as could the non-English voices in general, but, I mean, it's already very impressive!
  • What a time to be alive indeed! The voice integration and emotional speech make all the difference.
  • @TheBigLou13
    It's free as in "it only costs your identity", since everything you ever ask or get to know through it will be tied to you. Apply what you know from other data krakens like Facebook about what they can and will do with it.
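
For @halko1's programming use case: a minimal, illustrative sketch of what a small data-parsing request to GPT-4o could look like through the OpenAI Python SDK rather than the chat window. The log line, prompt, and field names are invented for the example; only the SDK calls and the "gpt-4o" model name come from OpenAI's public API.

# Minimal sketch: ask GPT-4o to turn a messy log line into JSON.
# The log format and prompt below are made up for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

raw_record = "2024-05-14T10:22:31Z|user=42|event=login;retries=3"  # hypothetical input

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Convert the log line into JSON with keys timestamp, user, "
                       "event and retries. Reply with JSON only.",
        },
        {"role": "user", "content": raw_record},
    ],
)

print(response.choices[0].message.content)  # the model's JSON reply

The same pattern extends to the harder parsing scenarios the comment mentions; in practice you would also validate the returned JSON before trusting it.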
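
For @Arkryal's fishing-map experiment: the comment says GPT-4o wrote a correct Python overlay script but never rendered the image. Purely as an illustration of what such a script could look like (not what the model actually produced), here is a minimal sketch using matplotlib and Pillow; the file name and pixel coordinates are hypothetical placeholders.

# Minimal sketch: highlight a suggested fishing spot on a map image
# by drawing a translucent polygon over it and saving the result.
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from PIL import Image

img = Image.open("creek_map.png")  # hypothetical map screenshot

# Hypothetical outline of the suggested spot, in pixel coordinates.
spot_outline = [(320, 210), (360, 205), (385, 240), (350, 270), (315, 250)]

fig, ax = plt.subplots()
ax.imshow(img)
ax.add_patch(Polygon(spot_outline, closed=True,
                     facecolor="red", edgecolor="red", alpha=0.35, linewidth=2))
ax.set_axis_off()
fig.savefig("creek_map_highlighted.png", bbox_inches="tight", dpi=150)

This only draws the overlay; the interesting part in the comment is that the model chose where the outline should go from the map itself.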