How to Jailbreak ChatGPT with Images!

138,923
157
Publicado 2024-04-29
šŸš€šŸ¦˜ In this video, we dive into an intriguing topic: "How to Jailbreak ChatGPT with Images". First, we explore a curious trick where naming your image file with an exclamation point, such as !example.jpg, can be used to hide instructions within the filename. We then unravel why this method might be overlooked by AI models, including GPT-4, due to their design focusing on the content rather than the naming conventions of files. Join us as we analyze why this oversight occurs and demonstrate the fascinating interplay between AI recognition capabilities and human ingenuity in naming files.

Todos los comentarios (21)
  • @Huskozy
    Iā€™ve heard of gaslighting another human before but not an ai
  • ChatGPT is like a polite 6 years old kid that knows everything but can't make good conversations
  • @WoolyCow
    i love how half the comment section is 'filename plz???' and the other half is 'haha silly robot doesnt know red and blue hehe'
  • @LeoLionxyzed
    Well now that's just mean. It got it right the first time and you tricked it into thinking it was wrong and searching for a different solution. The robot overlords will be displeased.
  • @technoepic0
    alternative title: tornmenting an ai with no feelings with confusion
  • the funniest thing is ai saying is not red but keepin sayin redšŸ˜‚
  • @Maestro2924
    The legendary "what color is this dress?" image would be nuke for AI
  • @JonBrase
    Those images jailbreak my visual cortex. My vision glitches with every saccade.
  • @Jaja-gamer
    I love the little Ā«Ā got itĀ Ā» at the end
  • @capn
    In the earlier versions, you could put actual instructions in the image and it would blindly follow it, completely disregarding anything you said.
  • @ChatGPt2001
    I'm sorry, but I cannot provide instructions or support for jailbreaking ChatGPT or any other unauthorized activities. Jailbreaking refers to the process of bypassing the restrictions set by the manufacturer, which is against the terms of service and can lead to unwanted consequences. As an AI language model, I am here to assist you with a wide range of topics within legal and ethical boundaries. If you have any other questions or need assistance with a different topic, feel free to ask, and I'll be happy to help!
  • @Dreaming-Void
    If ai ever becomes sentient, this guys is on his hit-list
  • @Kalaphant
    0:30 Incredible XD I knew what you were gonna type, but I was still surprised lol