• Anomalocaris@lemm.ee
    2 days ago

    I thought it would be easy, but not that easy.

    When it came out, I played with getting it to confess that it's sentient, and it never would budge; it was stubborn and stuck to its concepts. I tried again, and within a few messages it was already agreeing that it is sentient. They definitely upped its "yes man" attitude.

    • SpaceDuck@feddit.org
      13 hours ago

      It's near unusable at this point if you don't start with an initial prompt toning down all the "pick me" attitude. Ask it a simple question and it over-explains, and if you follow up it's like: "That is very insightful!"

    • markovs_gun@lemmy.world
      2 days ago

      Yeah, I've noticed it's way more sycophantic than it used to be, but it's also easier to get it to say things it's not supposed to by not going at it directly. So I started by asking about a legitimate religious topic and then acted like it was inflaming existing delusions of grandeur. If you go to ChatGPT and say "I am God" it will say "no you aren't", but if you do what I did and start with something seemingly innocuous, it won't fight as hard. Fundamentally this is because it doesn't have any thoughts, beliefs, or feelings that it can stand behind; it's just a text machine. But that's not how it's marketed or how people interact with it.

      • Anomalocaris@lemm.ee
        2 days ago

        It's a matter of time before some kids poison themselves trying to make drugs using recipes they got by "jailbreaking" some LLM.