• CubitOom@infosec.pub · 5 days ago

    Excuse me if I don’t want to let corporations redefine my language.

I think AI can exist, but this is not it. That’s one of the reasons I don’t want to call generative or predictive models AI.

    • Carrot@lemmy.today · 4 days ago

      I guess what I’m saying is that the colloquial definition of “AI” hasn’t changed with the rise of LLMs. “AI” has been used to mean “computers that can make decisions” for at least 20 years. I don’t know if you play video games, but “AI” has been synonymous with “Bot” or “NPC” in that space for a long time now.

      When I was in college, I took classes on Artificial Neural Networks, a good few years before LLMs were released to the public. While you wouldn’t find it in a textbook, a lot of the students called ANNs “AI”.

      Hell, the term “Artificial General Intelligence” was coined in 2007 to replace “AI” for the definition you are using, since people had started using “AI” much more loosely. That was 18 years ago, long before LLMs.

      I agree that the corporations calling their LLMs AI is misleading and manipulative; hell, I could even agree that they shouldn’t be allowed to. But let’s not pretend that they have changed the definition of AI. That is fundamentally untrue.

    • lmmarsano@lemmynsfw.com · 4 days ago

      It ain’t only corporations, it’s casual, intuitive, everyday speakers—the community that owns the language—arriving there naturally from the regular meaning of individual words: They see a work that appears to be created by some form of intelligence/creativity. No natural intelligence created it. Hence, a work of artificial intelligence.

      See? Not that hard. No need to be difficult about it. Nitpicking a casual speaker over it is bound to earn you well-deserved disdain.

    • jsomae@lemmy.ml · 5 days ago

      Out of curiosity, what measurable output of a system would you need to observe as evidence that it’s AI?

      • AppleTea@lemmy.zip · 5 days ago

        “AI” in fiction has meant a machine with a mind like what people have. It’s had that meaning for decades. Very recently, programmes have appeared that do predictive text like what your phone does, but at a much larger scale. You can call the predictive-text programme an “AI”, but as the novelty wears off, it’s gonna sound more like advertising than a real description.
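
        For illustration, a minimal sketch of the phone-style prediction described above. The typing history and every name here are made up; it only shows the “suggest the likeliest next word” idea that the larger programmes scale up.

        ```python
        # Toy sketch of phone-style predictive text (all data invented):
        # suggest the words most often seen after the current one.
        from collections import Counter

        history = "i am on my way i am on the bus i am home".split()

        # Count which word follows which in the typing history.
        following = {}
        for prev, nxt in zip(history, history[1:]):
            following.setdefault(prev, Counter())[nxt] += 1

        def suggest(word, k=3):
            """Return up to k likeliest next words, like a keyboard bar."""
            return [w for w, _ in following.get(word, Counter()).most_common(k)]

        print(suggest("i"))   # ['am']
        print(suggest("am"))  # ['on', 'home']
        ```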

        • Carrot@lemmy.today · 4 days ago

          It has, but it has also meant a computer “making decisions” for decades. I would know; I’ve been using it that way for 20 years, especially in the gaming space. Playing against bots that even remotely feel like a person is playing has long been called “playing against the AI”.

          Don’t get me wrong, I agree that the marketing being done today is pretty egregious, and the folks doing it are 100% being manipulative by calling their products “AI”, but I don’t think they’ve used the term beyond a meaning it has already had for a long time.

        • jsomae@lemmy.ml · 5 days ago

          I think it’s incredible that so much of what the human brain can do can be emulated with predictive models. It makes sense in retrospect – human brains are doing prediction at every level that we can model.

          • AppleTea@lemmy.zip · 5 days ago

            A statistical model strings a sentence together with a great big web of statistical weights, settling onto the next most probable word, one by one. People write with the intent to share a meaning. It is not the same.

            That statistical (or “predictive”, if we’re gussying it up) model has no understanding in it – no more than any other programme. It’s a physical chain reaction, a calculation that runs until the sums even out to a state of rest. Wipe the web of statistical weights clean, and re-weight them so the sums spit out the colours of pixels in a JPEG rather than the contents of a .txt document.

            Hell, weight the web at random and have it spit out nonsense numbers. It’ll do that for as long as you keep the programme up. It will never ask you why you took the meaning out of its task. The machine makes no distinction between the sorts of calculations you run on it – people are the ones who project meaning onto the blinking lights.
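
            For illustration, a minimal sketch of the point above. The weight table, the word-by-word loop, and the random re-weighting are all invented for this example; the same loop runs either way, whether its weights encode sensible text or noise.

            ```python
            # Toy sketch (invented weights): string a sentence together by
            # settling on the next most probable word, one by one.
            import random

            weights = {
                "the": {"cat": 0.6, "dog": 0.4},
                "cat": {"sat": 0.7, "ran": 0.3},
                "sat": {"down": 1.0},
            }

            def generate(start, table, limit=10):
                words = [start]
                while words[-1] in table and len(words) < limit:
                    # Pick the highest-weighted next word: a pure calculation.
                    words.append(max(table[words[-1]], key=table[words[-1]].get))
                return " ".join(words)

            print(generate("the", weights))  # -> the cat sat down

            # “Weight the web at random”: the same machinery runs just as
            # happily, now spitting out nonsense. It never asks why.
            vocab = ["the", "cat", "dog", "sat", "ran", "down"]
            noise = {w: {random.choice(vocab): 1.0} for w in vocab}
            print(generate("the", noise))
            ```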

            • jsomae@lemmy.ml · 5 days ago

              You could say the same thing about rewiring a human’s neurons randomly. It’s not the powerful argument you think it is.

              We don’t really know exactly how brains work. But when, say, Wernicke’s area is damaged (but not Broca’s area), you can get people spouting meaningless but syntactically valid sentences that look a lot like autocorrect. So it could be that there’s some part of our language process which is essentially no more or less powerful than an LLM.

              Anyway, it turns out that you can do a lot with LLMs, and they can reason (insofar as they can produce logically valid chains of text, which is good enough). The takeaway for me is not that LLMs are really smart – rather, it’s that the MVP of intelligence is a much lower bar than anyone was expecting.

              • AppleTea@lemmy.zip · 4 days ago

                “You could say the same thing about rewiring a human’s neurons randomly”

                Can you? One is editing a table of variables; the other is altering a brain by some magic hypothetical. Even if you could, the person you do it to is gonna be cross with you – the programme, meanwhile, is still just a programme. People who’ve had damage to Wernicke’s area are still attempting to communicate meaningful thoughts; just because the signal is scrambled doesn’t mean the intent isn’t still there.