• jsomae@lemmy.ml · 6 days ago

    You could say the same thing about rewiring a human’s neurons randomly. It’s not the powerful argument you think it is.

    We don’t really know exactly how brains work. But when, say, Wernicke’s area is damaged (but not Broca’s area), you can get people spouting meaningless but syntactically valid sentences that look a lot like autocorrect. So it could be that there’s some part of our language process which is essentially no more or less powerful than an LLM.

    Anyway, it turns out that you can do a lot with LLMs, and they can reason (insofar as they can produce logically valid chains of text, which is good enough). The takeaway for me is not that LLMs are really smart – rather, it’s that the MVP of intelligence is a much lower bar than anyone was expecting.

    • AppleTea@lemmy.zip · 5 days ago

      You could say the same thing about rewiring a human’s neurons randomly

      Can you? One is editing a table of variables; the other is altering a brain by some magic hypothetical. Even if you could, the person you do it to is gonna be cross with you – the programme, meanwhile, is still just a programme. People who’ve had damage to Wernicke’s area are still attempting to communicate meaningful thoughts; just because the signal is scrambled doesn’t mean the intent isn’t still there.