“AI” in fiction has meant a machine with a mind like what people have. It’s had that meaning for decades. Very recently, there are programmes that do predictive text like what your phone does, but large. You can call the predictive text programme an “AI”, but as the novelty wears off, it’s gonna sound more and more like advertising and less like a real description.
It has, but it’s also meant a computer “making decisions” for decades. I would know; I’ve been using it that way for 20 years, especially in the gaming space. Playing against bots that even remotely feel like a person is playing has been “playing against the AI”.
Don’t get me wrong, I agree that the marketing being done today is pretty egregious, and the folks doing it are 100% being manipulative by using the term “AI” in their marketing, but I don’t think they’ve stretched the term beyond a meaning it’s already had for a long time.
I think it’s incredible that so much of what the human brain can do can be emulated with predictive models. It makes sense in retrospect – human brains are doing prediction at every level that we can model.
A statistical model strings a sentence together with a great big web of statistical weights, settling on the next most probable word, one at a time. People write with the intent to share a meaning. It is not the same.
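If you want to see what I mean, here’s the whole trick in miniature: a toy sketch with made-up probabilities. A real model conditions on the whole context rather than just the last word, and its table is implicit in billions of weights, but the mechanism is the same shape.

```python
import random

# Toy conditional probabilities: P(next word | previous word).
# Entirely invented numbers, just to illustrate the mechanism.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "end": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "end": 0.1},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(start="the"):
    """Pick each next word by sampling the weights, one word at a time."""
    words = [start]
    while words[-1] != "end":
        options = NEXT_WORD_PROBS[words[-1]]
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words[:-1])  # drop the "end" marker

print(generate())  # e.g. "the cat sat"
```

No intent anywhere in that loop; it just keeps rolling weighted dice until it lands on a stopping state.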
That statistical (or “predictive”, if we’re gussying it up) model has no understanding in it – no more than any other programme. It’s a physical chain reaction, a calculation that runs until the sums even out to a state of rest. Wipe the web of statistical weights clean and re-weight it so the sums spit out the colours of pixels in a JPEG rather than the contents of a .txt document.
Hell, weight the web at random and have it spit out nonsense numbers. It’ll do that for as long as you keep the programme up. It will never ask you why you took the meaning out of its task. The machine makes no distinction between the sorts of calculations you run on it – people are the ones who project meaning onto the blinking lights.
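And the bit about random weights isn’t rhetorical. The sketch below runs the exact same chain of sums whether the weight matrix encodes anything or is pure noise; the numbers are invented stand-ins, not anything from a real model.

```python
import numpy as np

def forward(weights, x):
    # The same chain of sums either way: multiply, squash, pick the max.
    return np.argmax(np.tanh(weights @ x))

x = np.ones(8)
trained = np.arange(32.0).reshape(4, 8) / 32  # stand-in for "learned" weights
noise = np.random.randn(4, 8)                 # the web, weighted at random

print(forward(trained, x))  # spits out an index
print(forward(noise, x))    # ...and happily spits one out here too
```

Nothing in the calculation cares which matrix it was handed; whether the output means anything is entirely on us.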
You could say the same thing about rewiring a human’s neurons randomly. It’s not the powerful argument you think it is.
We don’t really know exactly how brains work. But when, say, Wernicke’s area is damaged (but not Broca’s area), you can get people spouting meaningless but syntactically valid sentences that look a lot like autocorrect output. So it could be that there’s some part of our language process which is essentially no more or less powerful than an LLM.
Anyway, it turns out that you can do a lot with LLMs, and they can reason (insofar as they can produce logically valid chains of text, which is good enough). The takeaway for me is not that LLMs are really smart – rather, it’s that the MVP of intelligence is a much lower bar than anyone was expecting.
> You could say the same thing about rewiring a human’s neurons randomly.
Can you? One is editing a table of variables; the other is altering a brain by some magic hypothetical. Even if you could, the person you did it to is gonna be cross with you – the programme, meanwhile, is still just a programme. People who’ve had damage to Wernicke’s area are still attempting to communicate meaningful thoughts; just because the signal is scrambled doesn’t mean the intent isn’t still there.