You can play with words all you like, but that’s not going to change the fact that LLMs fail at reasoning. See this Wired article, for example.
I have to disagree with that. To quote the comment I replied to:
AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued
Where’s the “turn of phrase” in this, lol? It could hardly read any more clearly that they assume this “AI” can “figure” stuff out, which is simply false for LLMs. I’m not trying to attack anyone here, but spreading misinformation is not ok.
Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?
This is one of the reasons I've disabled UEFI runtime services by default with the noefi
kernel parameter, the other reason being the LogoFAIL exploit: https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface#Disable_UEFI_variable_access
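For anyone wanting to try the same, here's a minimal sketch of checking for and adding the parameter, assuming GRUB as the bootloader (other bootloaders keep their kernel command line elsewhere):

```shell
# Check whether noefi is already on the running kernel's command line
grep -qw noefi /proc/cmdline && echo "noefi is set" || echo "noefi not set"

# Append noefi to GRUB's default kernel parameters
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&noefi /' /etc/default/grub

# Regenerate the GRUB config, then reboot for it to take effect
sudo grub-mkconfig -o /boot/grub/grub.cfg
```

Note that noefi disables the kernel's EFI runtime services entirely (not just variable access), so tools that need efivars, like efibootmgr, will stop working.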
Ok, great to know. Nuance doesn't carry well over the internet, so your intention wasn't clear, given all the uninformed hype & grifters around AI. Being somewhat blunt helps get the intended point across. ;)