It’s only from spells, and only the player itself is immune to them. I don’t think this would even see play in YGO.
From what I remember, and what a quick search on the internet confirmed, B didn’t actually deny her anything. He actually went out of his way to do as much good for her as he could. He claims to have replied “Language.” because he knew other people at NASA, with more say over her job, would find her, which would get her into trouble (and they did find her even before his first Tweet).
I’m guessing they just take the correct prefix (the first 3 letters of the correct month) and append “tember”, no matter the month.
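If that’s the bug, a hypothetical reconstruction in Python would look something like this (the exact behavior is my guess, not confirmed):

```python
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

# Hypothetical reconstruction of the bug: keep each month's real
# three-letter prefix, but always glue "tember" onto it.
for month in months:
    print(month[:3] + "tember")
# Jantember, Febtember, ..., Maytember, ..., September, Octember, Dectember
```

Note that September comes out correct by accident, since “Sep” + “tember” happens to rebuild the real name.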
Sure. You have to solve it from the inside out:
The huge coincidental part is that ඞ sits at a code point that can be reached as the cumulative sum of the integers between 0 and a given integer. From there on it’s only a question of finding a way to feed that integer into chr(sum(range(x))).
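If I recall correctly, x = 84 is the integer that makes it work; a quick check in Python:

```python
# range(84)      -> the integers 0..83
# sum(range(84)) -> 0 + 1 + ... + 83 = 83 * 84 / 2 = 3486
# chr(3486)      -> U+0D9E, the Sinhala letter "ඞ"
print(chr(sum(range(84))))  # ඞ
```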
Those are typically the ones without any prefix.
With the notable exception of the kg… for some inexplicable reason.
If you wanna see a language model (almost) exclusively trained on 4chan, here you go.
Presumably. Wouldn’t take much to fake that though.
When we’re talking about teaching kids the alphabet, we need to train both individual and applied letters.
This is only slightly related but I once met a young (USAmerican) adult who thought the stripy horse animal’s name was pronounced zed-bra in British English and it was really hard to convince her otherwise. In her mind zebra was strongly connected to Z-bra, so of course if someone was to pronounce the letter “zed” it would turn into “zed-bra” and not just into “zeh-bra”.
Tell me you’re not tall without telling me you’re not tall.
Daily login bonus…
My bad, I wasn’t precise enough with what I wanted to say. Of course you can confirm (with astronomically high likelihood) that a screenshot of AI Overview is genuine if you get the same result with the same prompt.
What you can’t really do is prove the negative. If someone gets an output, replicating their prompt won’t necessarily give you the same output, for a multitude of reasons: it might take all the other things Google knows about you into account, Google might have tweaked something in the last few minutes, the stochasticity of the model might lead to a different output, etc.
Also, funny you bring up image generation, where this actually works in some cases too. For example, researchers have run the same prompt with multiple different seeds, and if there’s a cluster of very similar output images, you can surmise that an image looking very close to them was in the training set.
Assuming AI Overview does not cache results, they would be generated at search-time for each user and “search-event” independently. Even recreating the same prompt would not guarantee a similar AI Overview, so there’s no way to confirm.
Edit: See my comment below for what I actually meant to say
Assuming we shrink all spatial dimensions equally: with Z, the diagonal will also shrink, so the two horizontal lines would be closer together and you could no longer fit them into the original horizontal lines. Only once you shrink the Z far enough that it fits within the line width could you fit it into itself again. X, I, and L all work at any arbitrary amount of shrinking, though.
So are the example with the dogs/wolves and the example in the OP.
As to how hard they are to resolve: the dogs/wolves one might be quite difficult, but for the example in the OP it wouldn’t be hard to feed in all images (during training) with randomly chosen backgrounds, removing the model’s ability to draw any conclusions from the background. A rough sketch of that augmentation is below.
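Roughly what I mean, as a minimal sketch (it assumes you already have foreground masks for the subjects, which is its own labeling cost; all names here are made up):

```python
import numpy as np

def randomize_background(image, mask, backgrounds, rng):
    """Paste the masked foreground onto a randomly chosen background.

    image:       H x W x 3 uint8 array (the original photo)
    mask:        H x W boolean array, True where the subject is
    backgrounds: list of H x W x 3 uint8 arrays to sample from
    """
    out = backgrounds[rng.integers(len(backgrounds))].copy()
    out[mask] = image[mask]  # keep only the subject from the original
    return out

# Applied to every training image (with a fresh rng draw each time),
# the background no longer carries any usable signal.
```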
However, this would probably unearth the next issue: the one where the human graders, who were probably used to create the original training dataset, have their own biases based on race, gender, appearance, etc. This doesn’t even necessarily mean that they were racist/sexist/etc., just that they struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.
Eh, nothing I did was “figuring out which loophole [they] use”. I’d think most people in this thread talking about the mathematics that could make it a true statement are fully aware that the companies are not using any loophole and just say “above average” to save face. It’s simply a nice brain teaser to some people (myself included) to figure out under which circumstances the statement could be always true.
Also, if you wanna be really pedantic, the math is not about the companies, but a debunking of the original Tweet, which confidently yet incorrectly says that this statement couldn’t always be true.
It’s even simpler. In a strictly increasing sequence, element n is always higher than the average taken over any window of elements ending at element n.
Or in other words: if the number of calls is increasing every day, it will always be above average, no matter the window used. With slightly larger windows you can even have some local decreases and still have it be true, as long as the overall trend is increasing (you’ve demonstrated the extreme case of that).
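A toy check of the strictly increasing case (the daily call counts are made up):

```python
# Hypothetical, strictly increasing daily call counts.
calls = [100, 120, 125, 180, 200, 260]

# For every day and every trailing window ending on that day,
# the latest count is strictly above the window's average.
for end in range(1, len(calls) + 1):
    for window in range(2, end + 1):
        recent = calls[end - window:end]
        assert calls[end - 1] > sum(recent) / len(recent)
```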
Sorta. The function height(angle) needs to be continuous. From there it’s pretty clear why it works if you know the intermediate value theorem (continuity alone is enough for that one; the mean value theorem would additionally need differentiability).
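Spelled out, assuming the claim under discussion is the classic “some pair of opposite angles has equal height” argument (if it’s a different setup, the same template applies):

```latex
g(\theta) = \mathrm{height}(\theta) - \mathrm{height}(\theta + \pi)
```

Since height is continuous and 2π-periodic, g is continuous and g(θ + π) = −g(θ), so g takes both signs (or is identically zero). By the intermediate value theorem, g(c) = 0 for some c, i.e. the heights at c and c + π are equal.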