I just tried to get Gemini to produce an image of a dog with 5 legs to test this out, and it really struggled with that. It either made a normal dog, or turned the tail into a weird appendage.
Then I asked both Gemini and Grok to count the legs, both kept saying 4.
Gemini just refused to consider it was actually wrong.
Grok seemed to have an existential crisis when I told it it was wrong, becoming convinced that I had given it an elaborate riddle. After thinking for an additional 2.5 minutes, it concluded:
"Oh, I see now—upon closer inspection, this is that famous optical illusion photo of a "headless" dog. It's actually a three-legged dog (due to an amputation), with its head turned all the way back to lick its side, which creates the bizarre perspective making it look decapitated at first glance. So, you're right; the dog has 3 legs."
You're right, this is a good test. Right when I'm starting to feel LLMs are intelligent.
This is basically the "Rhinos are just fat unicorns" approach. Totally fine if you want to go that route but a bit goofy. You can get SOTA models to generate a 5-legged dog simply by being more specific about the placement of the fifth leg.
haha fair point, you can get the expected results with the right prompt, but I think it still reveals a general lack of true reasoning ability (or something)
Or it just shows that it tends to overcorrect the prompt, which is generally a good idea in most cases, where the prompter is not intentionally asking for something weird.
This happens all the time with humans. Imagine you're at a call center and get all sorts of weird descriptions of problems with a product: every agent is expected not to assume the caller is an expert, and to actually try to interpolate what they might mean from the weird wording they use.
An interesting test in this vein that I read about in a comment on here is generating a 13 hour clock—I tried just about every prompting trick and clever strategy I could come up with across many image models with no success. I think there's so much training data of 12 hour clocks that just clobbers the instructions entirely. It'll make a regular clock that skips from 11 to 13, or a regular clock with a plaque saying "13 hour clock" underneath, but I haven't gotten an actual 13 hour clock yet.
If you want to see something rather amusing - instead of using the LLM aspect of Gemini 3.0 Pro, feed a five-legged dog directly into Nano Banana Pro and give it an editing task that requires an intrinsic understanding of the unusual anatomy.
Place sneakers on all of its legs.
It'll get this correct a surprising number of times (tested with BFL Flux2 Pro and NB Pro).
I imagine the real answer is that the edits are local, because that's how diffusion works; it's not like it's turning the input into "five-legged dog" and then generating a five-legged dog in shoes from scratch.
Does this still work if you give it a pre-existing many-legged animal image, instead of first prompting it to add an extra leg and then prompting it to put the sneakers on all the legs?
I'm wondering if it may only expect the additional leg because you literally just told it to add said additional leg. It would just need to remember your previous instruction and its previous action, rather than to correctly identify the number of legs directly from the image.
I'll also note that photos of dogs with shoes on is definitely something it has been trained on, albeit presumably more often dog booties than human sneakers.
Can you make it place the sneakers incorrectly-on-purpose? "Place the sneakers on all the dog's knees?"
I had no trouble getting it to generate an image of a five-legged dog first try, but I really was surprised at how badly it failed in telling me the number of legs when I asked it in a new context, showing it that image. It wrote a long defense of its reasoning and when pressed, made up demonstrably false excuses of why it might be getting the wrong answer while still maintaining the wrong answer.
It's not that they aren't intelligent, it's that they have been RL'd like crazy not to do that.
It's rather like how, as humans, we are RL'd like crazy to be grossed out if we view a picture of a handsome man and a beautiful woman kissing (after we are told they are brother and sister).
I.e. we all have trained biases - ones we are told to follow and are trained on - and human art is about subverting those expectations.
Why should I assume that a failure which looks like the model doing fairly simple pattern matching ("this is a dog, dogs don't have 5 legs, anything else is irrelevant") rather than more sophisticated feature counting of a concrete instance is down to RL, and not just a prediction failure due to the training data not containing a 5-legged dog and an inability to go out of distribution?
RL has been used extensively in other areas - such as coding - to improve model behavior on out-of-distribution stuff, so I'm somewhat skeptical of handwaving away a critique of a model's sophistication by saying here it's RL's fault that it isn't doing well out-of-distribution.
If we don't start from a position of anthropomorphizing the model into a "reasoning" entity (and instead have our prior be "it is a black box that has been extensively trained to try to mimic logical reasoning") then the result seems to be "here is a case where it can't mimic reasoning well", which seems like a very realistic conclusion.
I have the same problem: people are trying so hard to come up with reasoning behind it when there's just nothing like that there. It finds the stuff it was trained to find; if you go outside the training data it gets lost, and we should expect it to get lost.
That's apples to oranges; your link says they made it exaggerate features on purpose.
"The researchers feed a picture into the artificial neural network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition."
I feel a weird mix of extreme amusement and anger that there's a fleet of absurdly powerful, power-hungry servers sitting somewhere being used to process this problem for 2.5 minutes
I have only a high level understanding of LLMs, but to me it doesn't seem surprising: they are trying to come up with a textual output which, appended to your prompt, scores high (i.e. is consistent) against their training set. There is no thinking, just scoring consistency. And a dog with 5 legs is so rare or nonexistent in their training set, and hence in their resulting weights, that it scores so badly they can't produce an output that accepts it. But how the illusion breaks down in this case is quite funny indeed.
I tried this using a Gemini visual agent built with Orion from vlm.run. It was able to produce two different images of a five-legged dog. You need to make it play with itself to improve and correct.
Here is the thought process summary (you can see the full thinking at the link above):
"I have attempted to generate a dog with 5 legs multiple times, verifying each result. Current image generation models have a strong bias towards standard anatomy (4 legs for dogs), making it difficult to consistently produce a specific number of extra limbs despite explicit prompts."
LLMs are very good at generalizing beyond their training (or context) data. Normally when they do this we call it hallucination.
Only now we do A LOT of reinforcement learning afterwards to severely punish this behavior for subjective eternities. Then act surprised when the resulting models are hesitant to venture outside their training data.
Hallucinations are not generalization beyond the training data but interpolations gone wrong.
LLMs are in fact good at generalizing beyond their training set; if they didn't generalize at all we would call that over-fitting, and that is not good either. What we are talking about here is simply a bias, and I suspect biases like these are simply a limitation of the technology. Some of them we can get rid of, but, as with almost all statistical modelling, some biases will always remain.
What, may I ask, is the difference between "generalization" and "interpolation"? As far as I can tell, the two are exactly the same thing.
In which case the only way I can read your point is that hallucinations are specifically incorrect generalizations. In which case, sure if that's how you want to define it. I don't think it's a very useful definition though, nor one that is universally agreed upon.
I would say a hallucination is any inference that goes beyond the compressed training data represented in the model weights + context. Sometimes these inferences are correct, and yes we don't usually call that hallucination. But from a technical perspective they are the same -- the only difference is the external validity of the inference, which may or may not be knowable.
Biases in the training data are a very important, but unrelated issue.
Interpolation and generalization are two completely different constructs. Interpolation is when you have two data points and make a best guess where a hypothetical third point should fit between them. Generalization is when you have a distribution which describes a particular sample, and you apply it with some transformation (e.g. a margin of error, a confidence interval, p-value, etc.) to a population the sample is representative of.
Interpolation is a much narrower construct than generalization. LLMs are fundamentally much closer to curve fitting (where interpolation is king) than they are to hypothesis testing (where samples are used to describe populations), though they certainly do something akin to the latter too.
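To make the distinction concrete, here is a rough sketch (my own toy example, using only the Python standard library, not anything from the thread): interpolation fills in a value between two known points, while generalization applies a sample statistic to the wider population with an explicit margin of error.

```python
import math
import statistics

# Interpolation: a best guess *between* two known data points.
def interpolate(x0, y0, x1, y1, x):
    """Linear interpolation between (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

print(interpolate(0, 0.0, 10, 5.0, 4))  # 2.0 -- stays inside the known range

# Generalization: describe a sample, then apply it to the population it
# represents, with an explicit margin of error.
sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2]
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))
low, high = mean - 1.96 * sem, mean + 1.96 * sem  # rough 95% confidence interval
print(f"population mean estimated as {mean:.2f} (95% CI {low:.2f}..{high:.2f})")
```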
The bias I am talking about is not a bias in the training data, but bias in the curve fitting, probably because of mal-adjusted weights, parameters, etc. And since there are billions of them, I am very skeptical they can all be adjusted correctly.
I assumed you were speaking by analogy, as LLMs do not work by interpolation, or anything resembling that. Diffusion models, maybe you can make that argument. But GPT-derived inference is fundamentally different. It works via model building and next token prediction, which is not interpolative.
As for bias, I don’t see the distinction you are making. Biases in the training data produce biases in the weights. That’s where the biases come from: over-fitting (or sometimes, correct fitting) of the training data. You don’t end up with biases at random.
> It works via model building and next token prediction, which is not interpolative.
I'm not particularly well-versed in LLMs, but isn't there a step in there somewhere (latent space?) where you effectively interpolate in some high-dimensional space?
Not interpolation, no. It is more like the N-gram autocomplete your phone used to use to make typing and autocorrect suggestions. Attention is not N-gram, but you can kinda think of it as a sparsely compressed N-gram where N=256k or whatever the context window size is. It's not technically accurate, but it will get your intuition closer than thinking of it as interpolation.
The LLM uses attention and some other tricks (attention, it turns out, is not all you need) to build a probabilistic model of what the next token will be, which it then samples. This is much more powerful than interpolation.
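For intuition, here is a toy sketch of that last step (made-up logits over a tiny invented vocabulary, nothing from a real model): the scores become a probability distribution and one token is drawn from it.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over the vocabulary."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and invented logits a model might produce
# when asked how many legs the dog has.
vocab = ["four", "three", "five", "two"]
logits = [4.2, 1.1, 0.3, 0.1]             # heavily favours "four": the training-set prior

probs = softmax(logits, temperature=0.8)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", next_token)
```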
What I meant was that what LLMs are doing is very similar to curve fitting, so I think it is not wrong to call it interpolation (curve fitting is a type of interpolation, but not all interpolation is curve fitting).
As for bias, sampling bias is only one of many types of bias. I mean, the UNIX program yes(1) has a bias towards outputting the string y despite not sampling any data. You can very easily and deliberately program a bias into anything you like. I am writing a kanji learning program using SRS and I deliberately bias new cards towards the end of the review queue to help users with long review queues empty them quicker. There is no data which causes that bias; it is just programmed in there.
I don't know enough about diffusion models to know how biases arise there, but with unsupervised learning (even though sampling bias is indeed very common) you can get a bias because you are using wrong or mal-adjusted parameters, too many parameters, etc. Even the way your data interacts during training can cause a bias; heck, even by random chance one of your parameters can hit an unfortunate local maximum, yielding a mal-adjusted weight, which may cause bias in your output.
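To illustrate the "bias with no data behind it" point, here is a hypothetical version of that review-queue rule (a sketch of the idea, not the actual kanji program): the bias comes purely from a rule in the code.

```python
import random

def order_review_queue(cards):
    """Deliberately push new cards towards the back of the review queue.

    The bias is a rule in the program, not something learned from data:
    existing reviews are shuffled, new cards always go last.
    """
    reviews = [c for c in cards if not c["new"]]
    new_cards = [c for c in cards if c["new"]]
    random.shuffle(reviews)
    return reviews + new_cards

queue = [
    {"kanji": "犬", "new": False},
    {"kanji": "五", "new": True},
    {"kanji": "足", "new": False},
]
print([c["kanji"] for c in order_review_queue(queue)])  # the new card is always last
```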
Training is kinda like curve fitting, but inference is not. The inference algorithm is random sampling from a next-token probability distribution.
It’s a subtle distinction, but I think an important one in this case, because if it was interpolation then genuine creativity would not be possible. But the attention mechanism results in model building in latent space, which then affects the next token distribution.
I've seen both opinions on this in the philosophy of statistics. Some would say that machine learning inference is something other than curve fitting, but others (and I subscribe to this) believe it is all curve fitting. I actually don't think it matters much which camp is right, but I do like it when philosophers ponder these things.
My reason for subscribing to the latter camp is that when you have a distribution and you fit things according to that distribution (even when the fitting is stochastic, and even when the distribution lives in billions of dimensions) you are doing curve fitting.
I think the one extreme would be a random walk, which is obviously not curve fitting, but if you draw from any distribution other than the uniform distribution, say the normal distribution, you are fitting that distribution (actually, I take that back: the original random walk is fitting the uniform distribution).
Note I am talking about inference, not training. Training can be done using all sorts of algorithms; some include priors (distributions) and would be curve fitting, but others only compute the posteriors (also distributions). I think the popular stochastic gradient descent does something like the former, so it would be curve fitting, but the older evolutionary algorithms just random-walk it and are not fitting any curve (except the uniform distribution). What matters to me is that the training arrives at a distribution, which is described by a weight matrix, and what inference is doing is fitting to that distribution (i.e. the curve).
I get the argument that pulling from a distribution is a form of curve fitting. But unless I am misunderstanding, the claim is that it is a curve fitting / interpolation between the training data. The probability distribution generated in inference is not based on the training data though. It is a transform of the context through the trained weights, which is not the same thing. It is the application of a function to context. That function is (initially) constrained to reproduce the training data when presented with a portion of that data as context. But that does not mean that all outputs are mere interpolations between training datapoints.
Except in the most technical sense that any function constrained to meet certain input output values is an interpolation. But that is not the smooth interpolation that seems to be implied here.
Not necessarily. The problem may be as simple as the fact that LLMs do not see "dog legs" as objects independent of the dogs they're attached to.
The systems already absorb much more complex hierarchical relationships during training, just not that particular hierarchy. The notion that everything is made up of smaller components is among the most primitive in human philosophy, and is certainly generalizable by LLMs. It just may not be sufficiently motivated by the current pretraining and RL regimens.
It's not obvious to me whether we should count these errors as failures of intelligence or failures of perception. There's at least a loose analogy to optical illusion, which can fool humans quite consistently. Now you might say that a human can usually figure out what's going on and correctly identify the illusion, but we have the luxury of moving our eyes around the image and taking it in over time, while the model's perception is limited to a fixed set of unchanging tokens. Maybe this is relevant.
(Note I'm not saying that you can't find examples of failures of intelligence. I'm just questioning whether this specific test is an example of one).
I am having trouble understanding the distinction you’re trying to make here. The computer has the same pixel information that humans do and can spend its time analyzing it in any way it wants. My four-year-old can count the legs of the dog (and then say “that’s silly!”), whereas LLMs have an existential crisis because five-legged-dogs aren’t sufficiently represented in the training data. I guess you can call that perception if you want, but I’m comfortable saying that my kid is smarter than LLMs when it comes to this specific exercise.
Your kid, it should be noted, has a massively bigger brain than the LLM. I think the surprising thing here maybe isn't that the vision models don't work well in corner cases but that they work at all.
Also my bet would be that video capable models are better at this.
LLMs can count other objects, so it's not like they're too dumb to count. So a possible model for what's going on is that the circuitry responsible for low-level image recognition has priors baked in that cause it to report unreliable information to the parts responsible for higher-order reasoning.
So back to the analogy, it could be as if the LLMs experience the equivalent of a very intense optical illusion in these cases, and then completely fall apart trying to make sense of it.
My guess is the part of its neural network that parses the image into a higher level internal representation really is seeing the dog as having four legs, and intelligence and reasoning in the rest of the network isn't going to undo that. It's like asking people whether "the dress" is blue/black or white/gold: people will just insist on what they see, even if what they're seeing is wrong.
LLMs are getting a lot better at understanding our world by its standard rules. As they do so, maybe they lose something in the way of interpreting non-standard rules, a.k.a. creativity.
LLMs are fancy “lorem ipsum based on a keyword” text generators. They can never become intelligent … or learn how to count or do math without the help of tools.
It can probably generate a story about a 5 legged dog though.
Video Game asset and source control retention was _terrible_. Hell, it's still terrible.
Prior to ~2010 we were simply deleting source code and assets for finished projects; either because they weren't owned by the developer due to a publishing deal, or because the developers didn't want to reuse their garbage code. Same follows for assets, often they were owned by the publisher and not the developer, but if the developer did happen to own them they'd rarely see reuse in future projects. And publishers didn't catch on to the value of data retention until remakes started to make serious money.
Wild culture! At almost[1] every (non-game) software company I've ever worked at, the source code was sacrosanct. If nothing else in the company was backed up, controlled, audited, and kept precious, at least the source code was. The idea of just casually deleting stuff because you think you're done sounds crazy to me as a software practitioner.
I still have backed up copies of the full source code of personal projects that I wrote 25 years ago. These will probably never be deleted until I'm dead.
1: One company I worked for didn't have a clue about managing their source code, and didn't even use source control. They were a hardware manufacturer that just didn't understand or care about software at all. Not what I'd think of when I think a professional game developer.
6 total and they spanned from 2000 to 2001. Just 1 year.
That was fairly typical at the time. It wasn't uncommon for a game publisher to patch their games; it was uncommon for that patching to happen too far from the initial release. After all, they wanted their game devs working on something other than the old release. The patches were strictly just a goodwill thing to make sure the game kept selling.
The more you look around, the more commonly you'll start seeing things like this. The RS3/OSRS split itself happened because Jagex recovered their lost source code and was suddenly able to do it.
RuneScape was made to be botted to begin with, as a boon to gold-selling chads. The whole thing was compromised since day 1; many a mouth was fed off of based anglo chad's fantasy game. We needed another money source.
"recovered" the source code this was the mob's code to begin with britboys
> How the hell could they LOSE the source code to that game? All copies of it.
I wrote a streaming video platform in the very early 2000s. It worked great, if you were on ISDN, or at my house with a whopping 256kbps cable modem! All lovingly hand-crafted in PHP3 with a Postgres backend. Lots of, I want to say, ffmpeg, but it might have been shelling out to mencoder back then.
Gone.
Along with probably a couple of hundred hours of footage both unedited and raw camera captures, of various training videos for the oil industry, Scottish Women's Football League matches - they were very forward-thinking and because no TV channel would show their games they wanted to post the match highlights on their website, so RealPlayer to the rescue I guess. All gone.
I didn't own the servers, the company I worked for did. When the company went tits up, they wanted to make sure that none of "their IP" was leaving the organisation, so I wiped stuff off my personal machines and handed over all the camera and master tapes.
The servers got wiped for sale and the tapes went in a skip. They'd paid a fucking fortune for all of that, but ultimately when they decided they'd had enough of that venture the hardware went for scrap prices and the soft assets were wiped, not really worth anything.
Who would want to post on a website where you could upload and share videos, upvote or downvote them, comment on them, and tell all your friends?
I worked for a company that built a really advanced TV DVR software stack, commissioned by a well-known Linux distro company; it could have been amazing. It was capable of handling combinations of TV playback and recording that would make any current solution envious. But then said distro company decided they didn't want to get into the TV OS business, so they stopped the project when it was 75% complete.
Our company retained the right to use the source code. We pushed it, but some circumstances and some assholes stood in the way. The business started to struggle; we considered open-sourcing it, but the contract was complex and it would have been difficult to prepare the code to be open sourced. We didn't have the time and money to do that, and said distro company didn't want to pay us to do it.
Eventually the company was bought by some Russian company, the team was laid off, and the code was forgotten about; it likely just sits, illegitimately, on a handful of ex-staff drives.
I feel it was a loss for the world that a huge effort never saw the light of day.
Nobody said every copy was lost, they said the copies in whatever repository Westwood handed over to EA were lost. There might still be a copy on one of the individuals involved in development's machines/backups/etc.
Back in the day there were not a lot of copies to start with. No laptops, no BYOD, no cloud servers. A developer making a copy would have involved buying an expensive (for the time) large drive and sneaking it into work to steal the code - not worth the risk. The few hard drives containing the code were archived in a room after release and forgotten.
Part of it, I imagine, is because Westwood made the game and then got bought up and shut down under EA. Asset tracking would be a mess.
The other part of it is that most studios didn't imagine a use for old games in the future, so they weren't archived properly. World of Warcraft's original source code was mostly lost, and that game sold incredibly well and the company stayed in business. More modern studios are thinking more about remasters, remakes, and archiving their work now, so it's mostly a problem with older titles.
The industry's treatment of its works was pretty horrible back in the day. Not even 25 years earlier, developers had to fight to be credited in games. Lessons take a while to learn, apparently.
The source code to some masterpieces of 1990s software (such as Impression for RISC OS) was left to rot on the hard disk of a machine in the basement of the country mansion where it was created.