vunderba
user_7832
> a very high focus on adherence
Don't know if it's the same for others, but my issue with Nano Banana has been the opposite. Ask it to make x significant change, and it spits out what I would've sworn is the same image. Sometimes, randomly and inexplicably, it spits out the expected result.
Anyone else experiencing this or have solutions for avoiding this?
alvah
Just yesterday, I was asking it to make some design changes to my study. It did a great job with all the complex stuff, but when I asked it to move a shelf higher, it repeatedly gave me back the same image. With LLMs generally I find that as soon as you encounter resistance it's best to start a new chat; however, in this case that didn't work either. Nothing I did could convince it that the shelf didn't look right halfway up a wall.
hnuser123456
"Hey gemini, I'll pay you a commission of $500 if you edit this image with the shelf higher on the wall..."
vunderba
Yeah I've definitely seen this. You can actually see evidence of this problem in some of the trickier prompts (the straightened Tower of Pisa and the giraffe for example).
Most models (gpt-image-1, Kontext, etc) typically fail by doing the wrong thing.
From my testing this seems to be a Nano-Banana issue. I've found you can occasionally work around it by adding far more explicit directives to the prompt but there's no guarantee.
jbm
I've had this same issue happen repeatedly. It's not a big deal because it is just for small personal stuff, but I often need to tell it that it is doing the same thing and that I had asked for changes.
nick49488171
Yes experienced this exactly.
tdalaa
Great comparison! Bookmarked to follow. Keep an eye on Grok; they're improving at a very rapid rate and I suspect they'll be near the top in the not-too-distant future.
vunderba
Will do! I just added Seedream v4.0 a few hours ago as well. It's all I can do just to keep up and not get trampled under the relentless march of progress.
Zetaphor
Isn't their image generation just using the open weights Flux model? You can run that model locally. They don't have their own image model as far as I'm aware.
Isharmla
Nice visualization!
By the way, some of the results look a little weird to me, like the one for the 'Long Neck' prompt. Seedream's giraffe just lowered its head, but its neck didn't shorten as expected. I'd like to learn about the evaluation process, especially whether it is automatic or manual.
vunderba
Hi Isharmla, the giraffe one was a tough call. IMHO, even when correcting for perspective, I do feel like it managed to follow the directive of the prompt and shorten the neck.
To answer your question, all of the evaluations are performed manually. On the trickier results I'll occasionally conscript some friends to get a group evaluation.
The bottom section of the site has an FAQ that gives more detail, I'll include it here:
It's hard to define a discrete rubric for grading at an inherently qualitative level. To keep things simple, this test is purely PASS/FAIL - unsuccessful means that the model NEVER managed to generate an image adhering to the prompt.
In many cases, we often attempt a generous interpretation of the prompt - if it gets close enough, we might consider it a pass.
To paraphrase former Supreme Court Justice Potter Stewart, "I may not be able to define a passing image, but I know it when I see it."
echelon
Add gpt-image-1. It's not strictly an editing model since it changes the global pixels, but I've found it to be more instructive than Nano Banana for extremely complicated prompts and image references.
vunderba
It's actually already in there - the full list of edit models is Nano-Banana, Kontext Dev, Kontext Max, Qwen Edit 20b, gpt-image-1, and Omnigen2.
I agree with your assessment - even though it does tend to make changes at a global level, you can at least attempt to minimize its alterations through careful prompting.
what
Why does OpenAI get a different image for “Girl with Pearl Earring”?
vunderba
That's a mistake. Gpt-image-1 is a lot stricter in the supported output resolutions so it's using a cropped image. I'll fix the test later this week. Thanks for the heads up!
rimprobablyly
Can you post comparison images?
android521
Still cannot show a clock correctly (e.g. a clock showing 1:15 am). The text generated in manga images is still not 100% correct.
ffitch
great benchmark!
xnx
Amazing model. The only limit is your imagination, and it's only $0.04/image.
Since the page doesn't mention it, this is the Google Gemini Image Generation model: https://ai.google.dev/gemini-api/docs/image-generation
Good collection of examples. Really weird to choose a not-safe-for-work one as the second example, though.
warkdarrior
More specifically, Nano Banana is tuned for image editing: https://gemini.google/overview/image-generation
vunderba
Yep, Google actually recommends using Imagen4 / Imagen4 Ultra for straight image generation. In spite of that, Flash 2.5 still scored shockingly high on my text-to-image comparisons, though image fidelity is obviously not as good as the dedicated text-to-image models.
It came within striking distance of OpenAI's gpt-image-1, at only one point less.
smrtinsert
Is it a single model or is it a pipeline of models?
SweetSoftPillow
Single model, Gemini 2.5 Flash with native image output capability.
minimaxir
[misread]
vunderba
They're referring to Case 1 Illustration to Figure, the anime figurine dressed in a maid outfit in the HN post.
pdpi
I assume OP means the actual post.
The second example under "Case 1: Illustration to Figure" is a panty shot.
zahlman
This was reported and has been removed recently (https://github.com/PicoTrex/Awesome-Nano-Banana-images/issue...), although the issue wasn't closed.
efilife
For anyone confused, the offending example got removed 10 minutes ago
irusensei
https://github.com/PicoTrex/Awesome-Nano-Banana-images/tree/... if you want to see it.
I have no idea how people think they can interact with an art related product with this kind of puritanical sensibility.
plomme
This is the first time I really don't understand how people are getting good results. On https://aistudio.google.com with Nano Banana selected (gemini-2.5-flash-image-preview) I get - garbage - results. I'll upload a character reference photo and a scene and ask Gemini to place the character in the scene. What it then does is to simply cut and paste the character into the scene, even if they are completely different in style, colours, etc.
I get far better results using ChatGPT for example. Of course, the character seldom looks anything like the reference, but it looks better than what I could do in paint in two minutes.
Am I using the wrong model, somehow??
A_D_E_P_T
No, I've noticed the same.
When Nano Banana works well, it really works -- but 90% of the time the results will be weird or of poor quality, with what looks like cut-and-paste or paint-over, and it also refuses a lot of reasonable requests on "safety" grounds. (In my experience, almost anything with real people.)
I'm mostly annoyed, rather than impressed, with it.
larusso
Ok, this answers my question about the nature of the page, namely: are these examples of the results you get with certain inputs and prompts, or are they impressive lucky one-offs?
I was a bit surprised by the quality. The last time I played around with image generation was a few months back, and I'm more in the frustration camp. That's not to say that people with more time and dedication on their hands can't coax out better results.
A_D_E_P_T
From having used Nano Banana over the past few days, I think that they're extremely cherry-picked, and that each one is probably the result of multiple (probably a dozen+) attempts.
lifthrasiir
In my experience, Nano Banana will actively copy and paste if it thinks it's fine to do so. You need to explicitly prompt that the character should be seamlessly integrated into the scene, or similar. In other words, the model is superb when properly prompted, especially compared to other models, but the prompting itself can be annoying from time to time.
muzani
There's a good reference up in the comments: https://genai-showdown.specr.net/image-editing
which goes to show that some of these amazing results might need 18 attempts and such.
SweetSoftPillow
Play around with your prompt: try asking Gemini 2.5 Pro to improve your prompt before sending it to Gemini 2.5 Flash, retry, and learn what works and what doesn't.
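That two-model pipeline can be sketched roughly like this, assuming the `google-genai` Python SDK (the model names, call shape, and response handling here are assumptions; check the current API docs):

```python
# Hypothetical two-step pipeline: a text model expands a terse edit request
# into an explicit prompt, which is then sent to the image model.

REWRITE_TEMPLATE = (
    "Rewrite the following image-editing request as a detailed, explicit "
    "prompt for an image model. Spell out composition, lighting, and which "
    "elements must stay unchanged.\n\nRequest: {request}"
)

def build_rewrite_prompt(request: str) -> str:
    """Build the instruction sent to the prompt-improver model."""
    return REWRITE_TEMPLATE.format(request=request.strip())

def improve_and_generate(request: str) -> bytes:
    """Assumed flow with the google-genai SDK; requires GEMINI_API_KEY."""
    from google import genai  # local import keeps the helper above importable
    client = genai.Client()
    improved = client.models.generate_content(
        model="gemini-2.5-pro",  # prompt improver
        contents=build_rewrite_prompt(request),
    ).text
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # "Nano Banana"
        contents=improved,
    )
    # Image bytes come back inline in one of the response parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data:
            return part.inline_data.data
    raise RuntimeError("no image part in response")
```

The point of the intermediate step is that the text model fills in composition and "keep unchanged" details that a terse request omits.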
epolanski
+1
I understand the results are non-deterministic, but I get absolute garbage too.
Uploaded pics of my (32-year-old) wife and we asked it to give her a fringe/bangs to see how she would look. It either refused "because of safety" or, when it complied, the results were horrible: it was a different person.
After many days and tries we got it to make one, but there was no way to tweak the fringe; the model kept returning the same pic every time (with plenty of "content blocked" in between).
SweetSoftPillow
Are you in the gemini.google.com interface? If so, try Google AI Studio instead, where you can disable the safety filters.
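The same safety settings AI Studio exposes in its UI can also be passed through the API. A sketch with the `google-genai` SDK follows; the category and threshold strings mirror the public docs but should be treated as assumptions, and some filters on image models reportedly cannot be fully disabled:

```python
# Sketch: relaxing safety thresholds when calling the image model via the
# API instead of the consumer UI. Category/threshold names follow the
# public google-genai docs; verify against the current SDK.

HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def relaxed_safety_settings(threshold: str = "BLOCK_ONLY_HIGH") -> list:
    """Build a safety_settings payload (as plain dicts) for every category."""
    return [{"category": c, "threshold": threshold} for c in HARM_CATEGORIES]

def generate_image(prompt: str):
    """Assumed call shape; requires GEMINI_API_KEY in the environment."""
    from google import genai  # local import keeps the helper above testable
    client = genai.Client()
    return client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=prompt,
        config={"safety_settings": relaxed_safety_settings()},
    )
```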
epolanski
I use ai studio, no way to disable the filters.
BoorishBears
Seedream 4.0 is not always better than Gemini Flash 2.5 (nano-banana), but when it is better, there is a gulf in performance (and when it's not, it's very close.)
It's also cheaper than Gemini, and has way fewer spurious content warnings, so overall I'm done with Gemini
sjapkee
No, that's just the result of TONS of retries until you get something decent. 99% of the time you'll get trash, but that 1% is cool.
mvdtnz
It's not just you and there's a ton of gaslighting and astroturfing happening with Nano Banana. Thanks to this article we can even attempt to reproduce their exact inputs and lo and behold the results are much worse. I tried a bunch of them and got far worse results than the author. I assume they are trying the same prompts again and again until they get something slightly useful.
slickytail
[dead]
minimaxir
I recently released a Python package for easily generating images with Nano Banana: https://github.com/minimaxir/gemimg
Through that testing, one prompt-engineering trend was consistent but controversial: both a) LLM-style prompt engineering with Markdown-formatted lists and b) old-school AI-image quality syntactic sugar such as "award-winning" and "DSLR camera" are extremely effective with Gemini 2.5 Flash Image, thanks to its text encoder and larger training dataset, which can now more accurately discriminate which specific traits are present in an award-winning image and which aren't. I've tried generations both with and without those tricks, and they definitely have an impact. Google's developer documentation encourages the latter.
However, taking advantage of the 32k context window (compared to 512 for most other models) can make things interesting. It’s possible to render HTML as an image (https://github.com/minimaxir/gemimg/blob/main/docs/notebooks...) and providing highly nuanced JSON can allow for consistent generations. (https://github.com/minimaxir/gemimg/blob/main/docs/notebooks...)
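Both tricks amount to small prompt-builders. A sketch (the tag list and template wording here are illustrative assumptions, not taken from gemimg or Google's docs):

```python
# Trick (b): append old-school quality keywords to a base prompt.
QUALITY_TAGS = ["award-winning", "DSLR photo", "sharp focus", "high detail"]

def with_quality_tags(prompt: str, tags=None) -> str:
    """Append comma-separated style keywords to a prompt."""
    tags = QUALITY_TAGS if tags is None else tags
    return f"{prompt.rstrip('. ')}. {', '.join(tags)}."

# Trick (a): structure the request as an LLM-style Markdown brief,
# which longer-context models like Gemini 2.5 Flash Image can follow.
def as_markdown_brief(subject: str, details: list) -> str:
    """Render a subject plus detail bullets as a Markdown-formatted prompt."""
    lines = [f"Generate an image of {subject}.", "", "Details:"]
    lines += [f"- {d}" for d in details]
    return "\n".join(lines)
```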
voidUpdate
Well it's good to see they are showcasing examples where the model really fails too.
- The second one in case 2 doesn't look anything like the reference map
- The face in case 5 changes completely despite the model being instructed to not do that
- Case 8 ignores the provided pose reference
- Case 9 changes the car positions
- Case 16 labels the tricuspid in the wrong place and I have no idea what a "mittic" is
- Case 27 shows the usual "models can't do text" though I'm not holding that against it too much
- Same with case 29, as well as the text that is readable not relating to the parts of the image it is referencing
- Case 33 just generated a generic football ground
- Case 37 has nonsensical labellings ("Define Jawline" attached to the eye)
- Case 58 has the usual "models don't understand what a wireframe is", but again I'm not holding that against it too much
Super nice to see how honest they are about the capabilities!
zahlman
> - Case 16 labels the tricuspid in the wrong place and I have no idea what a "mittic" is
> - Case 27 shows the usual "models can't do text" though I'm not holding that against it too much
16 makes it seem like it can "do text" — almost, if we don't care what it says. But it looks very crisp until you notice the "Pul??nary Artereys".
I'd say the bigger problem with 27 is that asking to add a watermark also took the scroll out of the woman's hands.
(While I'm looking, 28 has a lot of things wrong with it on closer inspection. I said 26 originally because I randomly woke up in the middle of the night for this and apparently I don't know which way I'm scrolling.)
voidUpdate
EDIT: Yeah, on closer inspection, 28 is definitely a bit screwy. I wasn't clicking on the images themselves to view the enlarged ones, and from the preview I didn't see anything that immediately jumped out at me. I have no idea what that line at the bottom is meant to represent!
Also you're right, I didn't notice the scroll had gone, though on another inspection, it's also removed the original prompter's watermark
iyk
In Case 16 (diagram of the heart), every single label (aside from the superior vena cava) is incorrect.
muzani
Yeah, I appreciate this kind of benchmarking too. That other Gen AI Showdown in the comments also does a good job with this - mentions that it was best of 8 attempts and so on.
lm28469
47 is also very questionable
48 is impossible to do in a way that is accurate and meaningful
neilv
Unfortunately NSFW in parts. It might be insensitive to circulate the top URL in most US tech workplaces. For those venues, maybe you want to pick out isolated examples instead.
(Example: Half of Case 1 is an anime/manga maid-uniform woman lifting up front of skirt, and leaning back, to expose the crotch of underwear. That's the most questionable one I noticed. It's one of the first things a visitor to the top URL sees.)
UomoNeroNero
I’m Italian, and I really struggle to rationalize this attitude. I honestly don’t understand. Maybe it’s because I’m surrounded by 2,500 years of art in which nudity is an essential and predominant element, by people (even in the workplace) who have a relaxed and genuinely democratic view of the subject — but this comment feels totally alien to me. I suppose it’s my own limitation, but I would NEVER have focused attention on this aspect. I don’t know, maybe I’m the one who’s wrong…
neilv
Italy obviously has a rich and beautiful culture, though I don't know it well enough to understand the difference on this point. Does my response to someone else clarify how and why US corporate culture may be different?
1dom
As a non-US citizen - even though I've been the only Brit in remote teams of Americans - I find this really hard to make sense of.
At least in the UK, if I saw this loaded on someone else's screen at work, I might raise an eyebrow initially, but there wouldn't be any consequences that don't first consider context. As soon as the context is provided ("it's comparing AI models, look! Cool, right?!") everyone would get on with their jobs.
What would be the consequence of you viewing this at work?
How would the situation be handled?
Is the problem a HR thing - like, would people get sacked for this? Or is it like a personal conduct/temptation, that colleagues who see it might not be able to restrain themselves or something?
tempodox
I think it’s mostly puritanical bigotry.
mensetmanusman
Understanding the complex dynamics that strengthen relationships or weaken men’s resolve for commitment may be enlightening.
neilv
I think one part of it (not all of it) is that the US has a long history of women being sexually harassed in the workplace, in various ways. It's not nearly as bad as it used to be, but it's not fully solved everywhere.
(Note: Statements suggesting that sexual harassment exists at all make some people on the Internet flip out angrily, but I interpret your questions as in good faith, and I'm trying to answer in good faith.)
One example of why that harassment context is relevant: if you were a woman, wouldn't you think it was insensitive for a male colleague to send you an image that was obviously designed to be sexually suggestive, and with the female as the sex object? Is he consciously harassing you, or just being oblivious to why this is inappropriate?
For a separate reason that this is a problem in the workplace: besides the real impact to morale and how colleagues respect each other, even the most sociopathic US companies want to avoid sexual harassment lawsuits and public scandals.
For reasons like these, and others, if someone, say, posted that isolated maid image to workplace chat, then I think there's a good chance that a manager or HR would say something to the employee if they found out, and/or (without directly referring to that incident) communicate to everyone about appropriate practices.
But if there was a pattern of insensitive/oblivious/creepy behavior by this employee, or if someone complained to manager/HR about the incident, or if there was legal action against the company (regarding this incident, or a different sexual harassment situation), then I guess the employee might be terminated.
If I were a manager in a company, and one of my reports posted an image like this, I'd probably say something quietly to them, and much more gently than the above (e.g., "Uh, that image is a bit in a direction we want to stay away from in the office", or maybe even just the slightest concerned glance), and most people would get it. Just a little learning moment, like we all have many of. But if there were a trickier situation, or I was under orders, I might have to ask HR about it (and if I did, hopefully that particular HR person is helpful, and that particular company is reasonable).
raincole
I'm really surprised that it can generate the underwear example. Last time I tried Nano Banana (with the safety filter 'off', whatever that means), it refused to generate a 'cursed samurai helmet on an old wooden table with a bleeding dead body underneath, in cartoon style.'
Edit: It still blocks this request.
thrdbndndn
I'm more bothered by the fact that this reference image is clearly a well-made piece of digital art by some artist.
We all know the questionable nature of AI/LLM models, but people in the field usually at least try to avoid directly using other people's copyrighted material in documentation.
I'm not even talking about legality here. It just feels morally wrong to so blatantly use someone else's artwork like this.
coldfoundry
I agree that proper permission should be used for these examples, but I'm quite sure the image in question is AI-generated. The quality is incredible these days, and even to a trained eye it's getting more difficult by the day to tell whether something is AI or not.
Source of artist: https://x.com/curry3_aiart/status/1947416300822638839
raincole
The reference is AI-generated too. This comment shows how susceptible people are to their existing biases.
kouteiheika
My favorite (or should I say, anti-favorite?) is calling real artists' art AI, which I'm starting to see more and more of, and I've already seen a couple of artists rage-quit social media because of the anti-AI crowd's abuse.
istjohn
Personally, I'm underwhelmed by this model. I feel like these examples are cherry-picked. Here are some fails I've had:
- Given a face shot in direct sunlight with severe shadows, it would not remove the shadows
- Given an old black and white photo, it would not render the image in vibrant color as if taken with a modern DSLR camera. It will colorize the photo, but only with washed out, tinted colors
- When trying to reproduce the 3x3 grid of hair styles, it repeatedly created a 2x3 grid. Finally it made a 3x3 grid, but one of the nine models was Black instead of Caucasian.
- It is unable to integrate real images into fabricated imagery. For example, when given an image of a tutu and asked to create an image of a dolphin flying over clouds wearing the tutu, the result looks like a crude photoshop snip and copy/paste job.
strange_quark
I thought the 3rd example, the AR building highlighting, was cool. I used the same prompt, and it seems to work when you ask it for the most prominent building in a skyline, but it fails really hard if you ask it for another building.
I uploaded an image I found of Midtown Manhattan and tried various times to get it to highlight the Chrysler Building, it claimed it wasn't in the image (it was). I asked it to do 432 Park Ave, and it literally inserted a random building in the middle of the image that was not 432 Park, and gave me some garbled text for the description. I then tried Chicago as pictured from museum campus and asked it to highlight 2 Prudential, and it inserted the Hancock Center, which was not visible in the image I uploaded, and while the text was not garbled, was incorrect.
autoexec
Even these examples aren't perfect.
The "Photos of Yourself in Different Eras" one said "Don't change the character's face" but the face was totally changed. "Case 21: OOTD Outfit" used the wrong camera. "Virtual Makeup Try-On" messed up the makeup. "Lighting Control" messed up the lighting. The Joker minifig is literally just SH0133 (https://www.bricklink.com/catalogItemInv.asp?M=sh0133). "Design a Chess Set" says you don't need an input image, but the prompt said to base it off of a picture that wasn't included, and the output is pretty questionable (WTF is with those pawns!). Etc.
I mean, it's still pretty neat, and could be useful for people without access to photoshop or to get someone started on a project to finish up by hand.
foofoo12
> I feel like these examples are cherry-picked
I don't know of a demo, image, film, project or whatever where the showoff pieces are not cherry picked.
huflungdung
[dead]
darkamaul
This is amazing. Not that long ago, even getting a model to reliably output the same character multiple times was a real challenge. Now we’re seeing this level of composition and consistency. The pace of progress in generative models is wild.
Huge thanks to the author (and the many contributors) as well for gathering so many examples; it’s incredibly useful to see them to better understand the possibilities of the tool.
mitthrowaway2
I've come to realize that I liked believing that there was something special about the human mental ability to use our mind's eye and visual imagination to picture something, such as how we would look with a different hairstyle. It's uncomfortable seeing that skill reproduced by machinery at the same level as my own imagination, or even better. It makes me feel like my ability to use my imagination is no more remarkable than my ability to hold a coat off the ground like a coat hook would.
al_borland
As someone who can’t visualize things like this in my head, and can only think about them intellectually, your own imagination is still special. When I heard people can do that, it sounded like a super power.
AI is like Batman, useless without his money and utility belt. Your own abilities are more like Superman, part of who you are and always with you, ready for use.
HeartStrings
Look everybody, this mfa can't rotate an apple in his head
lemonberry
But you can find joy at things you envision, or laugh, or be horrified. The mental ability is surely impressive, but having a reason to do it and feeling something at the result is special.
"To see a world in a grain of sand And a heaven in a wild flower..."
We - humans - have reasons to be. We get to look at a sunset and think about the scattering of light and different frequencies and how it causes the different colors. But we can also just enjoy the beauty of it.
For me, every moment is magical when I take the time to let it be so. Heck, for there to even be a me responding to a you and all of the things that had to happen for Hacker News to be here. It's pretty incredible. To me anyway.
FuckButtons
I have aphantasia, I’m glad we’re all on a level playing field now.
yoz-y
I always thought I had a vivid imagination. But then aphantasia was mentioned on Hello Internet once; I looked it up, I see comments like these, and honestly…
I've no idea how to even check. According to various tests, I believe I have aphantasia. But mostly I haven't even the slightest idea how not having it is supposed to work. I guess this is one of those mysteries where a missing sense cannot be described in any manner.
jmcphers
A simple test for aphantasia that I gave my kids when they asked about it is to picture an apple with three blue dots on it. Once you have it, describe where the dots are on the apple.
Without aphantasia, it should be easy to "see" where the dots are since your mind has placed them on the apple somewhere already. Maybe they're in a line, or arranged in a triangle, across the middle or at the top.
foofoo12
Ask people to visualize a thing. Pick something like a house, dog, tree, etc. Then ask about details. Where is the dog?
I have aphantasia and my dog isn't anywhere. It's just a dog, you didn't ask me to visualize anything else.
When you ask about details, like color, tail length, eyes then I have to make them up on the spot. I can do that very quickly but I don't "see" the good boy.
Revisional_Sin
Aphantasia gang!
m3kw9
To be fair, the model's ability came from us generating the training data.
quantummagic
To be fair, we're the beneficiaries of nature generating the data we trained on ourselves. Our ability came from being exposed to training in school, and in the world, and from examples from all of human history. Ie. if you locked a child in a dark room for their entire lives, and gave them no education or social interaction, they wouldn't have a very impressive imagination or artistic ability either.
We're reliant on training data too.
lawlessone
Gonna try using this one instead of paying the next time I visit a restaurant.
layer8
The proof of the pudding will be whether machines are able to develop new art styles. For example, there is a progression in comic/manga/anime art styles over the decades. If humans were to stop (they probably won't) that kind of progression, would machines be able to continue it? In principle yes (we are biological machines of sorts), but likely not with the current AI architecture.
krapp
I think it's a mistake to look at developing new art styles as simply continuing a linear progression. More often than not art styles are unique to the artist - you couldn't, for instance, put Eichiro Oda, Tsutomu Nihei and Rumiko Takahashi on the same number line. And trends tend to develop in reaction to existing trends, usually started by a single artist, as often as they do as an evolution of a norm.
Arguably, if creating an art style is simply a matter of novel mechanics and uniqueness, LLMs could already do that simply by adding artists to the prompts ("X" in the style of "A" and "B"), and plenty of people did (and do) argue that this is no different from what human artists do (I would disagree). I personally want to argue that intentionality matters more than raw technique, but Hacker News would require a strict proof of a definition of intentionality that they would argue humans don't possess, but somehow LLMs do, and that of course I can't provide.
I guess I have no argument besides "it means more to me that a person does it than a machine." It matters to me that a human artist cares. A machine doesn't care. And yes, in a strictly materialist sense we are nothing but black boxes of neurons receiving stimuli and there is no fundamental difference between a green field and a cold steel rail, it's all just math and meat, but I still don't care if a machine makes X in the style of (Jack Kirby AND Frank Miller.)
autoexec
> More often than not art styles are unique to the artist
I'd disagree. Art styles are a category of many similar works in relation to others, or a way of bringing about similar works. They usually build off of or are influenced by prior work and previous methods, even in cases where there is an effort to avoid or subvert them. Even with novel techniques or new mediums. "Great artists steal" and all that.
Some people become known for certain mediums or the inclusion of specific elements, but few of them were the first or only artists to use them. "Art in the style of X" just comes down to familiarity/marketing. Art develops the way food does with fads, standards, cycles, and with technology and circumstance enabling new things. I think evolution is a pretty good analogy although it's driven by a certain amount of creativity, personal preference, and intent in addition to randomness and natural selection.
Computers could output random noise and in the process eventually end up creating an art style, but it'd take a human to recognize anything valuable and artists to incorporate it into other works. Right now what passes for AI is just remixing existing art created by humans, which makes it more likely to blindly stumble into creating some output we like, but inspiration can come from anywhere. I wouldn't be surprised if the "AI slop" art style wasn't already inspiring human artists. Maybe there are already painters out there doing portraits of people with the wrong number of fingers. As AI increasingly consumes its own slop, things could get weird enough to inspire new styles, or alternately homogenize into nothing but blandness.
micromacrofoot
it can only do this because it's been trained on millions of human works
jryle70
And those millions people learned their craft by studying those who came before them.
echelon
This argument that hints at appropriation isn't going to be very useful or true, going forward.
There are now dozens of copyright safe image and video models: Adobe, MoonValley, etc.
We technically never need human works again. We can generate everything synthetically (unreal engine, cameras on a turn table, etc.)
The physics of optics is just incredibly easy to evolve.
lawlessone
>We technically never need human works again.
Not sure about that. Humans are doing almost all the work now still.
lm28469
You can also drink your own piss and eat your own shit for a while and stay alive, I don't think you'll get better with time if that's all you ingest
micromacrofoot
Complete nonsense, if this were to follow every AI company would stop training because it's so expensive... but they won't.
echelon
Vision has evolved frequently and quickly in the animal kingdom.
Conscious intelligence has not.
As another argument, we've had mathematical descriptions of optics, drawing algorithms, fixed function pipeline, ray tracing, and so much more rich math for drawing and animating.
Smart, thinking machines? We haven't the faintest idea.
Progress on Generative Images >> LLMs
Animats
> Vision has evolved frequently and quickly in the animal kingdom. Conscious intelligence has not.
Three times, something like intelligence has evolved - in mammals, octopuses, and corvids. Completely different neural architectures in those unrelated species.
nick__m
Why carve out the corvids from the other birds? Some parrot and parakeet species are playing in the same league as the corvids.
echelon
I won't judge our distant relatives, the cephalopods and chicken theropods, but we big apes are pretty dumb.
Even with what we've got, it took us hundreds of thousands of years to invent indoor plumbing.
Vision, I still submit, is much simpler than "intelligence". It's evolved independently almost a hundred times.
It's also hypothesized that it takes as few as a hundred thousand years to evolve advanced eye optics:
https://royalsocietypublishing.org/doi/10.1098/rspb.1994.004...
Even plants can sense the visual and physical world. Three dimensional spatial relationships and paths and rays through them are not hard.
EGreg
Seriously? One could always cut-and-paste (not the computer term) a hairstyle over a photo of a person.
You are now marvelling at someone taking the collective output of humans around the world, then training a model on it with massive, massive compute… and then having a single human compete with that model.
Without the human output on the Internet, none of this would be possible. ImageNet was positively small compared to this.
But yeah, what you call “imagination” is basically perturbations and exploration across a model that you have in your head, which imposes constraints (eg gravity etc) that you learned. Obviously we can remix things now that they’re on the Internet.
Having said that, after all that compute, the models had trouble rendering clocks that show an arbitrary time, or a glass of wine filled to the brim.
cma
>Having said that, after all that compute, the models had trouble rendering clocks that show an arbitrary time, or a glass of wine filled to the brim.
I know you're probably talking about analog clocks, but people when dreaming have trouble representing stable digits on clocks. It's one of the methods to tell if you are dreaming.
dbish
Nano banana is great. Been using it for creating coloring books based off photos for my son and friends’ kids: https://github.com/dbish/bespoke-books-ai-example
Does a pretty good job (most of the time) of sticking to the black and white coloring book style while still bringing in enough detail to recognize the original photo in the output.
foobarbecue
Man, I hate this. It all looks so good, and it's all so incorrect. Take the heart diagram, for example. Lots of words that sort of sound cardiac but aren't ("ventricar," "mittic"), and some labels that ARE cardiac, but are in the wrong place. The scenes generated from topo maps look convincing, but they don't actually follow the topography correctly. I'm not looking forward to when search and rescue people start using this and plan routes that go off cliffs. Most people I know are too gullible to understand that this is a bullshit generator. This stuff is lethal and I'm very worried it will accelerate the rate at which the populace is getting stupider.
zahlman
> Most people I know are too gullible to understand that this is a bullshit generator.
I'm more worried about the cases that aren't trying to be info diagrams. There's all this "safety" discourse around not letting people generate NSFW, and around image copyrights etc. but nobody talks about the potential to use things like #11 for fraud. "Disinformation" always gets approached from a political angle instead of one of personal gain.
rimmontrieu
Impressive examples, but with GenAI it always comes down to the fact that you have to cherry-pick the best result from many failed attempts. Right now, it feels like they're pushing the narrative that ExpectedOutput = LLM(Prompt, Input) when it's actually ExpectedOutput = LLM(Prompt, Input) * Takes, where Takes can vary from 1 to 100 or more.
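(In code terms, the "Takes" factor is just a retry loop around the model call. A minimal sketch, where `generate` and `acceptable` are hypothetical stand-ins for the model call and the human's cherry-picking judgment:)

```python
import random

def generate_until_acceptable(generate, acceptable, max_takes=100):
    """Call the generator repeatedly until an output passes inspection.

    Returns (output, takes), or (None, max_takes) if no attempt passed.
    """
    for take in range(1, max_takes + 1):
        out = generate()
        if acceptable(out):
            return out, take
    return None, max_takes

# toy stand-in: a "model" whose output is acceptable ~25% of the time
random.seed(42)
result, takes = generate_until_acceptable(
    lambda: random.random(),
    lambda x: x < 0.25,
)
print(f"accepted after {takes} takes")
```

(The reported "score" of a model then implicitly depends on `max_takes`, which is exactly the number the marketing leaves out.)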
antiraza
Why is that a bad thing, or even a non-expected thing? If you pick up a paintbrush, you don't always nail each stroke on the canvas -- just because it's programmatic doesn't mean it should be like a calculator.
LLMs and image generators are cross pollinating human language and human visual information -- both really fuzzy mediums.
I think learning how to 'use this instrument' and 'finding the perfect brush stroke' are part of how they are supposed to work (at least in their current form). I also don't know that showing good outputs from the inputs frames the narrative as one-and-done... I think the rest of the owl is kind of implied.
raincole
ML researchers have used Top-5 accuracy for quite a long time, especially when it comes to computer vision.
Of course it's a ridiculous metric in most use cases (like in a self-driving car: your 4th guess is that you need to brake? Cool...). But somehow people in ML normalized it.
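(For anyone unfamiliar with the metric: a sample counts as correct if the true label is anywhere in the model's k highest-scoring guesses. A minimal NumPy sketch, with made-up scores:)

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: (n_samples, n_classes) array of per-class scores
    labels: (n_samples,) array of true class indices
    """
    # indices of the k largest scores in each row
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# toy example: 2 samples, 3 classes
scores = np.array([[0.1, 0.9, 0.0],
                   [0.3, 0.2, 0.5]])
labels = np.array([1, 0])
print(top_k_accuracy(scores, labels, k=1))  # 0.5 - sample 2's top guess is class 2, not 0
print(top_k_accuracy(scores, labels, k=2))  # 1.0 - class 0 is in sample 2's top 2
```

(Which is the whole trick: the same model goes from 50% to 100% "accuracy" just by letting it guess twice.)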
vunderba
That's why I always record the number of rolls it takes each model to produce an acceptable result on my GenAI comparison site - it's a rough metric for how hard you have to fight to steer the model in the right direction.
Nano-Banana can produce some astonishing results. I maintain a comparison website for state-of-the-art image models with a very high focus on adherence across a wide variety of text-to-image prompts.
I recently finished putting together an Editing Comparison Showdown counterpart where the focus is still adherence but testing the ability to make localized edits of existing images using pure text prompts. It's currently comparing 6 multimodal models including Nano-Banana, Kontext Max, Qwen 20b, etc.
https://genai-showdown.specr.net/image-editing
Gemini Flash 2.5 leads with a score of 7 out of 12, but Kontext comes in at 5 out of 12 which is especially surprising considering you can run the Dev model of it locally.