Hacker News | raincole's comments

The image generation side of the story is the prophecy.

I can confidently say that being able to prompt and train LoRAs for Stable Diffusion makes zero difference for your ability to prompt Nano Banana.


And most artists using the tools are still training LoRAs for Flux, Qwen, ZIT/ZIB, etc. Nano Banana is a useful tool, but not for the best work.

This is irrelevant to the point.

Using nano banana does not require arcane prompt engineering.

People who have not learnt image prompt engineering probably didn't miss anything.

The irony of prompt engineering is that models are good at generating prompts.

Future tools will almost certainly simply "improve" your naive prompt before passing it to the model.

Claude already does this for code. I'd be amazed if Nano Banana doesn't.

People who invested in learning prompt engineering probably picked up useful skills for building AI tools, but not for using the next-gen AI tools other people make.

It's not wasted effort; it's just increasingly irrelevant to people doing day-to-day BAU work.

If the API prevents you from passing a raw prompt to the model, prompt engineering at that level isn't just unnecessary; it's irrelevant. Your prompt will be transformed into an unknown internal prompt before hitting the model.
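A minimal sketch of the rewriting layer being described. All names here (`rewrite_prompt`, `generate_image`) are hypothetical; a real tool would call an LLM for the rewrite step, but the point is only that the model receives something other than what you typed.

```python
def rewrite_prompt(naive_prompt: str) -> str:
    """Expand a terse user prompt into a detailed internal prompt.

    A real tool would call an LLM here; this stub just illustrates
    the transformation the user never sees.
    """
    return (
        f"{naive_prompt}, highly detailed, natural lighting, "
        "coherent composition, no text artifacts"
    )


def generate_image(user_prompt: str) -> str:
    # The internal prompt, not the user's raw prompt, hits the model.
    internal_prompt = rewrite_prompt(user_prompt)
    return internal_prompt


print(generate_image("a cat on a roof"))
```

Whatever skill went into hand-crafting the raw prompt is invisible past this layer, which is the argument above.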


> Claude already does this for code. I'd be amazed if Nano Banana doesn't.

Nano Banana is actually a reasoning model, so yeah, it kinda does, but not in the way one might assume. If you use the API you can dump the text part, and it's usually huge (and therefore expensive, which is one drawback). It can even have an "imagery thinking" process...!


Every bit of improvement in AI ability will have a corresponding denial phrase. Some people still think AI can't generate the correct number of fingers today.

I hate it when someone unironically thinks asking an LLM how many letters are in a word is a good test.

It is a good test now, for reasoning models.

It was a terrible test for pure tokenized models, because the logit that carries the carry digit during summation has a decent chance of getting lost.

SOTA models should reason to generate a function that returns the count of a given character, evaluate the function with tests, and use it for the output.
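A sketch of the kind of function such a model might generate and self-test before answering, rather than counting tokens directly. The function name and tests are illustrative, not taken from any model's actual output.

```python
def count_char(word: str, char: str) -> int:
    """Count occurrences of a character in a word, case-insensitively."""
    return word.lower().count(char.lower())


# The quick self-tests a reasoning model would run before trusting
# the result, instead of eyeballing its own tokenization.
assert count_char("strawberry", "r") == 3
assert count_char("mississippi", "s") == 4

print(count_char("strawberry", "r"))
```

Delegating the count to code sidesteps tokenization entirely, which is why the test stopped being hard for models that can write and run code.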


Even if the site is archived on IA, AI companies will still do the same.

Nuclear power will win (obviously). Unless you're talking about nuclear weapon.

You have to think twice about whether you really want to cater to these 'legitimate users', then. In Steam's review section you can find people giving negative reviews just because the game uses Unity or Unreal. Should devs cater to them and develop their own in-house engine?

Maybe? Devs should weigh the feedback and decide what they think will best serve the project. Open source, especially, is always in conversation with the community of both users and developers.

> open source is, especially, always in conversation with the community of both users and developers

Not necessarily. SQLite doesn't take outside contributions, and seems not to care too much about external opinion (at least along certain dimensions). SQLite is also, coincidentally, a great piece of software.


At this point perhaps to not disclose AI usage is the right thing to do. Transparency only feeds the trolls, unfortunately.

I have been saying this for a few years at this point. Transparency can only exist when there is civility, and those without civility deserve no transparency[0].

[0] As a corollary, those with civility do deserve transparency. It's a tough situation.


The framing here is really weird. The volume of videos increasing isn't 'growth.' Videos are inventory for Youtube. They're only good when people (without adblocks!) actually watch them.

Growth in this context means a larger volume of new videos each year. So each year a single video is an exponentially smaller and smaller percentage of the total.

Yeah and the math doesn't check out.

For example, suppose in year N YouTube gets f(N) new videos, and let's assume f(N) = cN^2. That's a crazy rate of growth, far better than the real-world YouTube, which grew rather linearly.

But the count of "videos that are older than 5 years" still grows faster than that, because it would be cubic instead of quadratic. Unless growth is really exponential (it isn't), "videos that are older than 5 years" will always surpass "new videos this year" eventually.
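A toy check of that argument, with the scale constant c set to 1 for simplicity: new uploads grow quadratically, but the backlog of videos older than 5 years is a sum of quadratics, i.e. cubic, so it overtakes the yearly figure.

```python
def new_videos(year: int) -> int:
    """New videos uploaded in a given year: f(N) = N^2 (taking c = 1)."""
    return year ** 2


def older_than_5(year: int) -> int:
    """Total videos more than 5 years old: uploads from years 1..N-5."""
    return sum(new_videos(k) for k in range(1, year - 4))


for year in (6, 10, 20):
    print(year, new_videos(year), older_than_5(year))
```

By year 10 the backlog (55) is still below that year's new videos (100), but by year 20 it is 1240 against 400, and the gap only widens, since the cubic term dominates.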


Video sensors are continuously getting cheaper, better, and more prevalent over time. The trend is towards capturing all angles of everything, everywhere, at increasingly higher resolutions.

> Unless the it's really exponential (it isn't), "videos that are older than 5 years" will always surpass "new videos this year" eventually.

Such a weird strawman argument that you're making up. You've overthought this so much that you're missing the forest for the trees.


Yes. A video no one watches is a waste of storage.

Maybe not.

Maybe it could be used to train a neural network. Maybe it contains dirt on a teenager who might become a politician two decades from now. Maybe it contains an otherwise-lost historical event.


Or it just helps cement YouTube as the go-to place for uploading and sharing videos for almost any purpose, which has a long-term positive effect on user engagement and retention.

^ This.

I immediately flagged it. But it doesn't matter much. No one has skin in the game of commenting on HN anyway.

> Most users never give feedback, they just churn

Uh...

We're talking about the iPhone here. You can read complaints about the iPhone online all the time. When you have more than a billion users, lack of feedback is the least of your concerns.


> How is Anthropic, OpenAI and xAi

You mean Amazon, Microsoft and Tesla?

