
This is my most common experience with Gemini. Ask it to do something, and it'll tell you how you can do it yourself and then stop.


Given that Gemini seems to have frequent availability issues, I wonder if this is a strategy to offload low-hanging fruit (from a human-effort pov) to the user. If it is, I think that's still kinda impressive.


Somehow I like this. I hate that current LLMs act like yes-men; you can't trust them to give unbiased results. If it told me my approach is stupid, and why, I would appreciate it.


I just asked ChatGPT to help me design a house where the walls are made of fleas and it told me the idea is not going to work, and also has ethical concerns.


I tried it with a Gemini personality that uses this kind of pushback, and since that kind of prompt strongly encourages it to provide a working answer, it decided that the fleas were a metaphor for botnet clients and the walls were my network, all so it could give an actionable answer.

I inadvertently made a stronger yes-man.



