
A good AI shouldn't need to know any more about the global map than a human player would in the same situation. If they made it see the whole map, it's basically cheating.


In general with strategy game AIs it’s not feasible to make them play at a decent human level, so the way they’re made challenging to play against is by cheating in various ways. Resource boosts, global vision, etc.

In particular, humans are very good at reasoning from limited information. We can form hypotheses about where resources or objectives might be, or, if an enemy unit goes out of the visible map, estimate where it could be on a later turn, or what its presence indicates about its home civilisation’s disposition out of sight. That sort of thing is extremely hard to program, so the only way to compensate for the AI’s inability to intuit information is to actually give it the information.


>That sort of thing is extremely hard to program

Which is why the parent of the entire chain mentioned the advances in AI.


Why would we expect the recent advances in AI to be applicable to this problem?


You could chat to the AI, and the AI responses can be parsed to trigger in-game actions (e.g. declare war or offer a trade).
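A minimal sketch of that idea, assuming a simple keyword-matching approach (the phrase patterns and action names here are invented for illustration):

```python
import re

# Hypothetical mapping from phrases in the model's chat reply to game actions.
ACTION_PATTERNS = {
    r"\bdeclare war\b": "DECLARE_WAR",
    r"\boffer (a )?trade\b": "OFFER_TRADE",
    r"\bpropose peace\b": "PROPOSE_PEACE",
}

def parse_actions(reply: str) -> list[str]:
    """Scan a chat reply for phrases that should trigger in-game actions."""
    reply = reply.lower()
    return [action for pattern, action in ACTION_PATTERNS.items()
            if re.search(pattern, reply)]

print(parse_actions("I hereby declare war on your civilization!"))
# -> ['DECLARE_WAR']
```

In practice you would probably want the model to emit structured output (e.g. JSON) rather than free text, but the parsing layer plays the same role either way.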


Bing, you are playing Civilization as a Nuclear Gandhi.


While I can't prove it wouldn't work, it seems doubtful. How would the LLM be made aware of the game state?
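One conceivable answer is to serialize the player-visible game state into text and prepend it to each prompt. A rough sketch, where every field name and the state layout are invented for illustration:

```python
# Hypothetical visible-state dict; a real game would expose far more fields.
def state_to_prompt(state: dict) -> str:
    """Render the player-visible game state as plain text for an LLM prompt."""
    lines = ["You are playing a strategy game. Current visible state:"]
    lines.append(f"Turn: {state['turn']}")
    for city in state["visible_cities"]:
        lines.append(f"City {city['name']} (owner: {city['owner']}, pop: {city['pop']})")
    for unit in state["visible_units"]:
        lines.append(f"Unit {unit['type']} at {unit['pos']}")
    return "\n".join(lines)

example = {
    "turn": 42,
    "visible_cities": [{"name": "Delhi", "owner": "Gandhi", "pop": 7}],
    "visible_units": [{"type": "Warrior", "pos": (3, 5)}],
}
print(state_to_prompt(example))
```

Whether a model could actually plan well from such a serialization is exactly the open question raised above.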


Meta made an AI to play Diplomacy that can talk to other players while playing the game. It doesn't use an LLM, but it was able to win a tournament against human players.

https://ai.facebook.com/blog/cicero-ai-negotiates-persuades-...


I suppose what I mean is, in the context of a computer game where you’re playing a computer, I don’t think it makes sense to talk about cheating. The way the game works is the rules. There may be different rules for the human and AI players, but the expectation that they are the same is an assumption ported over from board games. It’s not really a thing in native computer games. For example nobody expects the computer opponents in a FPS to obey the same rules as the human player. So I don’t think cheating is really an applicable term.


My point is that it's kinda weird to claim that we had AI that was so good at opportunistically playing "like a human" by e.g. picking on poorly defended cities that human players hated it, but then admit that its proficiency is at least in part because it knows the whole map - that, by definition, is not "like a human".


The means is not human-like, but the behaviour might well be. A human might conduct a search with scout units and intuit city locations from observed units and such to identify and target cities. Doing that in software might be infeasible, so you give the model full information and maybe program in a delay based on the distance to enemy cities before targeting them. The implementation is different, but the behaviour ends up being hard for human players to distinguish.
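The "full information plus an artificial delay" idea can be sketched roughly as follows; the distance metric, the scouting-speed factor, and all names are illustrative assumptions, not any real game's logic:

```python
# The AI knows every city location, but is only allowed to act on a city
# after a delay proportional to its distance -- mimicking the time a human
# would need to scout it out.
def turns_until_targetable(ai_pos, city_pos, turns_per_tile=2):
    """Chebyshev (grid) distance times an assumed scouting-speed factor."""
    dist = max(abs(ai_pos[0] - city_pos[0]), abs(ai_pos[1] - city_pos[1]))
    return dist * turns_per_tile

def targetable_cities(ai_pos, cities, current_turn):
    """Cities the AI may act on this turn, as if it had scouted them."""
    return [c for c in cities
            if current_turn >= turns_until_targetable(ai_pos, c["pos"])]

cities = [{"name": "near", "pos": (2, 2)}, {"name": "far", "pos": (20, 20)}]
print([c["name"] for c in targetable_cities((0, 0), cities, current_turn=10)])
# -> ['near']
```

To a human opponent, the nearby city "falling first" looks like the AI scouted outward, even though under the hood it knew the whole map from turn one.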


Yes, recent advances in Poker and Diplomacy show that this is possible.


Yes, but the problem persists: a truly competent AI will make good guesses about where and when to strike into your hidden territory based on what’s revealed to it, using the same advances that give it the computational room to respect fog of war.



