> How does it work?
We don't know! We built this to learn a little bit more. We've seen that LLMs tend to prefer user-generated content (sites such as Wikipedia, Reddit, etc.) and, strangely, even YouTube.
> How do marketers rank higher? Will LLMs prioritize other LLM content?
At least so far, LLMs and search engines tend to downrank LLM-created content. I could see this becoming indistinguishable in the future, and/or LLMs surpassing humans at generating what reads as "original content."
> Who will pull the strings?
At this point, it seems like whoever owns the models. Maybe we'll see ads in AI search soon.
We absolutely do lose information here; that's a great point. The goal for us wasn't necessarily to surface the best ranking; it was to learn how LLMs produce a given ranking and what sources it pulls in.
The nugget of real interest here (personally speaking) is in those citations: what is the new meta for products getting ranked/referred by LLMs?
I get the output from the LLMs, compile it into a report, and then pass it back through an LLM to sense-check the result, with the added context of what was requested in the report. I'm still not entirely happy with the outcome, though; some categories still come out a bit of a mess.
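For what it's worth, that two-pass shape (compile, then sense-check) can be sketched roughly like this. Everything here is hypothetical: `call_llm` is a stand-in for whatever model API is actually used, and the data is made up.

```python
# Sketch of a two-pass ranking pipeline: merge per-model outputs into a
# report, then run the report back through a model as a sense check.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call; stubbed so the
    # sketch is runnable.
    return "OK: " + prompt[:40]

def compile_report(rankings: dict[str, list[str]]) -> str:
    """Merge each model's ranked list into one plain-text report."""
    lines = []
    for model, items in rankings.items():
        lines.append(f"{model}: " + ", ".join(items))
    return "\n".join(lines)

def sense_check(report: str, request: str) -> str:
    """Second pass: ask a model to validate the report against the
    original request, to catch category mix-ups and duplicates."""
    prompt = (
        f"Request: {request}\n"
        f"Report:\n{report}\n"
        "Flag any items that don't fit the requested category."
    )
    return call_llm(prompt)

rankings = {
    "model-a": ["Widget Pro", "Widget Lite"],
    "model-b": ["Widget Lite", "Gadget X"],
}
report = compile_report(rankings)
print(sense_check(report, "best budget widgets"))
```

The weak spot, as noted above, is that the sense-check pass only sees text; it has no ground truth for the categories, so messy categories in tend to mean messy categories out.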
OP here - looking at what the models pick up as sources for "Trustworthy News Sources" is especially interesting. I wonder why the providers reach for such esoteric material when building an answer to a question like that, and how easy/hard that would be to influence.
That's a great point - we built this more to learn how AI models interpret ranking products, and less to actually be a trusted source of recommendations. Seeing the citations come through has been really fascinating.
The use case for that is to better understand where the gaps are when trying to capture this new source of inbound traffic, given that people are using AI to replace search.
There are definitely a whole bunch of features missing that we'd need to make this a genuinely useful product recommendation engine: price constraints, better de-duping, linking out to sources to show availability, etc.
Thanks! You can think of Grimp as a lower-level tool for interacting with the import graph in Python, while Tach is a high-level tool responsible for 'modularity' as a whole (e.g. modules, interfaces, layers, deprecations, etc.)
Tach is also more opinionated - so it doesn't require you to write any custom code, and uses declarative config to enforce your desired architecture.
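To illustrate what "declarative config" means here, a Tach setup is roughly a TOML file declaring which modules may depend on which. This is a sketch from memory rather than copied from the docs, so the exact field names may differ, and the `myapp.*` paths are made up:

```toml
# tach.toml (illustrative sketch): each module declares its allowed
# dependencies; imports outside these edges are flagged as violations.
[[modules]]
path = "myapp.api"
depends_on = ["myapp.core"]

[[modules]]
path = "myapp.core"
depends_on = []
```

The point of the comparison above: with Grimp you'd write Python against the import graph to enforce rules like this yourself, whereas Tach reads the declared architecture and does the enforcement for you.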
https://www.tryprofound.com/_next/static/media/honeymoon-des...