One important reason for the success of AlphaGo and its successors is that the game environment is a closed domain with a stable reward function. With that, we can guide the agent through MCTS search and planning to find the best move in every state.
However, no such reward system is available for LLMs in an open-domain setting.
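To make the point concrete: MCTS can only plan because every simulated game ends in a well-defined reward that flows back up the tree. Below is a generic sketch of the UCB1 selection rule that vanilla MCTS uses (AlphaGo itself uses a PUCT variant with policy-network priors, but the structure is the same; `select` and its tuple layout here are illustrative, not any particular library's API):

```python
import math

def ucb1(total_reward, visits, parent_visits, c=1.4):
    """UCB1 score: average reward (exploitation) plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are always tried first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children, parent_visits):
    """children: list of (total_reward, visits) pairs; returns index of the best child."""
    scores = [ucb1(r, v, parent_visits) for r, v in children]
    return scores.index(max(scores))
```

The whole scheme collapses without a stable `total_reward`: in an open-domain LLM setting there is no equivalent scalar to back up the tree, which is exactly the gap the parent comment describes.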
The company I work for tries to bolt AI/LLMs onto everything. Instead of improving or fixing the underlying problems, they just add the magic AI and everything is “perfect” now.
As an ML engineer and AI developer, I don’t see the real value at all, not to mention the added cost of running an LLM.
Going down the tangent of people working in the industry...
I unwittingly fell into low-level coding for DL software stacks about 7 years ago.
At first I was merely uninterested in the topic, compared to my teammates.
Now I think there's a serious possibility that LLMs and other new DL capabilities will be a net negative for society. I'm actively trying to find other work.
I know that if I don't do the work, others gladly will, but the status quo sears my conscience.
I looked at the supposed “research” article; there’s nothing to read except a few charts showing off the “improvements” over current models, with no discussion of the training method or dataset whatsoever.
If I remember correctly, the last decent research paper the company published was probably the InstructGPT paper.
Anyone know how to use GraphRAG to build a knowledge graph over a large collection of private documents, where some may have complex structure (tables, links to other docs) and the content or terms in one document may relate to other documents as well?
This is exactly why we're working on the GraphRAG-SDK to ease the process.
You might want to check out https://github.com/FalkorDB/GraphRAG-SDK/ — we'd love to hear your feedback.
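For anyone who wants to see the shape of the problem before picking a tool: the cross-document linking part of the question boils down to extracting references from each document and recording them as graph edges. A minimal sketch of that step, assuming a hypothetical `[[doc:<id>]]` cross-reference syntax (this is NOT the GraphRAG-SDK's API — real pipelines would use an LLM or parser to extract entities and relations):

```python
import re
from collections import defaultdict

def extract_links(text):
    # Hypothetical cross-reference syntax: [[doc:<id>]].
    # A real extractor would handle tables, anchors, shared terms, etc.
    return re.findall(r"\[\[doc:(\w+)\]\]", text)

# Toy private-document collection keyed by document id.
docs = {
    "a": "Policy overview; the fee table is in [[doc:b]].",
    "b": "Fee table; terms are defined in [[doc:a]].",
}

# Adjacency map: doc_id -> set of documents it references.
graph = defaultdict(set)
for doc_id, text in docs.items():
    for target in extract_links(text):
        graph[doc_id].add(target)
```

Once every document's outgoing references are edges, "content in one document related to another" becomes a graph traversal at query time, which is the part an SDK like the one linked above automates.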