Hacker News | deshraj's comments

Congratulations, Emir and Waleed! This is exactly the kind of OSS tooling I’ve been waiting for. I’ve spent countless hours wrestling with multi-step agent workflows hidden inside monolithic prompts, and every iteration felt like shooting in the dark. Having a drag-and-drop, executable graph with built-in branching, loops, and observability is a game-changer.


Thank you! Check out the mem0 integration and let us know if you like the form factor. Excited to hear whether the platform helps you wrangle those multi-step agent workflows.


100% agree. I have seen similar issues with both the quality and performance of the ChatGPT Memory feature.

Shameless plug: we have been working on this long-term memory problem for LLMs at Mem0. GitHub: https://github.com/mem0ai/mem0


Yes, you can run Mem0 locally since it is open source, but it would take some more work to get a server up and running that Claude can interact with. GitHub: https://github.com/mem0ai/mem0


I think you misunderstood what the parent commenter meant. I believe they were talking about running the AI locally, like with llamacpp or koboldcpp or vllm.

I checked your documentation and the only way I can find to run mem0 is with a hosted model. You can use the OpenAI API, which many local backends support, but I don't see a way to point it at localhost. You would need an intermediary service to intercept OpenAI API calls and reroute them to a local backend, unless I am missing something.
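For what it's worth, many local backends (llama.cpp's server, vLLM, Ollama's `/v1` endpoint) expose an OpenAI-compatible API, so in principle only the base URL needs to change. A minimal sketch, assuming a local server at `localhost:8000`; the URL and model name are placeholders:

```python
# Sketch: pointing an OpenAI-style chat request at a local backend.
# Assumes an OpenAI-compatible server (llama.cpp server, vLLM, or
# Ollama's /v1 endpoint) listens at BASE_URL; URL and model name
# below are placeholders, not a confirmed mem0 configuration.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # swap in your local backend


def build_chat_request(model: str, messages: list) -> urllib.request.Request:
    """Build the POST request; actually sending it needs a running server."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer not-needed-locally",
        },
    )


req = build_chat_request("local-model", [{"role": "user", "content": "hi"}])
# urllib.request.urlopen(req)  # uncomment with a local server running
```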


Ah, I see. We do support running Mem0 locally with Ollama. You can check out our docs here: https://docs.mem0.ai/examples/mem0-with-ollama
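A sketch of what such a config might look like, following the pattern in the linked docs; the provider and model names here are illustrative and may need adjusting to your local setup:

```python
# Sketch: a Mem0 config pointing both the LLM and the embedder at a
# locally running Ollama instance. Model names are illustrative.
config = {
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3.1", "temperature": 0},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
}

# With mem0 installed and Ollama running locally, roughly:
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("I prefer window seats", user_id="alice")
```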


It only supports Chrome for now. I built it quickly, in a few hours, to solve my own problem. Happy to accept contributions to the repository if someone builds support for other browsers.


Thanks, I misunderstood; I thought it was a commercial product. I appreciate your effort.


You were right; per "Built using ...", it's a commercial project. Must be hard to get such things off the ground.


Thanks for the question. Here's how Mem0 differs from ChatGPT memory:

1. LLM Compatibility: Mem0 works with various AI providers (OpenAI, Anthropic, Groq, etc.), while ChatGPT memory is tied to OpenAI's models only.

2. Target Audience: Mem0 is built for developers creating AI applications, whereas ChatGPT memory is for ChatGPT users.

3. Quality and Performance: Our evaluations show Mem0 outperforms ChatGPT memory in several areas:

    - Consistency: Mem0 updates memories more reliably across sessions and instances.

    - Reliability: ChatGPT memory can give different results for the same prompts, while Mem0 aims for more predictable behavior.

    - Speed: Mem0 typically creates memories in about 2 seconds, compared to the 30-40 seconds ChatGPT can take to reflect new memories.

4. Flexibility: Mem0 offers more customization options for developers, allowing better integration into various AI applications.

These differences make Mem0 a better choice for developers building AI apps that need efficient memory capabilities.


We already support inclusion and exclusion of memories, so developers can control what their AI app/agent remembers versus forgets. For example, you can specify something like this:

- Inclusion prompt: User's travel preferences and food choices
- Exclusion prompt: Credit card details, passport number, SSN, etc.

That said, we definitely think there is scope to make it better, and we are actively working on it. Please let us know if you have feedback/suggestions. Thanks!
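A rough sketch of what passing that guidance alongside a memory-add call might look like. The `includes`/`excludes` parameter names below are assumptions based on the description above, not a confirmed Mem0 API:

```python
# Sketch: attaching inclusion/exclusion prompts to a memory-add payload.
# Parameter names ("includes"/"excludes") are hypothetical, inferred from
# the feature description; check the Mem0 docs for the real interface.
includes = "User's travel preferences and food choices"
excludes = "Credit card details, passport number, SSN"

payload = {
    "messages": [{"role": "user", "content": "I love Thai food."}],
    "user_id": "alice",
    "includes": includes,  # what the agent should remember
    "excludes": excludes,  # what it must never store
}
# client.add(**payload)  # with a configured Mem0 client
```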


An exclusion... prompt? Do you just rely on the LLM to follow instructions perfectly?


Thanks yding! Definitely agree with the feedback here. We have seen similar things when talking to developers, who want:

- Control over what to remember/forget
- The ability to set how detailed memories should be (some want more detail, some less)
- Different memory structures depending on the use case


Thanks for the feedback! Yes, we are definitely planning to add support for other graph datastores including Memgraph and others.


Does the structure of the data and the query patterns required demand a graph store for acceptable performance? Would a Postgres-based triplestore and recursive CTEs suck badly?


Yes, it won't scale well. I used Postgres exactly the way you describe at a past job, and it stopped scaling past a certain point.
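For context, the pattern under discussion looks roughly like this: triples in a plain table, traversed with a recursive CTE. A minimal sketch using sqlite3 as a lightweight stand-in for Postgres (the `WITH RECURSIVE` syntax is nearly identical); the table and column names are illustrative, not Mem0's schema:

```python
# Sketch: a relational triplestore traversed with a recursive CTE.
# sqlite3 stands in for Postgres here; schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
conn.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [
        ("alice", "knows", "bob"),
        ("bob", "knows", "carol"),
        ("carol", "knows", "dave"),
        ("alice", "likes", "sushi"),
    ],
)

# Walk the 'knows' edges transitively from a starting node; UNION (not
# UNION ALL) dedupes rows, so cycles cannot loop forever.
rows = conn.execute(
    """
    WITH RECURSIVE reachable(node) AS (
        SELECT ?
        UNION
        SELECT t.object
        FROM triples t JOIN reachable r ON t.subject = r.node
        WHERE t.predicate = 'knows'
    )
    SELECT node FROM reachable
    """,
    ("alice",),
).fetchall()
print([r[0] for r in rows])  # every node reachable from alice via 'knows'
```

This works fine at small scale; the scaling complaint above is about deep traversals over large triple tables, where each recursion level is another self-join.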


Hey, Deshraj from the Mem0 team. Right now you can't yet change the "user" you are chatting as, but we can definitely make that happen. Will ship this update later today. :)


Really useful. It's amazing to see the list of apps you have on the website!

