
Great starting point! These diagrams notably miss an LLM firewall layer, which in practice is critical for safe LLM adoption. Source: we work with thousands of users at logicloop.com/ai


What do you mean by firewall layer? What tools do you use here?


These common issues tend to prevent LLMs from being used in the wild:

* Data Leakage
* Hallucination
* Prompt Injection
* Toxicity

So yes, it does include prompt injection, but it's a bit broader. Data Leakage is one that several customers have called out, i.e., accidentally leaking PII to underlying models when asking them questions about your data.

I'm evaluating tools like Private AI, Arthur AI etc. but they're all fairly nascent.


I'm a researcher in the space exploring a few ideas with the intention of starting up. I'd love to reach out and talk to you. Is there a way I can contact you?

My email is beady.chap-0f@icloud.com


I imagine he's talking about preventing prompt injection (or making shit up)


Yup, that's part of it but I mean it bidirectionally - users can accidentally leak data to models too, which is concerning to SecOps teams without a way to monitor / auto-redact.
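To make the monitor / auto-redact idea concrete, here's a minimal sketch of the kind of pre-filter such a firewall layer might run on outbound prompts. It's purely illustrative: the regex patterns and placeholder labels are my own assumptions, and a real product would use much more sophisticated detection than regexes.

```python
import re

# Hypothetical PII patterns for a prompt pre-filter. Real firewalls
# would use NER models and broader coverage; these are illustrative.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected PII with typed placeholders before the
    prompt is forwarded to the underlying model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309"))
# → Contact [EMAIL] or [PHONE]
```

The typed placeholders also give SecOps a log of *what kind* of data users tried to send, without retaining the data itself.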


That doesn't seem like the type of problem that can be solved with a drop-in solution.


I think we can detect at least a few things, like PII leaks. Don't you think those things alone are valuable?


No but that won't stop them from making a startup to sell you some snake oil that doesn't work!



