Great starting point! These diagrams notably miss a LLM firewall layer, which is critical in practice to safe LLM adoption. Source: We work with thousands of users for logicloop.com/ai
These common issues tend to prevent LLMs from being used in the wild:
* Data Leakage
* Hallucination
* Prompt Injection
* Toxicity
So yes it does include prompt injection, but is a bit broader. Data Leakage is one that several customers have called out, aka accidentally leakage PII to underlying models when asking them questions about your data.
I'm evaluating tools like Private AI, Arthur AI etc. but they're all fairly nascent.
I’m a researcher in the space exploring few ideas with the intention of starting up. Would love to reach out to you and talk to you. Is there a way I can contact you?
Yup, that's part of it but I mean it bidirectionally - users can accidentally leak data to models too, which is concerning to SecOps teams without a way to monitor / auto-redact.