I see the value here. The problem isn't just search; it's trust. The biggest hurdle for Zenode won't be the tech, but convincing an engineer that your AI's summary of a footnote is accurate enough to risk a $10,000 board spin. That's a high bar.
I'd argue the core value isn't just a better search or a faster reader; it's providing a verified, reliable source of truth. That brings up a key tension: you say the AI isn't yet at your co-founder's level of accuracy, but isn't that precisely the level of confidence required to replace an engineer's manual check? How do you close that gap? You've got the data, but trust is a different threshold entirely.
In other words, maybe you've built a tool that makes the process faster, but the real win would be a tool that makes it safer? The killer feature might not be more speed, but a confidence score on every AI-generated fact, with a clear path to the source document so an engineer can verify it. It's not about avoiding the document entirely; it's about having a better starting point and knowing exactly what to double-check.
Agreed - trust is the key! That's why every answer comes with source citations that link to the exact location in the datasheet or part document where the AI found it. We're working hard to make the answers trustworthy, but we know most engineers 'trust but verify'. A (transparent) confidence score is a great idea for building trust in both the answer and its sources.
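To make that concrete, here's a rough sketch of the kind of cited answer we're aiming for, with the proposed confidence score bolted on (field names, the part, and all values are illustrative, not our actual schema or data):

```python
# Illustrative sketch only - field names, part, and values are hypothetical,
# not Zenode's real schema or an actual datasheet extraction.
from dataclasses import dataclass

@dataclass
class SourceCitation:
    document: str   # e.g. "Example buck converter datasheet, rev. A"
    page: int       # page the value was pulled from
    section: str    # e.g. "Absolute Maximum Ratings"
    excerpt: str    # verbatim text the answer was extracted from

@dataclass
class Answer:
    question: str
    value: str
    citation: SourceCitation  # the "link to the exact location" part
    confidence: float         # proposed 0-1 score: tells the engineer what to double-check first

example = Answer(
    question="What is the maximum junction temperature?",
    value="150 °C",
    citation=SourceCitation(
        document="Example buck converter datasheet, rev. A",
        page=4,
        section="Absolute Maximum Ratings",
        excerpt="Junction temperature, TJ ........ 150 °C",
    ),
    confidence=0.92,
)
```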
To close the gap, we've built our own Q/A datasets and are training custom AI models to search and read a datasheet the way a new engineer has to learn to early on. We're concentrating on teaching the AI to separate key information from noise in an electrical-engineering context (e.g. 'Voltage' in the Absolute Maximum Ratings table vs. the Recommended Operating Conditions table) and to learn where information is likely to be found in a datasheet or app note.
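For a feel of what one of those Q/A examples covers, here's a hypothetical record (made-up part and values, just to show why section context matters; not an entry from our actual dataset):

```python
# Hypothetical Q/A training record - made-up part and values, shown only to
# illustrate why section context matters; not from the actual dataset.
qa_example = {
    "part": "example 3.3 V LDO regulator",
    "question": "What is the maximum input voltage?",
    # Same word 'Voltage', two very different numbers depending on the section:
    "candidates": [
        {"section": "Absolute Maximum Ratings",          "value": "6.5 V"},  # survival limit
        {"section": "Recommended Operating Conditions",  "value": "5.5 V"},  # design-to limit
    ],
    # The model learns to pick the candidate whose section matches the intent of
    # the question ('maximum' here means the absolute-max rating, not the
    # recommended operating limit).
    "label": {"section": "Absolute Maximum Ratings", "value": "6.5 V"},
}
```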