Sorry to be the typical pessimistic HN commenter (e.g., "Dropbox is just FTP"), but this seems ambitious enough to remind me of https://en.wikipedia.org/wiki/Cyc.
Even Wikidata today is already far more usable and scalable than Cyc. The latter always seemed like a largely pointless proof of concept; Wikidata, by contrast, is very clearly something that can contain real info and be queried in useful ways. (Of course, knowledge is not always consistently represented, but that issue is inherent to any general-purpose knowledge base - and Wikidata does at least try to address it, if only by leveraging the well-known principle that many eyes make all bugs shallow.)
It is well known that Wikidata does not scale, whether in terms of the number of data contributions or the number of queries. Not only that, but the current infrastructure is... not great. WBStack [0] tries to tackle that, but it is still much more difficult to enter the party than it could be. Changes API? None. That means it is not possible to keep track of changes in your own Wikidata/Wikibase instance enriched with some domain-specific knowledge. Change-request mechanism? Not even on the roadmap. Nor is it possible to query the history of changes over the triples.
The Wikidata GUI can be attractive and easy to use. Still, there is a big gap between the GUI and the actual RDF dump; making sense of the RDF dump is a big endeavor. Who else wants to remember properties by number? It might be a problem of tooling. Question: how do you add a new type of object to the GUI? PHP? Sorry.
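To make the properties-by-number complaint concrete: even a trivial query against the public query service speaks entirely in opaque identifiers. A minimal sketch in Python (the P/Q numbers are real Wikidata IDs; the script itself is just illustrative):

    import requests

    # Everything in Wikidata's RDF model is addressed by opaque IDs:
    # P31 is "instance of", Q146 is "house cat". Nothing in the query
    # itself tells you that; you have to look the numbers up.
    query = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 5
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "example-script/0.1"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], row["itemLabel"]["value"])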
> Nor is it possible to query the history of changes over the triples.
And why should it be? The triples (and hence the full RDF dump as well) are a "lossy" translation of the actual information encoded in the graph. (There are actually two different translations: the "truthy" triples, which throw away large parts of the data, and the full dump, which reifies the full statements but is therefore much more verbose.) Revision history for the _actual_ items has been queryable via the MediaWiki API for a long time.
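For reference, a minimal sketch of such a history query in Python (Q42 is just an example item; these are standard MediaWiki revision-query parameters):

    import requests

    # Fetch the last few revisions of item Q42 via the standard
    # MediaWiki API, which Wikibase inherits.
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "titles": "Q42",
            "rvprop": "ids|timestamp|user|comment",
            "rvlimit": 5,
            "format": "json",
        },
        headers={"User-Agent": "example-script/0.1"},
    )
    page = next(iter(resp.json()["query"]["pages"].values()))
    for rev in page["revisions"]:
        print(rev["timestamp"], rev["user"], rev.get("comment", ""))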
Agreed. "[Starting in 1982,] by 2017 [Lenat] and his team had spent about 2,000 person-years building Cyc, approximately 24 million rules and assertions (not counting "facts")." https://en.wikipedia.org/wiki/Douglas_Lenat
Because Cyc is not seen as having been successful, comparing a new project to it implies that Abstract Wikipedia won't be successful either. And, of course, every new approach in a discipline fails, until sometimes it starts succeeding.
Cyc got hyped for a while in the early 90s. It became apparent, however, that rule-based systems weren't going to play as big a role as ML in the future of AI research. Cyc still exists, but the company is really secretive and hasn't released anything viable in years.
[edit: I wasn't alive back then, so most of what I know comes from the Wikipedia article and a recent HN thread: https://news.ycombinator.com/item?id=21781597 . My view of Cyc probably comes across as slightly negative. Cycorp's approach seems to have evolved since then, and they seem to be creating some really interesting stuff.]
I'm glad to see you're pulling from the CNCF landscape, where we have 1,400 individually curated SVGs. There are another few thousand across the other Linux Foundation landscapes referenced at https://landscapes.dev/.
While collecting over a thousand SVGs for https://landscape.cncf.io, I found that the SVGs often needed cropping and other optimization. So (with a colleague) I wrote and open-sourced svg-autocrop, and made it available at https://autocrop.cncf.io.