What is the order of magnitude of the largest document store you can practically serve from SQLite on a single thousand-dollar server running some text-heavy business process? For text search, roughly how big a corpus can we practically search if we're occupying, let's say, five seconds per query at twelve queries per minute?
If you held a gun to my head and forced me to guess, I'd say you could push that approach to the order of 100K, maybe 1M documents.
If SQLite had a generic "strictly ascending sequence of integers" type[1] and optimized around it, you could probably push it further in terms of implementing efficient inverted indexes.
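For concreteness, a hand-rolled inverted index in stock SQLite might look like the sketch below (the table, column names, and two-term query are made up for illustration). The composite primary key keeps each term's posting list in ascending docid order inside the B-tree, but the ids are still stored as full integers rather than the delta-compressed sequences a dedicated ascending-integer type could enable.

    import sqlite3

    con = sqlite3.connect("index.db")
    con.executescript("""
        -- one row per (term, document) pair; the composite primary key keeps
        -- each term's posting list stored as an ascending run of doc ids
        CREATE TABLE IF NOT EXISTS postings (
            term  TEXT    NOT NULL,
            docid INTEGER NOT NULL,
            PRIMARY KEY (term, docid)
        ) WITHOUT ROWID;
    """)

    # AND-query: intersect the posting lists of two terms
    hits = con.execute("""
        SELECT a.docid
        FROM postings AS a
        JOIN postings AS b ON b.docid = a.docid
        WHERE a.term = ? AND b.term = ?
        ORDER BY a.docid
    """, ("sqlite", "search")).fetchall()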
In my experience, SQLite's FTS5 is orders of magnitude more performant than that, e.g. for 100K documents, 7 queries/second on some of the cheapest 1 vCPU virtual machines (see the sketch below).
But it is true that a specialized search engine using a more clever algorithm might be another order of magnitude faster.
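For reference, a minimal FTS5 setup via Python's bundled sqlite3 module looks roughly like this, assuming FTS5 is compiled in (as it is in most builds); the table and column names are placeholders:

    import sqlite3

    con = sqlite3.connect("docs.db")
    # FTS5 builds and maintains the inverted index for you
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(title, body)")
    con.execute("INSERT INTO docs (title, body) VALUES (?, ?)",
                ("hello", "full-text search in sqlite"))
    con.commit()

    # MATCH queries the index; bm25() scores are lower for better matches
    for title, score in con.execute(
            "SELECT title, bm25(docs) FROM docs "
            "WHERE docs MATCH ? ORDER BY bm25(docs) LIMIT 10",
            ("search",)):
        print(title, score)

The index lives in ordinary SQLite tables inside the same database file, which is why this stays fast without running any external search service.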