Author of SymSpell here. Congrats on the launch of Lexiathan.
Unfortunately, the comparison of Lexiathan vs. SymSpell on your website regarding accuracy is misleading.
1. SymSpell has two parameters that control the maximum edit distance. Once you set both to 3, terms with an edit distance of 3 are also corrected accurately.
2. SymSpell comes with dictionaries in several sizes. Once you load the 500,000-term dictionary, the two remaining terms are corrected as well: https://github.com/wolfgarbe/SymSpell/blob/master/SymSpell.B...
SymSpell accurately corrects all of your examples if used properly with the correct parameters and dictionary.
Apart from that, your methodology is questionable: you compare correction accuracy by cherry-picking specific terms where your product seemingly performs better, without any statistical significance.
One would instead use large public corpora to measure the percentage of accurately corrected terms as well as the percentage of false positives.
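As a sketch of what such a corpus-based measurement could look like (the `correct` function and the word lists here are hypothetical stand-ins, not any real checker or corpus):

```python
def evaluate(correct, pairs, valid):
    """Corpus-style evaluation of a correction function.

    pairs: (misspelling, intended word) tuples.
    valid: correctly spelled words that should pass through unchanged.
    Returns (accuracy, false positive rate)."""
    accuracy = sum(correct(wrong) == right for wrong, right in pairs) / len(pairs)
    # A false positive is a valid word the checker "corrects" into something else.
    fp_rate = sum(correct(word) != word for word in valid) / len(valid)
    return accuracy, fp_rate

# Toy stand-in corrector, purely for illustration.
def toy_correct(term):
    return {"speling": "spelling", "korrect": "correct"}.get(term, term)

acc, fpr = evaluate(toy_correct,
                    [("speling", "spelling"), ("korrect", "correct"),
                     ("exampel", "example")],
                    ["search", "index"])
# acc == 2/3 (the third misspelling is left unfixed), fpr == 0.0
```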
Because SymSpell is open source, everyone can integrate it into their applications for free, modify the code, use dictionaries in various languages, or add terms to existing ones:
https://github.com/wolfgarbe/SymSpell
https://github.com/wolfgarbe/symspell_rs
I reported the results faithfully, and I believe they reflect the performance users would typically see running SymSpell in the browser with the default configuration. Had I increased the edit distance, the latency gap between Lexiathan and SymSpell would have been even larger, and arguably I would then have been gaming my metrics by not benchmarking SymSpell as it is configured by default.
Regarding dictionary size: the dictionary (as you can verify from the gist) was 82k words. I didn't specify the size of the dictionary I used for Lexiathan, but it was 106k words.
Lastly, three of the words in the benchmark have edit distances greater than three:
That's the reason why the default maximum edit distance of SymSpell is 2.
Now, all 6 of your 6 examples are chosen from that 1.1% margin not covered by edit distance 2, presenting an improbably high number of errors within a single word.
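For concreteness, the edit distance in question is the (Damerau-)Levenshtein distance. A minimal plain-Levenshtein sketch for illustration (SymSpell itself uses the Damerau variant, which additionally counts transpositions of adjacent characters):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]                   # distance from a[:i] to the empty string
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                    # delete ca
                           cur[j - 1] + 1,                 # insert cb
                           prev[j - 1] + (ca != cb)))      # substitute (or match)
        prev = cur
    return prev[-1]

# e.g. levenshtein("speling", "spelling") == 1
```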
The third-party SymSpell port by Justin Willaby, which you used for benchmarking, clearly states that you need to set both maxEditDistance and dictionaryEditDistance to a higher value if you want to correct larger edit distances, which you neither did nor mentioned. This has nothing to do with accuracy; it is a performance vs. maximum edit distance tradeoff one can choose according to the use case at hand.
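Without assuming anything about that port's actual code, the two parameters map onto the two sides of SymSpell's delete-only index: one bounds the deletes precomputed per dictionary word, the other bounds the deletes generated per query term, and a match requires the two delete sets to intersect, which is why both must be raised together. A minimal sketch of that scheme:

```python
from itertools import combinations

def deletes(word, max_dist):
    """All strings obtainable from `word` by deleting up to max_dist characters."""
    out = {word}
    for d in range(1, min(max_dist, len(word)) + 1):
        for keep in combinations(range(len(word)), len(word) - d):
            out.add("".join(word[i] for i in keep))
    return out

def build_index(dictionary, dictionary_edit_distance):
    """Index side: map every precomputed delete back to its dictionary words."""
    index = {}
    for word in dictionary:
        for key in deletes(word, dictionary_edit_distance):
            index.setdefault(key, set()).add(word)
    return index

def lookup(index, term, max_edit_distance):
    """Query side: candidates whose delete sets intersect the query's deletes.

    Real SymSpell then verifies each candidate with a true Damerau-Levenshtein
    computation and ranks by distance and word frequency."""
    candidates = set()
    for key in deletes(term, max_edit_distance):
        candidates |= index.get(key, set())
    return candidates

index = build_index(["spelling", "search"], 2)
# "speling" is one delete away from "spelling", so the two delete sets meet:
# lookup(index, "speling", 1) contains "spelling"
```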
The examples that I chose for my benchmark demonstrate that Lexiathan maintains accuracy and performance even on severely degraded input. On less corrupted input, Lexiathan runs significantly faster and is even more accurate.
Lexiathan also doesn't have any edit distance parameters to configure, so no tuning is required. It's also worth mentioning that using a very large dictionary, e.g. 500,000 words, often degrades accuracy rather than improving it, and likely increases memory usage and latency as well.
Regarding Norvig's 98.9% figure: it seems to come from Norvig's own made-up data. In the real world, users often produce misspellings exceeding an edit distance of 2 in many use cases (OCR, non-native speakers, medical/technical terminology, etc.), and published text (often already spell-checked) doesn't reflect the same level of errors. In any case, Norvig's spell checker apparently achieves only 67% accuracy on its own chosen benchmarks, so the 98.9% figure is clearly not a realistic reflection of actual spell-checker performance, even at edit distance 2. Lexiathan is extremely accurate and retains high performance even on heavily degraded input, and the benchmark data (and demo) I presented reflect that.
Can the index size exceed the RAM size (e.g., via memory mapping), or are index size and document count limited by RAM size?
It would be good to mention those limitations in the README.
The most widely used DHT is Kademlia, by Petar Maymounkov and David Mazières.
It is used in Ethereum, IPFS, I2P, Gnutella DHT, and many other applications.
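Kademlia's defining idea is its XOR metric: node and key IDs share one ID space, and the distance between two IDs is their bitwise XOR read as an integer. A toy illustration (real deployments use 160-bit or larger IDs and k-buckets of contacts):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia distance between two IDs: bitwise XOR, interpreted as an integer."""
    return a ^ b

def bucket_index(own_id: int, other_id: int) -> int:
    """k-bucket a contact falls into: position of the highest differing bit."""
    return xor_distance(own_id, other_id).bit_length() - 1

# The metric is symmetric, and for a given target there is exactly one ID at
# each distance; each lookup round queries closer nodes, so lookups converge
# in O(log n) hops.
# xor_distance(0b1100, 0b0100) == 0b1000 == 8, which lands in bucket 3
```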
For the latency benchmarks we used vanilla BM25 (SimilarityType::Bm25f for a single field) for comparability, so there are no differences in terms of accuracy.
For SimilarityType::Bm25fProximity, which takes into account the proximity between query term matches within the document, we so far have only anecdotal evidence that it returns significantly more relevant results for many queries.
Systematic relevance benchmarks such as BEIR and MS MARCO are planned.
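For reference, the per-term score of vanilla BM25 is the textbook formula below (not SeekStorm's actual code); BM25F differs in computing the term frequency as a weighted sum over fields before length normalization, and the proximity variant additionally rewards query terms appearing close together:

```python
import math

def bm25_term_score(tf, doc_len, avg_doc_len, df, n_docs, k1=1.2, b=0.75):
    """Contribution of one query term to one document's BM25 score.

    tf: term frequency in the document, df: number of documents containing
    the term, n_docs: collection size. k1 saturates repeated occurrences;
    b controls how strongly scores are normalized by document length."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))

# A repeated term raises the score, but with diminishing returns:
# bm25_term_score(2, 100, 120, 30, 1000) is larger than the tf=1 score,
# yet less than twice it.
```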
In SeekStorm you can choose per index whether to use Mmap or let SeekStorm fully control RAM access. The latter has a slight query performance advantage, at the cost of a higher index load time.
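The tradeoff can be illustrated with Python's stdlib mmap (conceptual only, unrelated to SeekStorm's actual Rust implementation): mapping makes "loading" nearly instant but pays page-fault I/O on first access, while reading everything up front costs load time but then serves every access from RAM.

```python
import mmap
import os
import tempfile

# Create a stand-in "index file".
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"posting-list-bytes" * 1024)

# Mode 1 (Mmap-like): map the file; the OS pages data in lazily on access.
with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Mode 2 (Ram-like): read everything up front; slower load, uniform access speed.
with open(path, "rb") as f:
    in_ram = f.read()

# Both modes expose identical bytes.
assert mapped[:12] == in_ram[:12] == b"posting-list"
mapped.close()
os.remove(path)
```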
https://docs.rs/seekstorm/latest/seekstorm/index/enum.Access...
SeekStorm does not currently use io_uring, but it is on our roadmap.
The main challenge is cross-platform compatibility: Linux (io_uring) and Windows (IoRing) use different implementations, and other operating systems don't support it at all. There is no abstraction layer over those implementations in Rust, so we are on our own.
It would increase concurrent read and write speed (index loading, searching) by removing the need to lock around seek and read/write calls.
But I would expect that the mmap implementations already use io_uring / IoRing.
Yes, lazy loading would be possible, but pure RAM access does not offer enough benefits to justify the effort to replicate much of the memory mapping.