I have over 10m documents that I want to search through against a list of known keywords; however, the documents were produced using a technique that isn't perfect in how the data is presented.
Is there a fuzzy keyword search available in Linux, or can anyone think of a way of doing it that isn't horrendously time-expensive?
With 500 keywords and an average of 10 characters per word, that's over 50k 'fuzzy searches' per page to cover all the permutations. For words above 9 characters you'd probably want even more than 2 × the characters per word, which ramps up the number of searches even more.
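As a rough sanity check on that estimate, here is a quick back-of-envelope count of single-edit (edit-distance-1) variants per keyword. The 26-letter alphabet, the 10-character average and the 500-keyword count are assumptions taken from the post, not measured from the real keyword list:

```shell
# Variants of one word of length n over a 26-letter alphabet:
#   deletions: n, substitutions: 25*n, insertions: 26*(n+1)
awk 'BEGIN {
  n = 10; k = 500                 # assumed avg word length and keyword count
  v = n + 25*n + 26*(n+1)         # edit-distance-1 variants of one word
  printf "%d variants/word, %d total\n", v, v*k
}'
```

That gives 546 variants per 10-character word, or 273,000 patterns across 500 keywords, so "over 50k per page" is, if anything, on the low side for a naive approach.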
Well, grep -E would be a bit challenged. If you have the resources, you could, for every file, turn each word into a short line (word file# line#), sort them all eliminating duplicates, and merge the result with a sorted keyword list using join. Now you have an index.
The intermediate list is a very big sort, but the join efficiently trims it. It might help to remove obvious nuisance words like 'a', 'the', and 'and'. The join command needs a flat file, since it likes to seek back to where it started; that's necessary when doing a cartesian product, but not when one list is unique. I have a streaming join, m1join.c, that can do this merge on a pipe from the sort.
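A minimal sketch of that indexing pipeline using only standard tools (the file names doc1.txt, doc2.txt and keywords.txt are placeholders, and plain join stands in for the poster's streaming m1join.c):

```shell
# Emit "word file line" triples for every word of every file, lowercased.
for f in doc1.txt doc2.txt; do
  awk -v f="$f" '{ for (i = 1; i <= NF; i++) print tolower($i), f, NR }' "$f"
done | LC_ALL=C sort -u > wordlist.txt     # the one very big sort, deduplicated

# The keyword list must be sorted with the same collation for join to work.
LC_ALL=C sort -u keywords.txt > keys.sorted

# join keeps only lines whose first field matches a keyword:
# output is "word file line" for every hit.
LC_ALL=C join wordlist.txt keys.sorted
```

Note the LC_ALL=C: sort and join must agree on collation or join will complain that its input isn't sorted.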
It's about 30m pages of text (the average appears to be 3 pages per document)
I'm not sure that producing an intermediate list per page would help; assuming 2000 words per 3 pages (the density is quite high), that processing would still be horrendously time-intensive for 10m documents, surely?
It's an interesting idea though and I'll try to throw something together to do some time tests.
I have a reasonable amount of processing resources in terms of a few multicore hyperthreaded machines, so I could allocate about 34 'virtual' machines to this, but even so it's a fair amount of processing!! I've just calculated that even at only 1 second per document processed (likely very, very optimistic) it would require about 3.5 days with all the virtual machines running 24x7. If, as is more likely, each document takes say 10 seconds to process, we're now into 35 days of 24x7... Yikes!!
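Those runtime figures check out on the back of an envelope (10m documents and 34 workers are the numbers from the post; the per-document seconds are the two scenarios discussed):

```shell
awk 'BEGIN {
  docs = 10000000; workers = 34    # figures from the post
  for (secs = 1; secs <= 10; secs *= 10)
    printf "%2d s/doc -> %.1f days\n", secs, docs * secs / workers / 86400
}'
```

So roughly 3.4 days at 1 s/doc and 34 days at 10 s/doc, assuming perfectly even distribution across the 34 workers and no per-file overhead.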
I was hoping there would be a standard function or prog that I could use and just pump the keywords in then point at the pages, ho hum back to the drawing board!!
I think you can easily find a "fuzzy" indexer to run on Linux.
If you find one (in PHP), let me know. I may implement fuzzy search as an additional capability on this site.
OBTW, as a side note, you could probably use a Bayesian classifier to assist in building a fuzzy searcher or indexer. I've not looked into this, but a bit of Googling around might yield some useful peach fuzz.
However, it sounds like you need only the glimpse package. If I recall correctly, it includes glimpseindex and glimpse. The index files are not small, but storage is cheap these days.
An example from man glimpse:
glimpse -1 'Tuson;Arezona'
will output all lines containing both patterns, allowing one spelling error in any of the patterns (either insertion, deletion, or substitution), which in this case is definitely needed.
There are lots of options for the indexing and searching.
You could test how you like the fuzzy search by installing agrep and using that on a few text files without doing the indexing. The agrep package I use is available in my Debian repository.