Is there a 'fuzzy search' facility in Linux?

I have over 10m documents that I want to search against a list of known keywords; however, the documents were produced using a technique that isn't perfect in how the data was presented.

Is there a fuzzy keyword search available in Linux, or can anyone think of a way of doing it that isn't horrendously time-expensive?

Example Keyword

Banana

Search, therefore, case-insensitively for...

Banana
Banan*
Bana*a
Ban*na
Ba*ana
B*nana
*anana

Bana**
Ban*n*
Ba*an*
B*nan*
*anan*
Ban**a
Ba*a*a
B*na*a
*ana*a

and so on.....

With 500 keywords and an average of 10 characters per word, that's over 50k 'fuzzy searches' per page to cover all the permutations. For words of more than 9 characters you'd probably want more than two '*' characters per word, which ramps up the number of searches even further.
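To make that concrete, here's the kind of pattern generation I mean for the single-wildcard case only, as a rough bash sketch (keywords.txt and docs/ are placeholder names, and '.' stands in for the '*' above once the patterns are used as regexes):

    # For each keyword, emit every single-wildcard variant ('.' = any one character),
    # then hand the whole set to grep as case-insensitive extended regexes.
    while read -r kw; do
        for ((i = 0; i < ${#kw}; i++)); do
            printf '%s.%s\n' "${kw:0:i}" "${kw:$((i+1))}"
        done
    done < keywords.txt > patterns.txt

    grep -r -i -E -f patterns.txt docs/

The two-wildcard set is just a nested version of the same loop, which is where the pattern count really starts to explode.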

Ideas please?

10m documents = 10,000,000 files?

500 keywords of ~10 characters?

Well, grep -E would be a bit challenged. If you have the resources, you could, for every file, turn every word into a short line (word file# line#), sort them all while eliminating duplicates, and merge the result with a sorted keyword list using join; now you have an index.

The intermediate list is a very big sort, but the join efficiently trims it. Perhaps it would help to remove obvious nuisance words like 'a', 'the', and 'and'. The join command needs a flat file, as it likes to seek back to where it started; that's necessary when doing a Cartesian product, but not when one list is unique. I have a streaming join, m1join.c, that can do this merge on a pipe from the sort.
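In case it helps, the pipeline could look something like this with the standard tools (a sketch only: ./docs, keywords.txt and the output names are made up, and it assumes plain text with whitespace-separated words):

    # 1. Emit "word filename lineno" for every word of every file, lowercased,
    #    then sort and de-duplicate into one big intermediate list.
    find ./docs -type f -name '*.txt' -exec awk '
        { for (i = 1; i <= NF; i++) print tolower($i), FILENAME, FNR }
    ' {} + | LC_ALL=C sort -u > wordlist.txt

    # 2. Lowercase and sort the keyword list the same way.
    tr '[:upper:]' '[:lower:]' < keywords.txt | LC_ALL=C sort -u > keys.sorted

    # 3. join keeps only words present in both lists; output is "keyword filename lineno".
    LC_ALL=C join wordlist.txt keys.sorted > hits.txt

Note this gives exact word matches only; punctuation stripping and the fuzzy variants would still have to be layered on top.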

It's about 30m pages of text (the average appears to be 3 pages per document).

I'm not sure that producing an intermediate list per page would help; assuming 2,000 words per 3 pages (the density is quite high), that processing would still be horrendously time-intensive for 10m documents, surely?

It's an interesting idea though and I'll try to throw something together to do some time tests.

I have a reasonable amount of processing resources in terms of a few multicore, hyperthreaded machines, so I could allocate about 34 'virtual' machines to this, but even so it's a fair amount of processing!! I've just calculated that even at only 1 second per document processed (likely very, very optimistic) it would require about 3.5 days with all the virtual machines running 24x7. If, as is more likely, each document takes say 10 seconds to process, we're now into 35 days of 24x7..... Yikes!!
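The fan-out across cores is at least easy to script; something along these lines is what I have in mind, with extract_words.sh standing in for whatever per-document processing we settle on:

    # Feed filenames to 34 parallel workers, 100 files per invocation.
    find ./docs -type f -print0 |
        xargs -0 -n 100 -P 34 ./extract_words.sh

GNU parallel could do the same job, and it can also push work out to the other machines over ssh.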

I was hoping there would be a standard function or program that I could use, where I'd just pump the keywords in and point it at the pages. Ho hum, back to the drawing board!!

Is there a Google Desktop for LINUX Xwindows yet? It's a Google world: you have to look to know, and imagine to look. Why, yes:
http://www.google.com/search?q=google\+desktop\+linux&rls=com.microsoft:*&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1

Google does not index using a "fuzzy" algorithm, as I recall.

Google indexes, as I recall, using a Bayesian classifier.

There is a difference (quite a difference) between indexing and retrieval with a fuzzy algorithm versus indexing with a Bayesian classifier.

---------- Post updated at 17:18 ---------- Previous update was at 17:14 ----------

OBTW, on fuzzy search, read this reference:

I think you can easily find a "fuzzy" indexer to run on Linux.

If you find one (in PHP), let me know. I may implement fuzzy search as an additional capability on this site.

---------- Post updated at 17:21 ---------- Previous update was at 17:18 ----------

OBTW, as a side note, you could probably use a Bayesian classifier to assist in building a fuzzy searcher or indexer. I've not looked into this, but a bit of Googling around might yield some useful peach fuzz :smiley:

---------- Post updated at 17:25 ---------- Previous update was at 17:21 ----------

Here is something interesting.....

Approximate/fuzzy string search in PHP

Hi.

I've been using glimpse and agrep for a number of years. I index my files overnight, every night. See

Webglimpse and Glimpse: advanced site search software for Unix : index websites or intranets

However, it sounds like you need only the glimpse package. If I recall correctly, it includes glimpseindex and glimpse. The index files are not small, but storage is cheap these days.

An example from man glimpse:

       glimpse -1 'Tuson;Arezona'

       will output all lines containing both patterns, allowing one spelling
       error in any of the patterns (either insertion, deletion, or substitution),
       which in this case is definitely needed.

There are lots of options for the indexing and searching.
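As a rough illustration, the workflow is: build the index once (e.g. overnight), then query it as often as you like. Options differ a bit between glimpse versions, so treat this as a sketch and check the man pages on your install:

    # Build the index; -H says where the index files should live.
    glimpseindex -H ~/glimpse-index ~/docs

    # Query it later: case-insensitive, allowing one error per pattern.
    glimpse -H ~/glimpse-index -i -1 banana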

You could test how you like the fuzzy search by installing agrep and using that on a few text files without doing the indexing. The agrep package I use is available in my Debian repository.
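For a quick feel before committing to indexing, something like this on a handful of sample files would do (keywords.txt is just a stand-in for your real list):

    # Case-insensitive, allowing up to 2 errors (insertions, deletions, substitutions).
    agrep -i -2 banana sample.txt

    # List the files matching each keyword, one keyword at a time.
    while read -r kw; do
        agrep -i -2 -l "$kw" sample/*.txt
    done < keywords.txt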

Good luck ... cheers, drl