I have a big text file. I want to extract every sentence in which at least 70% (seventy percent) of the words appear in a word list called A.
Say the format of the text file is as given below:
This is the first sentence which consists of fifteen words including AAA, BBB and CCC.
This is the second sentence with twelve consisting of XXX and YYY.
This is the third with nine consisting of KKK.
The last sentence consist of ZZZ, DDD, FFF, EEE, GGG and HHH.
The output format is based on the availability of the words from word list A: if at least seventy percent of the words in a sentence are found in word list A, then that sentence will be extracted.
Assume that all the capital-letter words such as AAA, BBB, CCC, KKK, XXX, YYY, ZZZ, DDD, FFF, EEE, GGG and HHH are not found in word list A. After the extraction, the output will look as given below. (For example, the first sentence has 15 words of which 12 match: 12/15 = 80%, which is at least 70%, so it is kept; the last sentence has 12 words of which only 6 match: 6/12 = 50%, so it is dropped.)
This is the first sentence which consists of fifteen words including AAA, BBB and CCC.
This is the second sentence with twelve consisting of XXX and YYY.
This is the third with nine consisting of KKK.
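To sanity-check the 70% condition on the sample above, here is a quick, illustrative-only count. It treats just the ALL-CAPS placeholder words as non-matching, which is an assumption for this demonstration (no real word list exists yet):

```shell
# Illustrative only: count how many words per line are NOT one of the
# ALL-CAPS placeholders, and compute the match ratio by hand.
cat > sample.txt <<'EOF'
This is the first sentence which consists of fifteen words including AAA, BBB and CCC.
This is the second sentence with twelve consisting of XXX and YYY.
This is the third with nine consisting of KKK.
The last sentence consist of ZZZ, DDD, FFF, EEE, GGG and HHH.
EOF

awk '{
  miss = 0
  for (i = 1; i <= NF; i++)
    if ($i ~ /^[A-Z][A-Z][A-Z]/) miss++          # placeholder word -> no match
  ratio = (NF - miss) / NF
  printf "%d/%d = %.0f%% -> %s\n", NF - miss, NF, 100 * ratio,
         (ratio >= 0.7 ? "keep" : "drop")
}' sample.txt
```

This prints 12/15 = 80%, 10/12 = 83% and 8/9 = 89% (keep) for the first three lines, and 6/12 = 50% (drop) for the last one, which matches the expected output above.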
I need help writing a script for the above problem. A sample script would be really helpful to me. Thanks in advance.
This is not an assignment at all. It is purely out of personal interest in processing text with different coding approaches.
I have tried this using C code instead of scripts, but I would prefer scripts because they are comparatively faster and more convenient to put in a pipeline. Moreover, Unix scripts are convenient for processing text. Bash is the shell I use.
There is no actual sample of the word list or token list.
To clarify, I would say "tokens" instead of "words", and a sentence is a set of tokens. I use the Ubuntu OS.
Please let me know if you need more details.
Let me say first that there are incredibly refined and sophisticated algorithms out there, used by e.g. the various search engines to analyse all the internet sites around the globe and hand you the results in a split second, so anything posted here is a clumsy approach cobbled together without any optimisation. Anyhow, try
awk '
FNR==NR {T[$1]                          # first file: remember each list word
         next
        }
{CNT=0
 n=split (tolower($0), L)               # split the line into lower-cased words
 for (i=1; i<=n; i++) if (L[i] in T) CNT++
 # print CNT, CNT/n
 if (n && CNT/n >= 0.8) print $0        # print lines meeting the threshold
}
' list text
You may want/need to get rid of punctuation first in a real world sample.
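For instance, a sketch that strips punctuation before matching might look like this. The list and text below are invented samples for illustration, since no real word list was posted; note that gsub() on $0 also removes the punctuation from the printed output:

```shell
# Invented sample data -- the real word list was never posted.
cat > list <<'EOF'
this
is
the
last
sentence
consist
of
and
EOF
cat > text <<'EOF'
This is the last sentence consist of ZZZ, DDD and HHH.
EOF

awk '
FNR==NR {T[$1]; next}              # first file: load the word list
{ gsub(/[[:punct:]]/, "")          # strip punctuation before matching
  cnt = 0
  n = split(tolower($0), L)
  for (i = 1; i <= n; i++) if (L[i] in T) cnt++
  if (n && cnt / n >= 0.7) print   # 70% threshold; adjust as needed
}' list text
```

Here 8 of the 11 words match (about 73%), so the line is printed.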
Changing "word" to "token" and "sentence" to "set" doesn't clarify anything. Changing two undefined terms to two other undefined terms still leaves us with no defined terms. If you refuse to explain what you want your code to do, there is no reason for any of us to waste our time trying to guess at your requirements, nor to try to write code when we don't know what the code is supposed to do. Why did you explicitly say that you had a "word list A" if there is no word list? Your original requirement was:
    extract all the sentences that match at least 70% (seventy percent) of the words from each sentence based on a word list called A
which can now be restated as:
    extract all the sets that match at least 70% (seventy percent) of the tokens from each set based on a token list called A
With requirements like this, it looks like a homework assignment that you want us to complete for you.
If you already have a way to do this and just want to write it in a different language, show us the C code that you have written that you now want to translate to shell code. Then we would be able to deduce your definitions from your C code and know what it is that we're trying to do. (But don't claim that you are converting from C to shell to make the code faster; for any particular task, well-crafted C will almost certainly be faster than a corresponding shell script. And there is absolutely no reason to claim that C code can't be used in a pipeline. Almost all of the standard utilities on UNIX and Linux systems are written in C, and many of them are perfectly capable of being used in a pipeline. Changing C code that can't be used as a filter into a shell script won't magically turn it into a filter.)
RudiC made a valiant effort to help you get a start on your problem, but it ignores the fact that you don't have a list, assumes that tokens (or words) include punctuation, assumes that <sentence> (or <set>) and <line in a text file> are synonymous, ignores the requirement to ignore uppercase <words> or <tokens> from your nonexistent list, and uses 80% instead of 70% as the threshold.