Searching a large file for short tandem repeats

Hello,
I am searching large (~25 GB) DNA sequence data in FASTA short-read format:

>ReadName
ACGTACGTACGT...[150charactersPerRead]

for short tandem repeats, meaning instances of any 2-6 base motif repeated in tandem at least a number of times given as an input variable. It seems like a reasonably simple job, but I'm having trouble developing a regex that works. As a start, I have:

awk --posix '{ STR = "([ACGT]{2,6})"; if (substr($0, 40, length() - 40) ~ STR) print }' infile.fasta

The substring constraints have to do with downstream requirements. The trouble is expressing in the regex that I want tandem repeats of one discrete motif; as written, the pattern matches ANY run of 2-6 bases (rather than, say, 5 or more copies of the same motif), which obviously returns every read.
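
What I'm after is conceptually a backreference, which POSIX awk's ERE doesn't support. A rough sketch of the idea in Perl syntax, assuming five copies as the threshold and leaving out the substring offsets for clarity (\1{4,} means four more copies of the captured motif, i.e. five or more in total):

perl -ne 'print if !/^>/ && /([ACGT]{2,6})\1{4,}/' infile.fasta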

Any ideas would be great, thanks for the help!

Seems more like Perl or C for this; C for max speed, with the input file mmap()'d, and a 64-bit OS/compile is nice to avoid remapping. It sounds like a 40-byte sliding-window check for repeats of the initial 2-6 characters, then move the window forward by one. Do you just want counts? If ABC is reported as repeating, do you want AB and BC reported too? It seems like you may get more output than input if it is not just aggregates. One process/thread to search and another to aggregate?
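
If counts are the goal, a quick Perl pass might do as a baseline before writing C. A rough sketch (assumes two or more copies count as a hit; prints motif, copy count, and 0-based offset, then aggregates; note the greedy quantifier prefers the longest motif, so ACx4 gets reported as ACACx2):

perl -ne 'next if /^>/; while (/([ACGT]{2,6})(\1+)/g) { printf "%s\t%d\t%d\n", $1, 1 + length($2)/length($1), $-[0] }' infile.fasta | sort | uniq -c

Each match advances past the whole repeated run, so copies inside a reported run aren't re-reported separately.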

I'm sorry, but I don't understand what the problem is.
What's the pattern you are looking for? Your STR variable will match everything...
Tandem means "exactly two"?

Please give way more details.

In pseudocode, I think he wants something like:

(
  read 40 bytes into the buffer (ensure a full window)
  for len in 6 5 4 3 2
  do
    compare the first len bytes of the buffer to each len-byte substring
    starting at buffer+len through buffer_end-len, tracking absolute file offsets
    if a hit
    then
      list the hit as "pattern offset1 offset2"
      move the window up by len
      loop back to restart the 'for'
    fi
  done
  move the window up by 1 on no hit
  exit (or run special EOF-adjust code) if EOF prevents a full window
  loop back to restart the 'for'
) | sort | tee detail_file | cut -f 1 | uniq -c > summary_file

By searching for the longest pattern and the nearest match first, you avoid duplicate reports of substrings (AB inside ABC) and of three occurrences within the window (for three copies you get two detail records: first-to-second and second-to-third).
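
A runnable awk sketch of the same idea, restricted to the strict tandem case (only an immediately adjacent second copy counts, scanning each read directly rather than a byte-level 40-byte window; offsets are 1-based within the read, and the repeat-count threshold is left to downstream filtering):

awk '!/^>/ {
    i = 1
    while (i <= length($0) - 3) {            # need room for at least 2x2 bases
        hit = 0
        for (len = 6; len >= 2; len--) {     # longest motif first, as above
            if (i + 2 * len - 1 > length($0)) continue
            motif = substr($0, i, len)
            if (substr($0, i + len, len) == motif) {
                printf "%s\t%d\t%d\n", motif, i, i + len
                i += len                     # skip past the first copy
                hit = 1
                break                        # restart the scan at the new offset
            }
        }
        if (!hit) i++
    }
}' infile.fasta | sort | tee detail_file | cut -f 1 | uniq -c > summary_file

Searching the longest motif first and skipping ahead by len on a hit gives the same duplicate avoidance as the pseudocode above.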

You need some minimally intrusive code to deal with end of file, or you can pad the file with 38 non-capital-letter bytes, perhaps using "(...;echo...)|" as the input.

Managing the window without repeatedly sliding bytes within the buffer is a bit tricky: use either an oversized buffer so slides are less frequent, or mmap64() of the entire file (not pipe-friendly; use a padded file, or special end-of-file code?).