All of these solutions process the entire line, beyond simply reading it. It would be simpler to look for the first non-numeric character or, depending on the OP's needs, the more restrictive first alphabetic character.
Why use awk, grep, or sed when it can all be done with ksh built-ins:
while IFS="" read x
do  if [ "$x" != "${x%*[!0-9]*}" ]
    then printf '%s\n' "$x"
    fi
done < test.txt
It would take a fairly large input file before the startup cost of an external binary is repaid and grep becomes cheaper overall than running this simple loop in the shell.
Note that the problem statement does not say what should happen when an input line is blank. If blank lines are possible in the input, the specification needs to be extended to cover that case. The while loop above will print blank lines that contain only whitespace, but will not print completely empty lines.
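A quick way to see that behavior is to feed the loop sample input (hypothetical) containing a whitespace-only line and a completely empty line; `-r` is added here so backslashes pass through unchanged:

```shell
# Run the loop on input with an all-digit line, a non-numeric line,
# a whitespace-only line, and an empty line.
while IFS="" read -r x
do  if [ "$x" != "${x%*[!0-9]*}" ]
    then printf '%s\n' "$x"
    fi
done <<'EOF'
123
abc
   

EOF
# prints: abc
# prints: "   " (the whitespace-only line)
# the all-digit line and the empty line are not printed
```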
PS Jotne:
What you suggested will work with:
grep -E '[0-9]+'
which uses extended regular expressions, but not with:
grep '[0-9]+'
which uses basic regular expressions.
This last grep command above will only print lines that contain a digit immediately followed by a plus sign.
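A small demonstration of the difference (file name and contents are hypothetical):

```shell
# Input: a plain number, a non-numeric line, and a line with a literal '+'.
printf '42\nfoo\n3+4\n' > nums.txt

grep -E '[0-9]+' nums.txt   # ERE: prints 42 and 3+4 (any line with a digit)
grep '[0-9]+' nums.txt      # BRE: prints only 3+4 (a digit, then a literal +)
```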
The equivalent grep '[^0-9]' test.txt is instantly understandable, while that while-loop takes some time to digest.
Another reason is that the simpler the code the fewer the opportunities for surprises. Your while-loop, for example, would behave badly if there are backslashes in the data (the OP made no assurance against this possibility).
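The backslash hazard can be shown directly: without -r, read treats a backslash as an escape character and silently drops it from the data.

```shell
# The input line is literally: a\tb
printf 'a\\tb\n' | { IFS="" read x;    printf '%s\n' "$x"; }  # prints: atb
printf 'a\\tb\n' | { IFS="" read -r x; printf '%s\n' "$x"; }  # prints: a\tb
```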
If I were going to do this with sh built-ins, I would use case. To me it's more obvious:
case $x in
    *[!0-9]*) printf '%s\n' "$x" ;;
esac
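As a sketch, here is the case version dropped into the same kind of read loop (sample input is hypothetical; -r added to protect backslashes):

```shell
# Print only lines containing at least one non-digit character.
printf '2024\nmixed7\n\n' | while IFS="" read -r x
do  case $x in
        *[!0-9]*) printf '%s\n' "$x" ;;
    esac
done
# prints: mixed7
```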
Perhaps, but I would categorize that under premature optimization.