Is it possible for you to post five to ten lines from that 17GB file and an example of what the output should look like?
That would make for a better request; that way we would know what each column means to you.
---------- Post updated at 10:53 PM ---------- Previous update was at 10:41 PM ----------
Using your last example.
sed -e 's/^1234567890|/0|/' 17GB.file > Another17GB.file && mv Another17GB.file 17GB.file
Now, as you see, there's no way around the fact that you need a minimum of 34GB of free space, plus the CPU cycles for sed to read through the whole file.
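To make the approach above concrete, here is a small runnable sketch on hypothetical sample data (the ID 1234567890 and the pipe-delimited layout are taken from the example command; the field names are made up):

```shell
# Hypothetical sample: pipe-delimited lines whose first field is an ID.
printf '1234567890|alpha|100\n9999999999|beta|200\n' > sample.txt

# Replace a leading ID of 1234567890 with 0, writing to a new file first,
# then renaming over the original only if sed succeeded (same pattern as
# the 17GB command above, just on a small file).
sed -e 's/^1234567890|/0|/' sample.txt > sample.txt.new && mv sample.txt.new sample.txt

cat sample.txt
# 0|alpha|100
# 9999999999|beta|200
```

The `&&` matters: if sed fails part-way, the original file is left untouched and only the partial `.new` file needs cleaning up.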
sed -i opens a temporary file, AFAIK, so be sure that $TMPDIR points to a disk with plenty of free space. If you have enough RAM, you can point TMPDIR at a RAM disk.
Hmm, shouldn't sed -i put the temporary file next to the source file, so that the final move is an efficient rename?
And the only difference from Aia's explicit mv command is that the owner and mode are preserved.
Not when TMPDIR is in kernel memory (a RAM disk). Both Solaris and Linux support this.
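A minimal sketch of the RAM-disk idea, with the caveat under discussion left open. The `/dev/shm` path is an assumption (it is tmpfs on many Linux systems; on Solaris, `/tmp` itself is usually swap/RAM-backed):

```shell
# Point TMPDIR at a RAM-backed filesystem, if one is available.
# Caveat: whether `sed -i` honors TMPDIR at all is implementation-
# dependent; GNU sed, for one, creates its temporary file next to
# the input file regardless. mktemp, however, does honor TMPDIR.
if [ -d /dev/shm ]; then
    export TMPDIR=/dev/shm      # tmpfs on many Linux systems (assumption)
fi
t=$(mktemp)
echo "temporary files will land in: $(dirname "$t")"
rm -f "$t"
```

Verify what your system actually mounts there with `df -T /dev/shm` or `mount` before relying on it.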
The answer: trace the file-open calls of a sed -i command on your system against a dummied-up large file, larger than the free space on the current working directory's filesystem. This is clearly contrived, but it will answer the question. Be sure to define TMPDIR (or whatever) first.
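The tracing experiment might look like this on Linux (strace is Linux-specific; Solaris has truss instead, and the file name `dummy.txt` is a stand-in for the dummied-up large file):

```shell
# Create a small dummy file and watch which paths sed -i touches.
printf 'a|1\nb|2\n' > dummy.txt

if command -v strace >/dev/null 2>&1; then
    # Trace file opens and renames; on GNU sed you would typically see a
    # ./sedXXXXXX temporary created in the same directory as dummy.txt,
    # then renamed over it.
    strace -f -e trace=openat,rename sed -i 's/^a|/z|/' dummy.txt 2>&1 | grep -i sed | head
else
    # No strace available here; just run the edit itself.
    sed -i 's/^a|/z|/' dummy.txt
fi
```

Running this against a file larger than the free space on the working directory's filesystem (but with TMPDIR pointed elsewhere) is what settles whether TMPDIR is honored on a given implementation.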
Your point is well taken from a UNIX system developer's point of view.
I question the wisdom of having files of 17GB size, but that's neither here nor there.
The intention of post #2 was to point out the consequences of such a large file size, hence the explicit redirection to a temporary file. Furthermore, an inexperienced sed user might believe in the magic of not needing a temporary file, even though one is created behind the scenes; that is an impression I did not want to give.
I have not done an strace to find out what happens if sed with -i fails on such a large file. Is there a possibility of corrupting or partially truncating the original file? To avoid that, I proposed the safest route I could think of.
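One way to harden the explicit-temporary-file route is to check free space before starting, since the redirect approach needs room for a second full copy. A hedged sketch; the file name and contents are stand-ins for the real 17GB file, and `stat -c %s` is GNU stat:

```shell
# Stand-in for the real 17GB file (hypothetical name and data).
f=big.file
printf '1234567890|payload\n' > "$f"

# Space needed: one full extra copy of the file, in KiB (rounded up).
need_kb=$(( ( $(stat -c %s "$f") + 1023 ) / 1024 ))
# Space available on the filesystem holding the file, in KiB.
avail_kb=$(df -Pk "$(dirname "$f")" | awk 'NR==2 {print $4}')

if [ "$avail_kb" -gt "$need_kb" ]; then
    # Original is only replaced if sed completed successfully.
    sed -e 's/^1234567890|/0|/' "$f" > "$f.new" && mv "$f.new" "$f"
else
    echo "not enough free space for a temporary copy of $f" >&2
fi
```

Because the original is overwritten only by the final `mv`, a failure at any earlier step leaves it intact, which addresses the corruption/truncation worry above.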