Replace first column in 17GB File

Hi All,

I have a 17GB file and I want to change the first column from 1234567890 to just 0. How can I do it with the sed command?

thanks,

---------- Post updated at 03:09 PM ---------- Previous update was at 03:07 PM ----------

By the way, my file is delimited by "|", and 1234567890 is just an example value for the first column.

it's like this:

1234567890|abcddc|sfcdcd|12-12-2015|
212|fddfdgg|sdfdferruut|12-30-2014|

Is it possible for you to post five to ten lines from that 17GB file and an example of what the output should look like?
That would make for a better request. That way we would know what a column means to you.

---------- Post updated at 10:53 PM ---------- Previous update was at 10:41 PM ----------

Using your last example.

sed -e 's/^1234567890\\|/0\\|/' 17GB.file > Another17GB.file && mv Another17GB.file 17GB.file

Now, as you see, there's no way around the fact that you need a minimum of 34GB of free space, plus the CPU cycles for sed to go through the whole file.
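For the record, here is that redirect-then-rename approach run end to end on a tiny sample built from the two lines posted earlier (all filenames here are illustrative, not from the original poster's system):

```shell
# Build a tiny sample from the two example lines (names illustrative).
printf '1234567890|abcddc|sfcdcd|12-12-2015|\n212|fddfdgg|sdfdferruut|12-30-2014|\n' > sample.file

# Rewrite the first column, then rename the new file over the original.
# Note: in a BRE, | matches literally, so no backslash is needed.
sed 's/^1234567890|/0|/' sample.file > sample.file.new && mv sample.file.new sample.file

cat sample.file
```

On the real 17GB file the same two-step pattern applies; the && ensures the original is only replaced if sed wrote the new file successfully.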

No \\ and no \ before the |, because sed uses a simple RE (or BRE), in which | is an ordinary character! Also, a single -e option can be omitted.
If your sed has -i option

sed -i 's/^1234567890|/0|/' file

sed -i opens a temporary file, AFAIK. So be sure that $TMPDIR points to a disk with lots of free space. If you have enough RAM, you can point TMPDIR at a RAM drive.
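A small sketch of the in-place variant on a demo file (filenames illustrative). One caveat: whether sed -i honors TMPDIR at all is implementation-dependent; GNU sed may create its temp file next to the target file instead, so treat the TMPDIR assignment here as an assumption to verify on your system.

```shell
# Demo file (names illustrative).
printf '1234567890|abc|\n212|def|\n' > demo.file

# In-place edit; TMPDIR is set here on the assumption that this sed
# consults it for its temporary file -- check your implementation.
TMPDIR=/tmp sed -i 's/^1234567890|/0|/' demo.file

cat demo.file
```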

Hmm, shouldn't sed -i put the temporary file next to the source file, so the final move is an efficient rename?
Then the only difference from Aia's explicit mv command is that the owner/mode is preserved.

Not when TMPDIR is in kernel memory (RAM). Solaris and Linux both support this.

The answer: trace the file open calls of a sed -i command on your system against a dummied-up large file, larger than the free space on the current working directory's filesystem. This is clearly contrived, but it will answer the question. Be sure to define TMPDIR (or whatever)
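A contrived sketch of that experiment on Linux, using strace to log the open calls (truss is the Solaris equivalent used later in this thread); filenames are illustrative:

```shell
# Small stand-in for the large file (names illustrative).
printf '1234567890|x|y|\n' > big.file

if command -v strace >/dev/null 2>&1; then
    # Log open/openat calls made during the in-place edit.
    strace -f -e trace=open,openat -o trace.log sed -i 's/^1234567890|/0|/' big.file
    grep 'sed' trace.log || true   # where was the sed* temp file opened?
else
    sed -i 's/^1234567890|/0|/' big.file   # no strace available; just edit
fi

cat big.file
```

Whether the temp file shows up under $TMPDIR or in the file's own directory answers the question for your sed.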

Your point is well-taken from a UNIX system developer's point of view.

My Solaris 10 system, with TMPDIR defined elsewhere, does this with gsed (the Solaris build of GNU sed):

open64("somefile", O_RDONLY)                    = 3
open64("./sedz2dEGN", O_RDWR|O_CREAT|O_EXCL, 0600) = 4

That temporary file in the current directory, ignoring TMPDIR, is what I think is a major problem for large files when a RAM drive is available.
Hmm.

I question the wisdom of having files of 17GB size, but that's neither here nor there.
The intention of post #2 was to point out the facts that come with such a large file, hence the explicit redirection to a temporary file. Furthermore, an inexperienced sed user might believe in the magic of not needing a temporary file, even though one is created behind the scenes; that is an impression I did not want to give.
I have not done a strace or found out what happens if a sed command with -i fails because of such a large size. Is there a possibility of corrupting or partially truncating the original file? To avoid that, I proposed the safest route I could think of.
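One hedged sketch of that safer explicit route: write the edited copy to a separate file, sanity-check it before letting it replace the original. The line-count check and the filenames below are illustrative assumptions, not the only possible safeguard.

```shell
# Demo input (names illustrative).
printf '1234567890|a|b|\n212|c|d|\n' > data.file

# Write the edited copy to a separate file first; the original is untouched.
sed 's/^1234567890|/0|/' data.file > data.file.new

# Only replace the original if the copy looks complete (same line count).
if [ "$(wc -l < data.file)" -eq "$(wc -l < data.file.new)" ]; then
    mv data.file.new data.file
fi

cat data.file
```

If sed dies mid-run (disk full, signal), the original file is still intact and only the partial copy needs cleaning up.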