Hi
I have a big txt file containing 3000 records; please help me split it into multiple files, 1000 records per file.
One line contains one record.
When the count reaches 1000 lines, a new file should be created.
Every file should contain 1000 records/lines.
Many thanks
Can you show a sample of a record in the input file?
something like this:
#!/bin/sh
awk 'NR<1001{print}' inputfile >> output1
awk 'NR<2001{print}' inputfile >> output2
awk 'NR<3001{print}' inputfile >> output3
There is probably a more compact way to do this with a loop, but this will do the job.
mirni
September 30, 2012, 1:10am
3
@nextyoyoma :
awk 'NR<2001{print}' inputfile >> output2
will write 2000 lines into output2, not 1000 as requested by the OP. The same applies to the third awk command.
@OP :
If you have split(1) it's trivial:
split -l 1000 input outname
On my system, the default is to split every 1000 lines, so actually all you'd need is
split input outname
but check man split
for your implementation.
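As a quick sanity check (a sketch, assuming seq(1) is available for generating test data and that your split uses the default two-letter suffixes):

```shell
# Sketch: verify split's behavior on a generated 3000-line file.
# Default output names with this prefix are outnameaa, outnameab, outnameac.
seq 3000 > input
split -l 1000 input outname
wc -l outname*    # each of the three chunks holds 1000 lines
```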
If you don't have split, here is an awk solution:
awk '{cnt=int((NR-1)/1000); print >> "outname_" cnt}' input
It will create files
outname_0
outname_1
outname_2
etc.
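With the divisor set to 1000 to match the 1000-records-per-file requirement, the awk one-liner can be checked the same way (a sketch, assuming seq(1) for test data):

```shell
# Sketch: split a generated 3000-line file into 1000-line chunks with awk.
# int((NR-1)/1000) is 0 for lines 1-1000, 1 for lines 1001-2000, and so on.
seq 3000 > input
awk '{cnt=int((NR-1)/1000); print >> "outname_" cnt}' input
wc -l outname_*    # outname_0, outname_1, outname_2: 1000 lines each
```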
pamu
September 30, 2012, 3:56am
4
try this...
$ awk -v a="1" '{if(!(NR%1000)){print > "output_"a; a++} else {print > "output_"a}}' file
$ ls output_*
output_1 output_2 output_3
The file is divided into 3 output files.
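When the input length is not a multiple of 1000, this command leaves the remainder in the last file. A quick check (a sketch, using seq(1) to fake a 2500-record input):

```shell
# Sketch: 2500 input lines -> output_1 and output_2 get 1000 lines each,
# output_3 gets the remaining 500.
seq 2500 > file
awk -v a="1" '{if(!(NR%1000)){print > "output_"a;a++}else{print > "output_"a;}}' file
wc -l output_*
```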
59696934
0929295A
039399494
......
so on...
Each line contains one record. If I have 3504 lines/records, I need to cut this into 4 files, 1000 per file; the last (4th) file will contain only 504 records/lines. As long as the file is cut into chunks of 1000, the size of the last file doesn't matter.
split is not working in my environment, so I can only use the awk command.
Please help.
Have you tried the awk command given 2 days ago, then?
Thanks a lot. I used split instead to split the file, and it works perfectly.