I have an XML file and a requirement to move the <AdditionalAccountHolders> tag and its content to right after the <accountHolderName> tag within the same file, but I'm not sure how to accomplish this with a Unix script.
Thank you for the awk code suggestions. I tried both of your codes with 2 records in my XML file:
1) code#1 doesn't return any records
2) code#2 does return records, but it puts the <AdditionalAccountHolders> information from record#1 into record#2, and record#1 does not have the <AdditionalAccountHolders> tag in the new file. Please see sample data below:
well... your original sample data contained only ONE holder record - not 2 as in the new sample.
You should be more descriptive in the future...
Let me rework the suggestion with the NEW sample.
Your question suggests you are using XML in an inefficient way. The ordering of elements in an XML file is irrelevant: in terms of the standard it is undefined, it can change spontaneously, and with the right tools ordering isn't needed at all.
So scripts that try to enforce an order of elements are likely to break at the slightest difference in the XML layout.
Regards,
Stomp
Update: some examples of how to read data from that XML file:
I realize that the order of tags does not matter, but unfortunately we are building the XML file for an external client and they are requesting that the <AdditionalAccountHolders> info follow right after <accountHolderName>.
Well that's bad. If possible, tell them how to do it more efficiently.
Using XML in such a way is a constant source of trouble.
Of course it's a matter of company policy whether your own company will do what the client wants, even if it's total bullshit.
To put it clearly:
If you do it correctly according to the standards, you have lots of tools to help you with the task. If you do not follow the standards, your client is on their own, probably writing very bad and difficult-to-maintain code.
There are lots of good XML tools out there and libraries are widespread in a lot of programming languages.
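For instance, Ruby's bundled REXML library can do the reordering structurally rather than by text matching. A minimal sketch - note the <accounts>/<accountHolder> wrapper structure and the sample values are invented here; only the two tag names come from this thread, so adjust the XPath to the real layout:

```ruby
#!/usr/bin/env ruby
# Sketch: reorder elements with REXML (in Ruby's standard library) instead
# of line matching. The wrapper elements and sample values are invented;
# adjust the XPath to the real document structure.
require 'rexml/document'

xml = <<~XML
  <accounts>
    <accountHolder>
      <accountHolderName>JOHN DOE</accountHolderName>
      <accountNumber>12345</accountNumber>
      <AdditionalAccountHolders>JANE DOE</AdditionalAccountHolders>
    </accountHolder>
  </accounts>
XML

doc = REXML::Document.new(xml)
doc.elements.each('//accountHolder') do |rec|
  extra = rec.elements['AdditionalAccountHolders'] or next
  name  = rec.elements['accountHolderName'] or next
  rec.delete_element(extra)       # detach from its old position
  rec.insert_after(name, extra)   # re-attach right after the name element
end
puts doc
```

Whatever the real structure is, the point stands: an XML-aware tool will not break when the layout (indentation, line breaks, attribute order) changes.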
Sorry to bother you again - my data requirement has changed slightly (it's now one <AdditionalAccountHolders> tag per <additionalAccountHolderName> tag) and the current awk code no longer works as expected.
I have highlighted the data changes below for record#2. Would you kindly check the awk code? Thank you.
Well, I'm always interested in competing with awk from other languages. I obviously cannot compete in brevity so far (which is very impressive in the solutions presented in this forum, though it sometimes twists my brain when I come back to a solution: WTF did I think when I wrote that pile of crazy code?), but I try to compete in maintainability, efficiency (I/O requests and memory economy), and runtime speed:
I don't know if you are even able to use Ruby, but here's my suggestion in Ruby (just for the fun of learning).
/Rant
#!/usr/bin/env ruby
# Reorder each holder chunk so that <AdditionalAccountHolders> directly
# follows <accountHolderName>. The patterns are single-quoted so the \s
# escapes survive (in a double-quoted Ruby string "\s" is a literal space).
$handle = File.open(ARGV[0], "r")
$current_line = ""
$ah  = '^\s*<accountHolderName>.*<\/accountHolderName>\s*\n'
$aah = '^\s*<AdditionalAccountHolders>.*<\/AdditionalAccountHolders>\s*\n'

def chunks()
  Enumerator.new do |chunk|
    loop do
      nxt = get_chunk()
      break if !nxt
      chunk << nxt
    end
  end
end

def get_chunk()
  current_chunk = nil
  while !$handle.eof do
    current_chunk = (current_chunk || "") + $current_line
    $current_line = $handle.readline
    return current_chunk if $current_line.match(/holder>/)
  end
  # flush the last pending line at end of file
  current_chunk = (current_chunk || "") + $current_line
  $current_line = ""
  current_chunk.empty? ? nil : current_chunk
end

def reorder(chunk)
  ah_current  = chunk.match(/#{$ah}/im)
  aah_current = chunk.match(/#{$aah}/im)
  chunk.gsub!(/#{$aah}/im, "")
  chunk.gsub!(/#{$ah}/im, "#{ah_current}#{aah_current}")
  chunk
end

chunks.each { |c| puts reorder(c) }
Use it like this:
ruby reorder.rb data.xml
------ Post updated at 01:56 PM ------
Or here with OOP:
#!/usr/bin/env ruby
class ChunkCollection
  def initialize(file)
    @handle = File.open(file, "r")
    @current_line = @data = ""
  end

  def chunks()
    Enumerator.new do |c|
      loop do
        if data = get_chunk()
          c << Chunk.new(data)
        else
          break
        end
      end
    end
  end

  def get_chunk()
    current_chunk = nil
    while !@handle.eof do
      current_chunk = (current_chunk || "") + @current_line
      @current_line = @handle.readline
      return current_chunk if @current_line.match(/holder>/)
    end
    # flush the last pending line at end of file
    current_chunk = (current_chunk || "") + @current_line
    @current_line = ""
    current_chunk.empty? ? nil : current_chunk
  end

  def reorder()
    chunks.each { |c| @data += c.reorder() }
    self
  end

  def show() puts @data end
end

class Chunk
  def initialize(data)
    # single-quoted so \s stays literal ("\s" in double quotes is a space)
    @ah  = '^\s*<accountHolderName>.*<\/accountHolderName>\s*\n'
    @aah = '^\s*<AdditionalAccountHolders>.*<\/AdditionalAccountHolders>\s*\n'
    @chunk = data
  end

  def reorder()
    ah_current  = @chunk.match(/#{@ah}/im)
    aah_current = @chunk.match(/#{@aah}/im)
    @chunk.gsub(/#{@aah}/im, "").
           gsub(/#{@ah}/im, "#{ah_current}#{aah_current}")
  end
end

ChunkCollection.new(ARGV[0]).reorder.show
------ Post updated at 03:26 PM ------
I would suggest this little change to vgersh solution:
Edit
My change is not needed. Even the unmodified version prior to the input data specification change (without f++ but f=1) works.
And well: That awk solution is not really that complicated.... :rolleyes:
Thanks to both Stomp and vgersh99 for spending your time and helping me out. This latest awk code works like magic and the results are exactly as expected:
awk '
/<accountHolderName/ {accH=FNR}
FNR==NR {
    if (/<AdditionalAccountHolders/) {s[accH]=(s[accH]) ? s[accH] ORS $0 : $0; f++; next}
    if (f) s[accH]=s[accH] ORS $0
    if (/<[/]AdditionalAccountHolders/) f--
    next
}
/<accountHolderName/ && s[accH] {print $0 ORS s[accH]; next}
/<AdditionalAccountHolders/,/<[/]AdditionalAccountHolders/ {next}
1' myXMLfile myXMLfile
The difference will not be noticed on a system when parsing one file.
But in a situation where you need to parse tens of thousands ...
Possibly the Ruby code can be rewritten to do better, but it is doubtful it will ever surpass the awk code in performance.
That is to say: even if two ideal coders each write the best possible program in Ruby and in awk for this one task and you start forking them at scale, awk will likely stay ahead.
So, to conclude: in my opinion higher-level languages are best used where your program needs many libraries to ease the job - connecting to multiple API endpoints, databases, versioning systems, complex math and such.
You could do all that in awk, but it would require tremendous effort and defeat the purpose of short programs that do one thing quickly and efficiently.
So if one wants speed and a low memory footprint, one can tune a lot in [high-level-programming-language] or just take awk.
In terms of system calls Ruby won here (probably because of the double reading of the file with awk), but it's 10 times slower. I think Ruby carries some base bloat whereas awk is very lean, so the more complex the task, the less relevant that base bloat becomes.
Today's OSes and filesystems are smart: they cache, prefetch, and do similar math magic drawn from probability and combinatorics.
So far and so deep into the hardware that they will even give you other users' data when asked nicely.
Filesystems will cache the first 10 MB read, so a second read will be amazingly fast(er).
Be sure to take the above into consideration during testing.
This was not done to compare Ruby and awk per se, just to point out not to limit yourself to a certain path, but to use the right tool for the task.
As for the strace options, I read the manual a bit beforehand to find an option, since I was sure the GNU tools can produce nicely formatted output without much effort.
Of course. I assume the cache is voiding any significant raw read times here. I could create additional processing overhead by reading in portions that are too small, or improve performance by reading larger chunks. This is good, because the times here are now processing times only.
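One cheap way to keep the cache out of the numbers is a throwaway warm-up read followed by timing only the processing. A stand-in sketch with Ruby's Benchmark module - the file contents and the workload here are placeholders, not the actual test setup:

```ruby
require 'benchmark'
require 'tempfile'

# Build a small throwaway input file so the sketch is self-contained.
file = Tempfile.new('data')
file.write("line\n" * 100_000)
file.close

File.read(file.path) # warm-up pass: pull the file into the page cache

elapsed = Benchmark.realtime do
  # the processing under test -- here just a stand-in line count
  File.foreach(file.path).count
end
puts "processing took #{elapsed.round(4)}s"
```

With the warm-up in place, repeated runs measure processing time rather than disk time.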
My curiosity here is NOT "the right tool for the right job" at the moment. My point is: is [some high-level-programming-language] too bloated to compete with awk on this single task in terms of speed? And if so, how far behind is it?
I already tested the same algorithm the awk script uses here in Ruby. It's roughly 3 times faster than my chunk-based version (still 2-3 times slower than awk), but far less elegant than the awk code. That's a first interesting insight, along with the realization that line-based processing seems to be a lot faster than my chunk-based processing. I also have an idea which of my code parts are worse, and it is good to see how much difference those "little" things actually make.
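For reference, that line-based two-pass idea can be sketched roughly like this, mirroring the awk logic (tag names as used in this thread; my actual test code differs in details):

```ruby
# Two-pass, line-based sketch of the awk approach: pass 1 collects each
# <AdditionalAccountHolders> block keyed by the index of the most recent
# <accountHolderName> line; pass 2 reprints the lines with those blocks
# moved to directly after their name line.
def reorder_lines(lines)
  blocks = Hash.new { |h, k| h[k] = [] }
  acc = nil
  inside = false
  # pass 1: remember which name line each additional-holders block follows
  lines.each_with_index do |line, i|
    acc = i if line.include?('<accountHolderName')
    inside = true if line.include?('<AdditionalAccountHolders')
    blocks[acc] << line if inside
    inside = false if line.include?('</AdditionalAccountHolders')
  end
  # pass 2: emit lines, dropping the original blocks, inserting after names
  out = []
  skip = false
  lines.each_with_index do |line, i|
    skip = true if line.include?('<AdditionalAccountHolders')
    unless skip
      out << line
      out.concat(blocks[i]) if blocks.key?(i)
    end
    skip = false if line.include?('</AdditionalAccountHolders')
  end
  out
end

sample = [
  "<accountHolderName>JOHN DOE</accountHolderName>",
  "<accountNumber>1</accountNumber>",
  "<AdditionalAccountHolders>JANE DOE</AdditionalAccountHolders>",
]
puts reorder_lines(sample)
```

Unlike my chunk version, this touches each line once per pass with no regex interpolation, which is likely where most of the speed difference comes from.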