Is there any way to translate a file name to the underlying file system's disk blocks/sectors/extents on UFS (Solaris on SPARC)?
I found several ways to do this on Linux filesystems like ext2/3/4, using commands such as hdparm --fibmap and filefrag.
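For reference, the Linux tools mentioned above are built on the FIBMAP ioctl, which maps a logical block index of a file to the physical filesystem block backing it. Here is a minimal sketch of that call from Python; FIBMAP requires root and a filesystem that supports it (ext2/3/4 do — this does not apply to UFS on Solaris, where the ioctl does not exist):

```python
import array
import fcntl
import os

FIBMAP = 1  # from <linux/fs.h>: logical block in, physical block out

def physical_block(path, logical_block=0):
    """Return the physical filesystem block number backing the given
    logical block of the file, or 0 for a hole.  Linux-only, needs root."""
    fd = os.open(path, os.O_RDONLY)
    try:
        buf = array.array('i', [logical_block])
        fcntl.ioctl(fd, FIBMAP, buf, True)  # kernel overwrites buf in place
        return buf[0]
    finally:
        os.close(fd)
```

Multiply the result by the filesystem block size (and divide by the sector size) to get the on-disk sector, which is what filefrag -b512 reports.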
I also found an equivalent tool for UFS, a plugin called filestat, but I don't want to run a downloaded program without seeing its source code first, because I need to use this solution on customers' servers too.
Any other ideas on how to get the underlying disk blocks for a file?
If you want something whose source code you can inspect, I suspect a dtrace script tracing a full read of the file would help. You would need to flush the cache first, though, so that the read actually goes to disk.
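As a rough illustration of that idea, a script using DTrace's io provider can log the block numbers of the physical I/O issued while you read the file (e.g. with dd). The field names below (b_blkno, fi_pathname) come from the io provider's bufinfo_t and fileinfo_t structures; treat this as a starting sketch, not a finished tool:

```d
#!/usr/sbin/dtrace -s
/* Log block numbers of disk I/O for one file.
   Run as root, passing the file name as the first script argument. */
io:::start
/args[2]->fi_pathname == $$1/
{
    printf("%s: blkno %d, %d bytes\n",
        args[2]->fi_pathname, args[0]->b_blkno, args[0]->b_bcount);
}
```

Note the caveat above: if the file is already cached, no io:::start probes fire, so you must defeat the page cache first (remount, or read the file with directio enabled).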
I need it so I can find the disk blocks a file occupies and then read them straight from the disk afterwards.
I was actually hoping for a solution other than trying to get the source code for filestat. Maybe some other Solaris command or API that can give the same result.
I have a big-endian server that writes files to a LUN hosted on a little-endian machine. I then need to read those files from a third little-endian server (I can't change that setup). I thought that if I record the physical disk blocks each file occupies and pass that list to the little-endian server, the third server could read those blocks directly, avoiding the need to somehow interpret the big-endian filesystem metadata on a little-endian machine.
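One point worth separating out in that plan: the file *contents* are just bytes and read back identically on any host; only multi-byte filesystem metadata fields (block pointers, sizes) are byte-order sensitive. A tiny sketch of the difference, using a hypothetical 32-bit metadata field as an example:

```python
import struct

# A 32-bit metadata field as the big-endian (SPARC) host writes it to disk.
raw = struct.pack('>I', 4096)          # bytes 00 00 10 00

naive   = struct.unpack('<I', raw)[0]  # little-endian host reading it natively
correct = struct.unpack('>I', raw)[0]  # explicit big-endian decode

# naive is 1048576 (wrong); correct is 4096.
# File payload bytes, by contrast, need no swapping at all.
```

So if the third server only ever reads payload blocks by number, endianness never enters into it; the swap problem exists only for whoever parses the UFS structures.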
Reading raw disk blocks is going to be a lot more effort and a lot less reliable than arranging a better way to get at the file, such as an NFS share, a batch download, etc.