The cleanest way is to restore from backup and proceed. Furthermore, I suggest investigating what happened to cause this problem: normal operation doesn't lead to this type of error. One conceivable common cause is a failing disk whose "hdisk type 4" errors have been ignored for too long and now manifest as a "hdisk type 3" error (which is permanent: replace the disk).
In the absence of a backup (which should prompt some grave remodeling of your backup strategies and processes) you might try what the article suggests. You need to locate just one good copy of the superblock and copy it over the damaged one. Because the superblock is a central data structure of a filesystem, you cannot mount the filesystem or do anything else with it as long as this is not corrected.
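For classic JFS, a backup copy of the superblock is kept at block 31 of the logical volume, so the usual repair is a single dd that copies block 31 over the primary superblock at block 1. A minimal sketch, demonstrated on a scratch file standing in for /dev/fslv06 so nothing real gets touched (the 4 KB block offsets are the standard JFS ones; verify your filesystem type before running anything like this against a real LV):

```shell
# Scratch file standing in for the logical volume.
LV=./fslv06.img
dd if=/dev/zero of="$LV" bs=4k count=64 2>/dev/null

# Simulate an intact backup superblock at block 31 (4 KB blocks).
printf 'GOOD_SUPERBLOCK' | dd of="$LV" bs=4k seek=31 conv=notrunc 2>/dev/null

# The actual repair: copy the backup superblock (block 31)
# over the primary superblock (block 1).
dd if="$LV" of="$LV" bs=4k count=1 skip=31 seek=1 conv=notrunc 2>/dev/null

# Verify: block 1 now carries the same contents as block 31.
dd if="$LV" bs=4k count=1 skip=1 2>/dev/null | head -c 15
```

On the real system the equivalent would be `dd count=1 bs=4k skip=31 seek=1 if=/dev/fslv06 of=/dev/fslv06`, followed by a full fsck of the filesystem. This applies to JFS only; JFS2 lays out its superblock differently.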
root@omega /home/root >dd if=/dev/fslv06 of=/dev/null bs=1024k
dd: 0511-051 The read failed.
: The specified device does not exist.
0+0 records in.
0+0 records out.
May I ask if there is any device at all? What does "lsvg -l <vgname>" say?
And another thing: have you issued this command exactly as you have written:
lquerypv -h /dev/hdx 1000 100
??
Of course you need to run "lquerypv" not against the placeholder "/dev/hdx" but against the device in question, "/dev/fslv06", yes?
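For reference, `lquerypv -h /dev/fslv06 1000 100` hex-dumps 0x100 bytes starting at offset 0x1000, which is where a JFS2 superblock (magic "J2FS") lives; that the filesystem here is JFS2 is an assumption. Since lquerypv exists only on AIX, here is the same inspection done portably with od, on a scratch file that carries the magic at the right offset:

```shell
# Scratch file standing in for /dev/fslv06, with the JFS2 magic
# planted at offset 0x1000 to show what a healthy dump looks like.
IMG=./fslv06-sb.img
dd if=/dev/zero of="$IMG" bs=4k count=2 2>/dev/null
printf 'J2FS' | dd of="$IMG" bs=1 seek=4096 conv=notrunc 2>/dev/null

# Dump 16 bytes at offset 0x1000 (4096); a healthy JFS2 superblock
# starts with the bytes 4a 32 46 53 ("J2FS").
od -A x -t x1 -j 4096 -N 16 "$IMG"
```

If the lquerypv dump at that offset shows zeros or garbage instead of the magic, the superblock really is damaged.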
One more thing: please present your data COMPLETELY, with all (diagnostic) output. "Didn't work" is less than no information at all. Since you, sitting in front of your screen and seeing everything, are not able to diagnose your problem correctly, how are we supposed to do it with even less information than you have? "lquerypv" must have produced some output, at least a diagnostic message. Post that instead of "didn't work", which can mean anything and then some.
Bakunin, this array no longer contains any important information. I recreated the databases in another filesystem, so I don't need it any more. By the way, it is a 400 GB RAID 5 and I want to recover that space.
Yesterday I was "playing" and now I have destroyed everything and need to create it again.
But I can't see the device.
Yesterday:
root@omega /home/root >lspv
hdisk0 0000136a0c238af1 rootvg active
hdisk1 0000136a0c696b1f None
hdisk2 0000136a77be4160 vg05 active
hdisk3 0000136ac7b2c974 None
lsdev -Cc disk
hdisk0 Available 04-08-00-5,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 04-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 0A-08-02 MPIO Other DS4K Array Disk
hdisk3 Available 0A-08-02 MPIO Other DS3K Array Disk
root@omega /home/root >rmdev -dl hdisk3
hdisk3 deleted
root@omega /home/root >cfgmgr
root@omega /home/root >lsdev |grep hdisk
hdisk0 Available 04-08-00-5,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 04-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 0A-08-02 MPIO Other DS4K Array Disk
Now:
lsdev -Cc disk
hdisk0 Available 04-08-00-5,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 04-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 0A-08-02 MPIO Other DS4K Array Disk
I have no idea how to create hdisk3 again (at the storage level everything is OK).
OK, but in this case the discussion about reviving superblocks is moot: you can't revive a superblock on a FS located on a disk the system cannot see. The question is not "how can I repair the superblock?" but "how do I get my disk back?". Only when that is done can we find out whether the superblock needs to be repaired and, if so, how that can be done.
You cannot "create hdisk3" because hdisk3 is a device file created by a driver as an interface to a device it manages. As far as I can see you probably have a storage box (DS4K, DS3K) connected via some FC switch, and you probably have FC cards either in your system or in a VIOS which serves the LPAR some virtual adapter.
"hdisk3" now is: the DS3K serves a LUN (a virtual "disk" construct), which is propagated via some FC connection to your system. On your system you probably have some fcsX adapters (X being some number: "fcs0", "fcs1", ...), like this:
bakunin@some-lpar # lsdev -Cc adapter
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
fcs0 Available C4-T1 Virtual Fibre Channel Client Adapter
fcs1 Available C5-T1 Virtual Fibre Channel Client Adapter
vscsi0 Available Virtual SCSI Client Adapter
vscsi1 Available Virtual SCSI Client Adapter
The driver (we use EMC storage, so our drivers differ from yours, but the big picture is the same) picks up what comes over the FC link and tells the system: "I serve you something which you can think of as a disk. For this I give you a device file and call it /dev/hdisk3, behind which I will wait, take whatever disk request you put there, and translate it into commands the real storage can understand. Don't bother looking for real existing iron, it just isn't there."
When cfgmgr runs, instead of searching for hardware it tells the driver to scan the FC line for anything only the driver understands and, if it finds something worthwhile, to report that back so that a device file can be created.
This means for you: if hdisk3 does not come back when you run "cfgmgr", the driver cannot find anything worthwhile, which means your disk is not visible to the system at all. There are several possible reasons for this, but you will have to investigate them on your own:
driver incompatibility
Maybe you need a new driver because the old one cannot find anything due to a version problem. Update (or, rarely, downgrade) the driver and try again.
zoning problems
You do not want every host connected to the storage to see every disk the storage serves. For this you create something similar to VLANs in networking: zones. A zone is basically a group of ports (host initiators and storage targets), each identified by a WWN, which is analogous to a MAC address, grouped together and allowed to see each other.
storage problems
Storage is usually not just a long list of LUNs but has several layers to help virtualize storage as much as possible. Maybe something changed in one of these layers and your LUN is no longer presented to the zone where you expect it to be.
hardware problems
Last but not least: there is complex machinery at work, which sometimes can (and, over time, is guaranteed to) fail. Check cables, ports, adapters, FC switches, maybe virtualization layers (SVC, ...) and similar things. Look at your machine's error report (issue "errpt" and see what comes up).
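The steps above can be sketched as a short checklist of standard AIX commands. The adapter name fcs0 is an assumption (take it from your own lsdev output), and the script guards the AIX-only commands so it exits harmlessly on any other host:

```shell
# Hedged checklist for a LUN that vanished; adjust fcs0 to your adapter.
if command -v cfgmgr >/dev/null 2>&1; then
    cfgmgr                          # rescan: asks the drivers to probe the FC links
    lsdev -Cc disk                  # did hdisk3 come back?
    lsdev -Cc adapter | grep fcs    # which FC adapters does the LPAR have?
    fcstat fcs0 | head -20          # link state and error counters of the adapter
    errpt | head -20                # recent error report entries, newest first
else
    echo "not an AIX host - run this on the affected LPAR"
fi
```

If errpt shows link or adapter errors, start with cables and switch ports; if the links are clean, move the investigation to zoning and the storage box itself.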
Bakunin, now I understand that the problem may be in the storage (I had problems with that storage before).
I destroyed the array and recreated it (the rebuild is running now and may take more than an hour). I mapped it to my server, and when it finishes I will rerun cfgmgr to see if anything new shows up.
I'll post when I have news (today, of course).
Regards!