Sorry to butt in, but as an aside to your main question, I'd be far more worried about this before thinking about doing anything else:
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH    ALTROOT
oradata   500G   473G  27.5G  94%  1.00x  DEGRADED  -
The oradata pool is in a degraded state, which means it has experienced a failure of some kind, so its redundancy is likely to be severely reduced or non-existent. As a priority, you probably want to run zpool status -v oradata to see what has actually gone wrong, and sort that out.
Of course, if the oradata pool is no longer in use, or you otherwise have reason not to care about its data, then that's fine I suppose. But I thought I'd mention it, because if this were my system and the pool were live, I'd want to sort it out urgently before doing anything else.
They are not using that at the moment. The output of the status command is:
zpool status -v oradata
pool: oradata
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scan: none requested
config:
NAME                     STATE     READ WRITE CKSUM
oradata                  DEGRADED     0     0     0
  c4t500A09828DE3E799d0  DEGRADED     0     0     0  too many errors
errors: No known data errors
root@solaris:~#
OK. Well, if you're 100% sure you don't care about what happens to the data on that pool, then I suppose there's nothing you need to do about it.
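For the record, if you ever did want to act on it, the status output itself names the two options. A hedged sketch using the pool and device names from your output (the replacement device name is left to you):

```shell
# If the errors were transient (e.g. a brief path loss), clear the error
# counters and let ZFS carry on with the existing device:
zpool clear oradata

# Optionally force a full re-verification of all data afterwards:
zpool scrub oradata

# If the disk itself is failing, replace it instead (substitute a real
# new device name -- this one is only a placeholder):
# zpool replace oradata c4t500A09828DE3E799d0 <new-device>
```

zpool replace resilvers the old device's contents onto the new one, so it is the non-destructive path when the hardware is actually bad.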
Returning to your original question: firstly, I have to say I'm relatively new to ZFS myself, having only recently come back to Solaris on a regular basis. But from what I've learned so far, the first thing is (as per the earlier advice from DukeNuke2) to figure out what kind of pool you've got. The Oracle documentation he linked you to this morning is also excellent, and covers pretty much everything you need to know about how to add things.
So take a look at the output of zpool status -v oradata1, find out whether it's made of mirrors or some kind of RAID and what devices are in it; once you know that, you'll be able to determine the best way to add a device to it.
You are strongly advised to leave more free space in active ZFS pools: otherwise performance will suffer badly, both from the system hunting for free space and from the data fragmentation that results from the lack of contiguous areas. Not exceeding 80% capacity is a safe bet.
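The capacity column of zpool list is enough to keep an eye on this; a quick sketch using the pool names from this thread:

```shell
# CAP shows the percentage of the pool that is allocated; above roughly
# 80%, ZFS has to work much harder to find contiguous free space.
zpool list -o name,size,alloc,free,cap oradata oradata1
```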
Back to the topic, the "long" name reported for the oradata disk (c4t500A09828DE3E799d0) suggests a storage array is used to build this pool.
Expanding the oradata1 pool might then be achieved simply by enlarging the underlying LUN. Depending on your ZFS settings, nothing more might be required.
You might want to post the output of these commands so we can get a better idea of your configuration:
cat /etc/release
zpool status oradata1
zpool get autoexpand oradata1
cfgadm -al -o show_SCSI_LUN c4
Please find the output of the requested commands below:
root@solaris:~# cat /etc/release
Oracle Solaris 11 11/11 SPARC
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Assembled 18 October 2011
root@solaris:~#
root@solaris:~# zpool status oradata1
pool: oradata1
state: ONLINE
scan: none requested
config:
NAME                       STATE   READ WRITE CKSUM
oradata1                   ONLINE     0     0     0
  c5t500A09819DE3E799d1s6  ONLINE     0     0     0
errors: No known data errors
root@solaris:~#
root@solaris:~# zpool get autoexpand oradata1
NAME      PROPERTY    VALUE  SOURCE
oradata1  autoexpand  off    default
root@solaris:~#
It now looks pretty much certain that you're using a SAN or some other kind of managed storage solution at the back-end here, rather than internal disks or other simple local storage. As per jlliagre's reply earlier, using that storage solution to enlarge the underlying LUN would be the best way forward here in all likelihood.
What is it you're using as a storage solution in this platform? Whatever it is, it almost certainly provides a way to enlarge a LUN by one means or another. Once that's been done, you can look at growing the ZFS pool to fill the newly available space.
OK. I'm not familiar with NetApp storage solutions myself, but no doubt there will be some management interface you can use to administer it. If NetApp lets you non-destructively enlarge the underlying LUN without interrupting normal operation (you need to be totally sure about both aspects of that before doing this with the filesystem still mounted), then once you've enlarged the appropriate LUN you can come back to expanding the ZFS pool.
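Once the LUN has been grown on the NetApp side, the usual Solaris-side steps look roughly like this. A hedged sketch, assuming the pool and device names shown earlier in the thread; note that because oradata1 sits on a slice (s6) rather than a whole disk, the slice itself may also need growing with format before ZFS can see the extra space:

```shell
# Ask Solaris to re-discover LUNs on the relevant FC controller so the
# new size becomes visible (show_SCSI_LUN is the cfgadm_fp option):
cfgadm -al -o show_SCSI_LUN c5

# Then tell ZFS to expand the pool device onto the newly visible space;
# the -e flag expands the device to use all available capacity:
zpool online -e oradata1 c5t500A09819DE3E799d1s6
```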
I don't have access to the NetApp CLI; the NetApp admin does that for me. On my side (operating-system-wise), if I am going to replace the current oradata1 disk with a new disk, would I not be destroying the current data on oradata1?
There is an excellent guide on the Oracle site here - - at least as I read it (but I've been known to be wrong). I can't really understand the logic of a single disk in a zpool, as there are no data replicas available to give you the benefits of ZFS; you might as well use UFS, where at least you have more recovery tools.
Obviously the best approach would be to grow the original LUN (or have the SAN admin people do it), then grow the filesystem into it. In any case, make sure you have a backup of the data before you start doing anything.
A single-disk pool is poor practice, but unfortunately widespread when storage arrays are used. ZFS is still beneficial with a single-disk pool, though.
There is double or triple redundancy in the metadata, which makes the file system resilient to moderate disk corruption.
You can also enable ditto blocks (the copies property) so the data itself can survive some bad disk blocks.
ZFS will immediately spot corrupted data or metadata, while UFS will return corrupted data without notice and might panic the OS if metadata is corrupted.
Finally, the claim that you have more recovery tools with UFS is questionable. With ZFS you can recover from situations where UFS would be helpless.
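The ditto-block behaviour mentioned above is set per dataset; a minimal sketch (the dataset name oradata1/db is hypothetical, substitute your own):

```shell
# Store two copies of every data block so single-block corruption on the
# one underlying disk can still be self-healed. Note this only applies
# to data written after the property is set:
zfs set copies=2 oradata1/db

# Confirm the setting:
zfs get copies oradata1/db
```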
I would never build a zpool without the following conditions being met:
1. Multipathing on the NetApp AND the Solaris box configured and working correctly.
2. At a minimum, 2 LUNs in a zpool mirror; a RAID-Z would be better.
If these conditions are met, you'll probably never lose data. While it's all fine and good to say that NetApp handles the redundancy using huge aggregates, all that really buys you is insulation from hard-disk failure, not from corruption or loss of connection. The zpool is likely in a degraded state due to loss of comms or poor network performance, hence the multipathing suggestion.
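On the Solaris side, the multipathing state can be checked with mpathadm; a quick sketch (the logical-unit path shown is illustrative, taken from the device names in this thread):

```shell
# List multipathed logical units and the path count for each:
mpathadm list lu

# Show detail (operational path count, failover mode) for one LU --
# substitute a real logical-unit path from the listing above:
# mpathadm show lu /dev/rdsk/c5t500A09819DE3E799d1s2
```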
To answer your specific question, though: the way I have handled this in the past is to grow the LUN and then just turn on autoexpand; that will grow the zpool to match the new LUN size.
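That last step is a one-liner; a minimal sketch using the pool name from this thread (your earlier output showed autoexpand is currently off):

```shell
# Let the pool grow automatically whenever its underlying LUN grows:
zpool set autoexpand=on oradata1

# After the NetApp admin has grown the LUN, confirm the new size:
zpool list oradata1
```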