Install Solaris 11 with ZFS

Hi,

I'm not an expert with Solaris; I'm only familiar with Linux variants. Could anyone point me to the right tutorial?

I found one >

but I'm not sure whether it can be used or not.

I'm doing a fresh install on a new server. The server specs haven't come out yet. Please assist me.

Thanks.

Are you installing the freebie Solaris on x86? Please tell us more about your hardware.

Server Spec:

SPARC T4, 8-core, 2.85 GHz
128 GB memory
600 GB HDD x4
Oracle Solaris and Oracle VM Server for SPARC preinstalled (factory installation)

Can you clarify your question?
You wrote you want the right tutorial, but for doing what precisely?
To install Solaris 11 with ZFS: any method you like, given that Solaris 11 cannot be installed without ZFS.
To set up root pool mirroring: the document you linked is outdated, as it relates to a now obsolete pre-Solaris 11 release. Have a look at How to Configure a Mirrored Root Pool (SPARC or x86/VTOC) - Managing ZFS File Systems in Oracle® Solaris 11.2.
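
For reference, on a system where the root pool currently sits on a single disk, mirroring it usually boils down to something like this (the disk names below are placeholders, not your actual devices):

# attach a second device to the existing root pool disk, turning it into a two-way mirror
zpool attach rpool c0t0d0s0 c1t0d0s0

# wait for the resilver to complete before relying on the mirror
zpool status rpool

On recent Solaris 11 releases the boot blocks should be applied to the new mirror side automatically when it is attached; the linked document covers the details and the older installboot procedure.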

You have several choices:

RAID controllers, if you have them - this will not take advantage of ZFS filesystem features, but it will simplify administration by lowering the number of devices at the OS layer, depending on the RAID protection chosen.

Disk slices for the rpool(s), mirrored between two physical disks, or even a third as a hot spare - this will take advantage of ZFS filesystem features and leave the other slices free for other purposes.

Depending on the use, if you plan Oracle VM, you might consider SVM mirroring/RAID of slices, with a single md device presented to the ldom, which runs its rpool on top (a rough sketch follows below) - this will not take advantage of ZFS features, but it means less administration overhead in case of a disk failure (remirroring of all disk slices is done with a couple of commands on the hypervisor).

I would advise against using ZVOLs as backend devices.

I've been using options 2 and 3 depending on the case, not RAID controllers.
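
Very roughly, option 3 might look like this on the control domain (all device, volume and domain names below are made-up examples, not something from a real configuration):

# create state database replicas, then build mirror d10 from two slices
metadb -a -f -c 3 c0t0d0s7 c1t0d0s7
metainit d11 1 1 c0t0d0s4
metainit d12 1 1 c1t0d0s4
metainit d10 -m d11
metattach d10 d12

# export the md device to a guest domain, which then builds its rpool on the virtual disk
ldm add-vdsdev /dev/md/dsk/d10 ldg1-root@primary-vds0
ldm add-vdisk rootdisk ldg1-root@primary-vds0 ldg1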

Hope that helps

Regards
Peasant.

Another advantage of RAID controllers, if you have them: replacing a failed disk is usually really easy with today's controllers:

Remove the failed disk, put in a new disk.

The ZFS features you wouldn't be using with hardware RAID? You'd just be giving up the ZFS software RAID.

Use the hardware RAID if you have it.


If this is supposed to be a production server that will have a long life and have to go through multiple operating system upgrades and patches, you'll also want to use a total of four disks for two completely separate ZFS root pools. Put the four disks into two separate mirrored hardware RAID arrays.

See the man page for "beadm".

Also, read this:

https://blogs.oracle.com/orasysat/entry/the_most_inviting_solaris_111

One problem with that, in my experience: using just one root pool will result in a convoluted mess of ZFS snapshots and clones as your boot environments evolve over the life of the server. But if you always create a new boot environment in a ZFS pool that's separate from the ZFS pool the source boot environment resides on, there's no mess of ZFS snapshots and clones created.

Why would you want to use boot environments? Because you can create a new boot environment, patch and upgrade the new one while your server is still running, then simply boot to the new environment. And if it fails, you just reboot back to the old one.

It's a lot more reliable than "yum upgrade". Try reverting that if it doesn't work...
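
The cycle is roughly the following (the pool and BE names are only illustrative):

beadm create -p rpool2 newbe    # new boot environment in the second root pool
beadm mount newbe /mnt          # mount it so it can be updated offline
pkg -R /mnt update              # patch/upgrade the inactive BE
beadm unmount newbe
beadm activate newbe            # make it the default for the next boot
init 6                          # reboot into it; activate the old BE again to roll back

Note that a plain "pkg update" can also create and update a new boot environment for you automatically; the manual steps above just make the separate-pool placement explicit.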

Achenle - I was referring to ZFS features such as bit rot protection and others which are only used when the zpool itself provides the RAID redundancy.
You can use the copies=2 property to achieve something similar per ZFS filesystem.
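
For example:

zfs set copies=2 rpool/export/home    # keep two copies of every block in this dataset
zfs get copies rpool/export/home

Note that the setting only applies to data written after it is set.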

Why do you think HW RAID controllers are better to use than zpool SW RAID?

We have 4 disks and are using hardware RAID, but I don't know how it is configured. How do I access the hardware RAID manager? I attached a monitor but it doesn't show a BIOS. I did some research on the net but couldn't find any related document about accessing the hardware RAID.

What if I'm using stripe mode in ZFS? Can it recover if one disk fails? How do I set up RAID 10 in ZFS? I have only found complete tutorials about RAID 5 in ZFS.

This machine will be installed with Oracle VM on top of Solaris 11. Is Oracle VM installed from the command line?

There is no BIOS on SPARC hardware but an OpenBoot PROM.

Have a look at this page for RAID configuration.
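
If the disks sit behind the onboard LSI controller, raidctl is the usual way to check from the running OS (controller and volume names will differ on your machine):

raidctl -l           # list RAID controllers and any volumes they hold
raidctl -l c0t0d0    # details of a given RAID volume, if one exists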

You cannot use striping or RAID10 with the root pool, only mirroring is supported.

Oracle VM is not on top of Solaris 11; it's the other way around, as the hypervisor is built into the hardware with the UltraSPARC T series.


How about raidz? Is it the same as RAID 5? Can I use raidz on a SPARC T4?


Is this stripe mode?

https://community.oracle.com/thread/2488541?tstart=0

Is it possible for me to do this on a SPARC T4 machine? Right now I can see 1 disk attached to the zpool. The other 3 are not formatted yet.

It is not the same, although it shares some similarities.

Yes, but not for the root pool.

Precisely.

You can create a second pool based on raidz with the three remaining disks if you are looking for both redundancy and capacity.
You can create a stripe with these disks if you are looking for maximum capacity.
For better reliability and performance, you can also use the first of these disks as a mirror for the root pool and use the remaining two disks as another mirrored pool.
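
As an illustration, with generic placeholder names for the three spare disks (substitute whatever the format command reports on your machine):

# redundancy and capacity: single-parity raidz over the three spare disks
zpool create datapool raidz c0t1d0 c0t2d0 c0t3d0

# maximum capacity, no redundancy: a plain stripe
zpool create datapool c0t1d0 c0t2d0 c0t3d0

# or: one disk attached as a root pool mirror, the other two as a mirrored data pool
# (the root pool attach may need slice 0, e.g. c0t0d0s0, if the pool uses an SMI/VTOC label)
zpool attach rpool c0t0d0 c0t1d0
zpool create datapool mirror c0t2d0 c0t3d0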

You are also free to use UFS with these extra disks, or just use them or partitions on them as raw devices for ldoms or kernel zones.

Data is already available on disk 1. If I add a new disk to the root pool as a stripe, is there any impact?

As already stated, you cannot add a disk to stripe the root pool.
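
To illustrate the difference, with placeholder device names:

zpool add rpool c0t1d0              # rejected: this would turn the root pool into a stripe
zpool attach rpool c0t0d0 c0t1d0    # supported: this mirrors the root pool instead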

Is there any way to quickly format the new disks?

ZFS disks do not need to be formatted.

root@solaris:~# zpool create stripe c0t5000CCA0568AF944d0 c0t5000CCA0568CFAC4d0
vdev verification failed: use -f to override the following errors:
/dev/dsk/c0t5000CCA0568AF944d0s0 contains a ufs filesystem.
/dev/dsk/c0t5000CCA0568AF944d0s6 contains a ufs filesystem.
Unable to build pool from specified devices: device already in use

If you are sure you can drop these ufs file systems, add the `-f` flag (force).
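
That is, something along the lines of:

zpool create -f stripe c0t5000CCA0568AF944d0 c0t5000CCA0568CFAC4d0

The -f flag destroys whatever is on those devices, so double-check that the old UFS data is no longer needed first.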

If I want to add another disk, I would use this command:

zpool add stripe c0t5000CCA05687FB84d0

Correct?

I have formatted a disk using a low-level format. What happens if I cancel it? I want to add this disk to the current ZFS file system. Will any problem occur?