Creating a Solaris 10 virtual disk (ramdisk) for the sun4v architecture (T-2000 simulator)

I have been trying to create a 2 GB virtual ramdisk to run on my T-2000 simulator (Legion), which uses the sun4v architecture. I have a SPARC workstation running Solaris 10 on the sun4u architecture.

I created a ramdisk image with the dd command, ran newfs on it, used ufsrestore to restore the filesystem dumped from the original virtual disk (disk.s10hw2, the 512 MB sun4v virtual disk that comes with the simulator), and installed the sun4v bootblk on the new ramdisk. But when I try to boot this newly created 2 GB ramdisk, it starts loading SMF services and then reports a fatal error from the fmd service daemon, after which it either drops into some sort of maintenance mode or just freezes.

The reason I need this bigger disk is that I need to install the SUNWpool package, which is not present on the original ramdisk (disk.s10hw2). I am able to load the newly created ramdisk in Legion on slice 3, but when I try to add the package (pkgadd -d /export/var/spool/pkg SUNWpool), it complains:
"Cannot find required executable /usr/bin/7za"

Can someone guide me step by step through creating a bigger ramdisk with all these packages? I am stuck; I keep trying different combinations with no luck.

I would really appreciate your help.

Hi,

If you have the ramdiskadm utility, then all you should have to do is the following (see the example after the list):

  • Create the ramdisk with: ramdiskadm -a "name" 2g
  • Then create the filesystem using newfs.
  • Then set the required boot block.
  • Finally, mount the filesystem.
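For example, a minimal sketch (untested on my side; "mydisk" and the mount point are just placeholder names):

  # ramdiskadm -a mydisk 2g                 (create a 2 GB ramdisk)
  # newfs /dev/rramdisk/mydisk              (build a UFS on the raw device)
  # installboot /usr/platform/sun4v/lib/fs/ufs/bootblk /dev/rramdisk/mydisk
  # mkdir -p /mnt/mydisk
  # mount /dev/ramdisk/mydisk /mnt/mydisk   (mount via the block device)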

I can't test any of this myself at the moment: due to changes here, all my Solaris installs are virtual and very small, creating a ramdisk would probably impact performance, and I also have a change control board to contend with.

Give the above a try and let me know how you get on.

Dave

Hello Dave,
When I tried ramdiskadm, I got the error message "Resource temporarily unavailable".
I have generally followed this procedure to create the new ramdisk (cloned from the sun4u machine):

Created a backup of the root filesystem "/" (sun4u):

# ufsdump 0cvf /dev/rmt/0 /

This backup is almost 8 GB.

Created an empty 8 GB file:

# dd if=/dev/zero of=new.img bs=1024 count=8388608
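(With bs=1024, a count of 8388608 gives 1024 × 8388608 = 8589934592 bytes, i.e. 8 GiB.)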

Attached new.img as a block device:

# lofiadm -a new.img /dev/lofi/1

Created a UFS filesystem on top of the new.img file:

# newfs /dev/rlofi/1

Mounted the new filesystem and restored the initial filesystem into it using ufsrestore:

# mount /dev/lofi/1 /mntdir
# cd /mntdir
# ufsrestore rvf /dev/rmt/0

Made the ramdisk image bootable:

# /usr/sbin/installboot /mntdir/usr/platform/sun4v/lib/fs/ufs/bootblk /dev/rlofi/1

Unmounted the image

# cd /
# umount /mntdir

Modified the 1up.conf file in Legion as follows:

  device "memory" 0x1f80000000 +8192M {  // Increase the memory size for the new disk
  virtual_disk;
  load s0 shared "disk1.big";  // replace the disk with the new one you created and use shared instead of rom
  }

Booted in Legion (sun4v architecture) using the boot -v command.

The problem is that it starts loading SMF services, and after loading all of them (89/89) it reports an fmd service error along with a bunch of error codes, then eventually drops into maintenance mode.

I need to run Solaris Resource Pool Management (pooladm, poolcfg, etc.), which requires the SUNWpool package. The original ramdisk is too small and does not have this package installed. Even when I installed the package on the original ramdisk by placing SUNWpool in /var/spool/pkg, the installation succeeded, but I was unable to start the pools daemon (poold); pooladm fails with:

pooladm: couldn't open pools state file: No such file or directory
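As far as I understand from the pooladm man page, that state file (/etc/pooladm.conf) should only appear once the pools facility has been enabled and the active configuration saved, which I would expect to be:

  # pooladm -e    (enable the resource pools facility)
  # pooladm -s    (save the active configuration to /etc/pooladm.conf)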

I am not completely sure what I am doing wrong in this procedure. :confused:

Hi,

I'm getting the resource issue as well; unfortunately, I don't really have time to investigate at the moment.

However, there may be a much simpler way of doing this, since you are using UFS on these systems. Check out the following if you have time and a good backup :)

Change the configuration of the existing ramdisk in the Legion config to the size that you require. I assume that it will still show up as a 2 GB drive, as there will be additional parameters that have to be configured to make the extra space active.

Then it may be possible to use the growfs command to expand the filesystem into the allocated space; you may want to try that on a copy made using dd first.
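Something along these lines, purely as a sketch (the appended size, image name, and lofi device number are just examples):

  # dd if=/dev/zero bs=1024k count=1536 >> disk1.big   (append extra space to the copied image; 1.5 GB here)
  # lofiadm -a /path/to/disk1.big                      (attach it; lofiadm prints the device it assigned)
  # growfs /dev/rlofi/1                                (expand the unmounted UFS into the new space)
  # lofiadm -d /dev/lofi/1                             (detach when done)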

Regards

Dave