How to clear a removed single-disk pool from being listed by zpool import?

On an OmniOS server, I removed a single-disk pool I was using for testing.

Now, when I run zpool import, it shows the pool as FAULTED, since that single disk is not available anymore.

# zpool import
   pool: fido
     id: 7452075738474086658
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        fido                     FAULTED  corrupted data
          c1t0025385971B16535d0  UNAVAIL  corrupted data
 

Since the disk is not there, I cannot destroy the pool.

I tried deleting /etc/zfs/zpool.cache and rebooting, with no success; the entry is still there.
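
Concretely, that attempt was nothing more than:

# rm /etc/zfs/zpool.cache
# reboot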

I also cannot re-attach the disk in order to destroy the pool, since it is not physically in my possession anymore.

How can I clear that entry so that it is no longer considered by zpool import?

I read the zpool man page and Solaris documentation pages, like this one, but I could not find any hint.

Use this to destroy the pool

# zpool destroy <pool name>

If it objects to that (and you really mean to nuke it) use the force option

# zpool destroy -f <pool name>

Does that not work?

Thank you, I did try that:

# zpool destroy -f fido
cannot open 'fido': no such pool

But the issue is that the pool is not among the available ones.

# zpool list -v
NAME                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool                       476G   214G   262G        -         -     0%    44%  1.00x  ONLINE  -
  c1t0025385971B16535d0     476G   214G   262G        -         -     0%    44%

It is not even importable, as shown in my previous post:

The pool cannot be imported due to damaged devices or data.

because the physical device is not connected anymore.

Yes, so the 'pool' is gone.

So what are you trying to do??? Use that disk for something else and it won't let you??

I'm not understanding the problem.

I would like to remove the "fido" entry when running zpool import .

Right now the command lists the missing "fido" pool, and I can't remove that stray entry, even by deleting zpool.cache.

Can you -f (force) the import, and then -f (force) destroy?

Unfortunately I can't re-attach the device and force the import, because the device is long gone. It was a single-dev pool, too.
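
For completeness, the import can also be attempted by pool GUID instead of name (the id shown in the zpool import output above), but with the pool's only device missing it is expected to fail the same way:

# zpool import -f 7452075738474086658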

# zpool clear fido

???

Try:

zpool export fido
devfsadm -Cv

Now check if zpool import still complains.

Regards
Peasant.

Thank you very much @hicksd8 and @Peasant for suggestions.

Unfortunately, neither # zpool clear fido nor
zpool export fido; devfsadm -Cv

helped; I'm still getting the same ghost entry from zpool import, exactly as in the first post.

I think that's because the single-device pool named "fido" isn't attached, so zpool commands cannot affect it.

I also tried zpool labelclear -f fido, but it doesn't work, I believe for the same reason.
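
As far as I can tell, zpool labelclear expects a device path rather than a pool name anyway, so even its usual form needs the disk to be present (the device path below is just a placeholder):

# zpool labelclear -f /dev/rdsk/c9t9d9s0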

Last night I wondered: if the disk isn't even attached, where does zpool import get that ghost information?

I dug further with the zdb command, which revealed this "label" on disk c1t0025385971B16535d0, which is the server's boot disk:

# zdb -l /dev/rdsk/c1t0025385971B16535d0
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'fido'
    state: 0
    txg: 30770
    pool_guid: 7452075738474086658
    hostid: 647188743
    hostname: ''
    top_guid: 7525102254531229074
    guid: 7525102254531229074
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 7525102254531229074
        path: '/dev/nvd0p3'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 32
        ashift: 12
        asize: 509746872320
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3
 

So, zpool import seems to read that stray information on the boot disk, related to a long-gone pool, and still believes the pool is available on the system.
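
To narrow down where that leftover label actually sits, zdb -l can also be pointed at individual slices of the boot disk (slice names are assumptions here; s0 is typically the rpool data slice on an EFI-labelled disk):

# zdb -l /dev/rdsk/c1t0025385971B16535d0s0

If only the whole-disk node shows the 'fido' label, the stale data presumably lives at the very end of the raw disk, outside the slice rpool actually uses.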

One solution would be to perform a complete reinstall of that machine, wiping the boot disk completely with dd before the install.
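
By "wiping completely" I mean something along the lines of zeroing the whole disk from the installer's shell or another boot medium, never from the running system (device path as reported above; block size is just an example):

# dd if=/dev/zero of=/dev/rdsk/c1t0025385971B16535d0 bs=1024k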

Before doing that, would you know if it's possible at all to safely clear such a stray label (as shown by zdb) from a boot disk?

I'm not sure I follow...

So the system is now installed on c1t0025385971B16535d0 (using the whole disk), and zpool import complains that this same disk is part of a fido zpool, which in turn contains the same device c1t0025385971B16535d0?
This is quite strange.

Can we see the output of:

zpool status rpool

Also, the output of the format command, printing the partitions of that disk, would be helpful.

Regards
Peasant.

I completely reinstalled OmniOS, and I was able to replicate the issue with these steps:

  • Install a second NVMe drive
  • Install FreeBSD on that and boot from it
  • Boot again, this time from OmniOS
  • Import the FreeBSD pool (on second NVMe)
  • Power off without exporting it (roughly as in the sketch after this list)
  • Remove FreeBSD NVMe from server
  • Boot from OmniOS
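
The import-without-export step, run on OmniOS while the FreeBSD NVMe was still installed, was essentially this (pool name taken from the label above; -f is needed because the pool was last written by another system):

# zpool import -f fido
# poweroff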

At that point, zpool import shows the same message as in the original post.

In that state, there is nothing one can do to remove the stray label from the boot disk.

Although it is not a real bug, I reported it to the Illumos devs as a feature request, i.e., the ability to remove stray leftover labels caused by the steps above.

I am adding the "solved" tag, and I will report back with updates as soon as I have them.

Thanks to all for the help and advice!