Undeletable file

If find doesn't work, no other user-mode application is going to work. The filename is invalid and the kernel refuses to touch it. You need a disk editor.

Like MadeInGermany, I'm not a Mac user either, so from here on I'm going to talk generic Unix/Linux only. You will need a Mac expert to guide you if you want to use anything I say now.

Many Unix/Linux OSes implement an often undocumented command called clri which will destroy an inode (by writing zeros to it). A nuclear option. A quick search on Google tells me that MacOS implements this command too. I also see that it implements fsck_hfs.

Therefore my final nuclear option on an OS I'm expert on would be:

  1. Ensure that you have just completed a backup of the filesystem (and preferably the whole system) and know how to restore it if this goes wrong. Keep users off afterwards.
  2. ls -li has given you the inode number of "zombie", so run clri to nuke it. BE CAREFUL to specify the correct filesystem on the command line if you have more than one filesystem, otherwise you could zap the wrong inode. I cannot give you the MacOS syntax; try man clri to see if it's officially documented. (See the sketch after this list.)
  3. Once the inode is nuked, go into single-user mode and run fsck_hfs. The allocated blocks for "zombie" should show as "missing blocks" and the utility should ask for your okay to fix the superblock.
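
Purely as an illustration of what steps 2 and 3 might look like on a BSD-flavoured system (the device name and inode number below are placeholders, and the exact MacOS syntax must be checked against the clri and fsck_hfs man pages first):

ls -li /Volumes/Problem/dir        # step 2: note the inode number of "zombie"
sudo clri /dev/diskXsY 123456      # nuke that inode (classic BSD syntax: clri <device> <inode> ...)
sudo fsck_hfs -fy /dev/rdiskXsY    # step 3: full check/repair, ideally from single-user mode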

Now I repeat, this is a generic suggestion and you should await input from a MacOS expert here as to whether they think this method is a goer. I take no responsibility whatsoever, but it is how I would fix such a problem on some Unix/Linux filesystems.

This will be my last post to this thread because I'm out of my comfort zone.


MadeInGermany and hicksd, thank you very much. As advised, I will proceed with extreme caution.

If there are any MacOS experts listening, I'd appreciate your input.

cheers,
dp

This is also my last attempt.
YOU USE THIS AT YOUR OWN RISK!
I discovered that the Windows CP-1252 code page does indeed contain these characters as single bytes.

#!/bin/bash
# Note: the $'...' quoting and the <<< here-string below are bash features.

# Windows-1252 code page 8-bit byte values for the non-ASCII characters in the name...

NULLS=$'\x80\x80'    # two 0x80 bytes (show up as nothing/garbage in most terminals)
REG=$'\xAE'          # 0xAE = registered-trademark sign, (R)
TM=$'\x99'           # 0x99 = trademark sign, (TM), in CP-1252
FILENAME="${NULLS}"'Word Finder'"${REG}"' Plus'"${TM}"

# Should be 20 characters long.
echo "${#FILENAME}"

echo "${FILENAME}"

hexdump -C <<< "${FILENAME}"

##### !!!YOU USE BELOW AT YOUR OWN RISK!!! #####
# rm -f "${FILENAME}"

Example printout:

Last login: Fri Jul 19 17:52:27 on ttys000
AMIGA:amiga~> cd Desktop/Code/Shell
AMIGA:amiga~/Desktop/Code/Shell> ./Invalid_Filename.sh
20
??Word Finder? Plus?
00000000  80 80 57 6f 72 64 20 46  69 6e 64 65 72 ae 20 50  |..Word Finder. P|
00000010  6c 75 73 99 0a                                    |lus..|
00000015
AMIGA:amiga~/Desktop/Code/Shell> _

REMEMBER! You are messing with a file that is NOT native to OSX 10.11.x.
Save/backup everything you need before removing the final '#' on the last line and then executing this code.
WWW.UNIX.COM and its helpers in this thread do NOT hold any responsibility for anything that goes awry.

IMHO a glob match already matches even unprintable characters.
So the last proposal does not improve anything.
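
For illustration of that point only: each '?' in a shell glob matches any single character, printable or not, so the shell can already expand to the problem name without typing the raw bytes:

ls -li ./??"Word Finder"?" Plus"?
# rm -i ./??"Word Finder"?" Plus"?    # the same glob, commented out for safety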

And the clri command (I didn't know it before) will cause extra harm.
It is useless to clear the inode of the file, because an inode contains the data block structure and all the meta information BUT THE FILENAME.
The filename and the pointer to the inode are stored in the directory.
The directory is another inode. To have an effect you would have to clear the directory inode.
I expect less damage with

unlink <directory>

if supported by the OS. If successful, you need a full file system check, in order to collect the dangling file inode and its data blocks.
I successfully did that a couple of times on a non-journaling UFS in single-user mode (no desktop, no other processes running).
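
If an unlink(1) utility is available (or you call unlink(2) from a tiny C program), the attempt might look roughly like this; the path and device name are placeholders, and on many modern filesystems the call is simply refused:

sudo unlink /Volumes/Problem/dir     # remove the directory entry itself (root only; may be refused)
sudo fsck_hfs -fy /dev/rdiskXsY      # then a full check to collect the orphaned inode and its blocks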

I would go for a disk editor. Find the offending name, compare the shown bytes with the characters from the ls command, change one byte, save, compare again, ...

@MadeInGermany............Your synopsis of the filesystem structure is, of course, absolutely correct. The filename is in the directory inode, but in my experience, if you nuke the file inode, then when fsck comes along it will see that the directory entry points to a now non-existent inode and ask approval to remove the file. But yes, if fsck falls over at that point because the filename is illegal, or doesn't clear the file entry, then you are right that the directory inode would need to be identified and zapped in the same way too (which is possible because the file in question is the only entry in the directory). fsck can then be run again to clear up the mess. I've done this many, many times when working on filesystem internals; the problem these days is that many OSes do not provide or implement clri.

I take your point that unlink <directory> is an easier approach if it works, so let the OP try that first.

NOTE: In this case, for fsck read the MacOS fsck_hfs.

Here is an OSX man page for clri

https://www.unix.com/man-page/osx/8/clri/

Also available as an OpenDarwin man page:

https://www.unix.com/man-page/opendarwin/8/clri/



I don't use OSX, but perhaps this would work.

jon@ranger:one space$ ls -i
1180383 weird-file.txt
jon@ranger:one space$ find . -inum 1180383 -delete

As an aside (not directly related to this discussion):

I recommend that, when you have a filename with "tricky" special characters that make the file hard to delete, you first move (rename) the file to a new name without the "strangeness" that makes it undeletable, and then delete it.
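
A minimal sketch of that, reusing the find -inum idea from the previous post (the inode number is whatever ls -i reports for the problem file, the new name is arbitrary, and this assumes the kernel will accept the original name at the rename step):

find . -inum 1180383 -exec mv {} ./renamed_ok.txt \;   # give the file a harmless name first
rm ./renamed_ok.txt                                    # then remove it normally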

You may want to read this little treatise.

I hope this helps.

bakunin