"Phantom" overwrite with dd on sda*?

Greetings.

Just wondering about a little "quirk" I seem to have found when using dd :wink: (FWIW, I'm running dd from a flash install of Parted Magic for these tests...)

Thinking about it, there should be some measurable excitement associated with

dd if=/dev/zero of=/dev/sda* obs=10M

as the wildcard should take out each sda partition in turn.

Indeed, if one has multiple partitions to be wiped (while possibly keeping the boundaries for each), this might appear to be one way of getting the job done without much fuss.
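For the record, my mental model was that the shell would expand the wildcard before dd ever ran, so that on a (hypothetical) machine with two partitions the command would effectively become:

dd if=/dev/zero of=/dev/sda of=/dev/sda1 of=/dev/sda2 obs=10M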

However, running the original commandline as written, I've seen this output repeated on several different systems instead:

dd: writing to ‘/dev/sda*’: No space left on device
2007040+0 records in
97+1 records out
1027362816 bytes (1.0 GB) copied, 1.25925 s, 816 MB/s

...with what seems to be no overwrite activity whatsoever.

So, the central question is: why do I always see exactly the same records in/out and the same 1.0 GB figure here, regardless of the target machine/drive? Moreover, what does dd think it's writing to in this case?

Thanks!

It is doing exactly what you told it to do: create a regular file with the literal name /dev/sda* and fill that file with zero bytes until the filesystem containing the /dev directory is out of space.

Shell pathname pattern matching doesn't apply here unless the directory where you run this command contains a directory named of= that contains a directory named dev that in turn contains one or more files matching the pattern sda*. And if there were more than one file matching that pattern, you would be giving dd two or more of=pathname operands, which would produce unspecified behavior.
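You can check this harmlessly with echo (assuming bash's default globbing behavior, where an unmatched pattern is left in place rather than removed or flagged as an error):

echo dd if=/dev/zero of=/dev/sda* obs=10M

Unless a path matching of=/dev/sda* exists relative to your current directory, echo prints the word back unchanged; that unexpanded literal is exactly the operand dd received.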

The dd utility can only process one output file per invocation.
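You can demonstrate that with two harmless regular files (the names here are made up for illustration):

dd if=/dev/zero of=testfile1 of=testfile2 count=1

POSIX leaves the behavior unspecified when an operand is repeated; whichever one your implementation picks (GNU dd appears to honor the last of= given), only one of the two files is actually written.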


Thanks again, Don.

So, let me get this straight.

We essentially created a file called sda* in a /dev/ folder on the flash drive, and filled it with zeros until there was no more space at the inn (per usual dd practice).

If so, that could explain the constant 1GB output.

However, I saw this post over on Stack Exchange regarding an (unfortunate) use of /dev/sda* :

( partition - Ignorantly dd'd /dev/sda* - Unix & Linux Stack Exchange )

Given the one-output-file-per-invocation limitation dd has, how could this have happened?

Thanks again --

Not essentially; exactly. Use the command:

ls -l /dev/sda*

and you'll see the device files that already existed and the regular file named /dev/sda* that you just created with dd with size 1027362816.
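If you want to remove that stray regular file without touching the real device nodes, quote the name so the shell cannot expand it:

rm -i '/dev/sda*'

Unquoted, rm /dev/sda* would now expand to every matching entry in /dev, device nodes included; the single quotes force rm to see only the literal pathname /dev/sda*.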

Let me see if I understand this correctly. You looked at a website that never showed what dd command was used, but had a description that started with the paragraph:

and went on to ask if it was possible to recover his system since it would no longer boot. And you decided to try to duplicate what the person who asked that question described as unbelievably dumb behavior.

Instead of us telling you how you can wipe out all of the data on all of your disk drives by copying /dev/zero to them, why don't you tell us what you're really trying to do?

Yeowtch! Feels like I drew back a stump on that one! I wasn't looking for a dd commandline to clear an HD. I'm simply trying to learn something here --

Peace, friend...

From the OP:

As background, I discovered this behavior quite some time ago while looking for a way to clear the data content from multiple partitions in turn using dd. I found the effects to be quite uniform and harmless in this distro context, which is what I reported here.

Then, recently, having seen the post on Stack Exchange which pointed at some formulation of a dd commandline that was reported to have successfully wiped at least one drive using /dev/sda* , I decided to formulate my question for the unix.com communities.

In any event, it seems as though we're just writing to a space in RAM (in this particular context), since calling ls -l on a test case before and after a reboot shows no lasting trace of the created sda* file. Indeed, a call to top confirms that we're simply draining memory away:

         used      free      buff    cached
Before   722040    1349280   88880   500312
After    1731636   345722    88880   1503648

(values in KiB as reported by top; approximate)

So, if I might, this raises the next question: what is the significance of the uniform 1.0 GB ceiling on this action? There's certainly more RAM on hand in any case.
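(If it helps frame an answer: my guess is that the ceiling is just the size of whatever filesystem is mounted on /dev in this live environment, presumably a RAM-backed tmpfs/devtmpfs. Something like the following should show its size and fill level:

df -h /dev
mount | grep ' /dev '

...but I'd welcome confirmation.)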

Still just trying to learn something new here :wink:

Thanks again; and have a great day --