How do I determine the best number to use for the bs (block size) operand of the dd command?

When I create a bootable Linux distro installation USB drive, I use this command:

sudo dd if=/Path/to/linux_distro.iso of=/dev/rdisk<disk number> bs=<number of bytes>

When I look it up, I see people choosing various values: 4M most often, and I think also 8M, 2M, and maybe even 1M.

If I leave the operand off, it seems to take a long time (I never let it finish).

How would I determine what number to use to make it go the fastest? Is speed the only factor?

What instructions are you following? It makes little sense to dump a raw ISO to a partition; a flash drive is not a CD-ROM. A few odd boot setups do this, but usually you'd install to a flash drive the same way you would to a hard drive.

4M is fine. The point is that it's not left at the default of 512 bytes, which would be extremely slow.
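To see why the 512-byte default hurts, you can time the same amount of data with small and large blocks against a scratch file. This is just an illustration, assuming GNU dd and a writable /tmp; the file names and sizes are arbitrary, not anything from your real command:

```shell
# Copy the same 32 MiB with tiny blocks vs. large blocks; dd prints
# the elapsed time and throughput on stderr after each run.
dd if=/dev/zero of=/tmp/bs-small.img bs=512 count=65536   # 65536 x 512 B = 32 MiB
dd if=/dev/zero of=/tmp/bs-large.img bs=4M  count=8       # 8 x 4 MiB     = 32 MiB
rm -f /tmp/bs-small.img /tmp/bs-large.img
```

The bs=512 run issues roughly 65,000 read/write system-call pairs, the bs=4M run only 8; that per-call overhead is where most of the speed difference comes from.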

It may seem to take a long time because it does take a long time: USB media are not particularly fast. If you want to know how much it has written, and you are running Linux rather than some other UNIX, try running sudo killall -USR1 dd, which should cause dd to print statistics. (On non-Linux operating systems, killall has a more literal meaning.)
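Here is a harmless way to try the signal trick without touching any real device, assuming GNU dd on Linux (on BSD/macOS dd the equivalent signal is SIGINFO, and SIGUSR1 would simply kill the process):

```shell
# Run a dd that copies nothing of value, then poke it with SIGUSR1.
dd if=/dev/zero of=/dev/null bs=1M 2>/tmp/dd-progress.log &
pid=$!
sleep 1
kill -USR1 "$pid"   # GNU dd prints records in/out and bytes copied, then keeps going
sleep 1
kill "$pid"         # stop the demo copy
cat /tmp/dd-progress.log
```

Recent GNU dd (coreutils 8.24 and later) can also report continuously with status=progress, which avoids the signal dance entirely.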

There are a bunch of places online that detail how to make bootable USB drives for Linux distros like Ubuntu on a Mac. I can't post a URL, because the site says I need at least 5 posts. dd seems to be the only way to do it from the Mac's terminal.

I don't quite follow you here. I guess dd stands for "dump" something? All I knew was that the command can make bootable media.

I appreciate the suggestion, but I'm looking for a bit more of an explanation. I want to be able to work out an optimal number by myself.

You said 512(M?) would be slow. What about 256? What about 1M? What about 8M? I've seen various suggestions, but not much explanation.

Hopefully that makes sense.

I've always thought it stood for "disk to disk" or "disk dump". The Wikipedia page for dd mentions the DD (Data Definition) statement from IBM's JCL. I would say the origins of the name are lost in time.

Presumably, since the original use of dd was for copying data between devices, the block size to use is that of the target device. If you have the GNU stat command on your system, you can use it to determine the block size of your USB drive. I put an 8 GB USB drive and a 32 GB drive in my system and got this:

$ stat -f -c"%s" /media/apm/24F4-1B66 # 8GB volume
4096
$ stat -f -c"%s" /media/apm/F170-2ADC # 32GB volume
16384

In fact, the man page for stat says:

       %s     block size (for faster transfers)
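As a sketch, that %s value could be fed straight back into dd. The mount point below is just the current directory, and the scratch file is hypothetical; a real run would query your mounted USB volume instead. Note that BSD/macOS stat has a different -f syntax, so this assumes GNU stat:

```shell
# Query the filesystem's preferred transfer block size and reuse it for dd.
bs=$(stat -f -c "%s" .)        # GNU stat: %s = "block size (for faster transfers)"
echo "preferred block size: $bs bytes"
dd if=/dev/zero of=/tmp/stat-bs.img bs="$bs" count=4
rm -f /tmp/stat-bs.img
```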

Andrew

The origin and history of dd vanish in the haze, as evidenced by its unusual and unwieldy command-line syntax, but rumour has it that it stands for "data dump". Of course, that explanation is as good as any other.

Trying to nail down an optimal block size for a sizeable copy operation can be like nailing jelly to the wall. It makes real sense not to use the default of 512 bytes, since that results in far too many I/O operations, but choosing too large a value can make things worse: it could, for example, start thrashing swap space and degrade overall system performance. And this depends on system load, which varies over time.
I'd recommend running a few tests with, say, 1M, 2M, and 4M, and stopping where the rate of improvement starts to drop off.
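A rough way to run such a test is to write a fixed amount at each block size and compare dd's reported throughput. This sketch writes to a scratch file rather than a real device (file name and sizes are arbitrary); conv=fsync makes dd flush to disk so the timing reflects the medium rather than just the page cache:

```shell
# Time the same 64 MiB write at several block sizes; stop increasing
# bs once the reported throughput stops improving.
for bs in 1M 2M 4M; do
    echo "== bs=$bs =="
    dd if=/dev/zero of=/tmp/bs-sweep.img bs=$bs count=$((64 / ${bs%M})) conv=fsync 2>&1 | tail -n 1
    rm -f /tmp/bs-sweep.img
done
```

The count arithmetic (${bs%M} strips the trailing M) just keeps the total at 64 MiB for every run, so the timings are directly comparable.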