Well, let's take a look at some situations where these things are used...
Shared memory avoids having to write data into and read it back out of the kernel, making it a blindingly fast way to share the same data among swarms of processes. Since it's not arbitrated by the kernel, it's prone to race conditions and pitfalls, so it isn't as easy as it looks; on a complex enough problem you may find yourself reimplementing sockets from scratch on top of it without ending up any faster. You see it in situations with very demanding performance requirements, like high-performance audio or video interfaces (X11 drivers, XShm video, DirectX). Linux's modern pthreads implementation builds mutexes and the like out of atomic operations on shared memory.
Named pipes are kind of an old-fashioned hack, kept around mostly for portability reasons. Their behavior gets obscure when you have more than one reader and/or writer. They're occasionally handy in the shell for bridging programs that otherwise can't be connected; beyond that I don't see them get much serious use.
UNIX domain sockets are very often used for local client/server interfaces because they're network-like (one server, multiple clients) without the overhead of loopback networking. Big things like X11 and MySQL servers serve clients over UNIX domain sockets when possible. Lots of less demanding system daemons and controllers (system loggers, Linux's udev, acpid, and the wpa authentication manager wpa_supplicant) also use them for the convenience of network-like connect/disconnect semantics without the complications of actual networking. They can't do any kind of sharing or broadcast sending, though.
pthreads is a threading implementation, but it's often called (and used as) IPC anyway. Some implementations allow separate processes to share mutexes and the like (the current NPTL implementation on Linux), while some don't (Linux's old LinuxThreads implementation). Its features are tightly defined, fairly portable, and somewhat limited, mostly restricted to control mechanisms, not communication structures. By and large its overhead is quite low, but implementations of course vary. For simple control of threads it's difficult to beat.
System V IPC seems a bit overbuilt. Unlike POSIX thread primitives, this API is geared toward communication between unrelated processes, and it's frilled with so many features that it's hard to imagine it not having significant overhead (most objects are semi-persistent and carry their own owner/group/permission attributes; modification times are kept for many kinds of things, sometimes even the PID of the last modifier). It has some interesting and difficult-to-implement features (grouping several semaphore operations into one atomic step) which would be useful if implemented brilliantly, but can stall and starve if done badly, and implementations do vary. System V message queues I'm unfortunately quite unfamiliar with.