New network bandwidth requirements

Many papers, articles, and posts about network bandwidth fall back on traffic rules of thumb when estimating requirements for a new network with an unknown load. I've seen a couple of rules for video and VoIP.

I'd be thankful if someone could share any rules of thumb they know of.

Thanks!!!

More than enough beats less? Never overlook compressed tunnels in a pinch. Unknown in means unknown out?

It's certainly easier to downgrade than upgrade.

If you have too much, you can measure how much less you could get by with; but if you have too little, how much too little gets buried in backed-up and even dropped packets.

However, the usual rules relating service time and arrival frequency to the number of servers apply at a very high level. Everything is a server at that level: a CPU, an Ethernet link, a disk controller, a disk drive, a fiber link. With one server, backlog can build very easily even when the averages are not violated. Think of a snake that swallowed a basketball. As you approach critical loading, you will see backlog peaks start occurring due to periodic bad luck. With one server, like one line at the bank (if you can remember when people walked to the bank), one customer with big needs makes a huge line that lasts for hours.
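The snake-and-basketball effect can be reproduced with a few lines of simulation. This is a minimal single-server FIFO (M/M/1-style) sketch, not tied to any particular network; the rates and job count are arbitrary illustration values. The average wait stays small at 50% utilization and blows up as load approaches capacity:

```python
import random

def mm1_avg_wait(arrival_rate, service_rate, n_jobs=50_000, seed=1):
    """Simulate a single-server FIFO queue with exponential
    interarrival and service times; return the average time a job
    waits before service starts (Lindley recursion)."""
    rng = random.Random(seed)
    clock = 0.0          # arrival time of the current job
    server_free = 0.0    # time the server next becomes idle
    total_wait = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)   # next arrival
        start = max(clock, server_free)          # wait if server busy
        total_wait += start - clock
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_jobs

# Same service rate, rising load: waits explode near saturation
# even though the average arrival rate never exceeds capacity.
for load in (0.5, 0.8, 0.95):
    print(f"utilization {load:.2f}: avg wait {mm1_avg_wait(load, 1.0):.2f}")
```

The point of the exercise is that the backlog peaks appear well before utilization reaches 100%: periodic bad luck in arrivals is enough.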

ever heard of the new highway? some people built a new highway because the old road was getting too small for the amount of traffic on it. so everybody was happy when the new highway was done. while the highway was still new, everybody said it was great that they had that much space to run and cruise in it. so everybody's friends and families soon started driving on the new highway until eventually everybody was thinking the highway was now too small and they needed another new highway ...

moral of the story? if you do not have enough, you will soon be looking for some more. if you have more than enough, people will start filling it up sooner or later.

as i almost always had to guesstimate things because of the lack of access to network metrics, i look at current load and whether that load has increased over the years, as well as whether there are plans to put in more network-hungry apps/devices (e.g., video conferencing). if current load has been increasing, i look at the amount and rate of the increase.

as an example: 10 users at a remote site using 5 applications at the corporate site in year 1, and 50 users (max capacity) at the remote site using the same 5 applications in year 5 -- only the user base increased. barring any planned rollout of network-hungry apps/devices, and with no complaints of performance hits even with the increased user load, i will not be looking to upgrade within the next 2 years unless the company just wants to spend money.
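That kind of guesstimate is just arithmetic. The function and every number below are hypothetical illustrations (linear per-user scaling, 10% annual growth, 25% headroom), not measured values or recommended factors:

```python
def sizing_estimate(measured_kbps, current_users, planned_users,
                    annual_growth=0.10, years=2, headroom=1.25):
    """Scale a measured busy-hour load linearly per user to the
    planned headcount, compound an assumed annual growth rate,
    and add a safety headroom factor."""
    per_user = measured_kbps / current_users
    projected = per_user * planned_users * (1 + annual_growth) ** years
    return projected * headroom

# Hypothetical: 10 users driving 2,000 kbps today, 50 seats planned.
print(f"{sizing_estimate(2000, 10, 50):.0f} kbps")  # → 15125 kbps
```

The linear-per-user assumption is the weak point: shared traffic (backups, replication, multicast) does not scale with headcount, so measure those flows separately if you can.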

for a site with an unknown load, look at the kind and size of the population the site will most likely host, the apps/devices they will use, and the infrastructure planned to be physically located at the site. if no current site is comparable to the new one, check how many seats are planned in the space and how big the data center/room is, and correlate that with the business and/or group that will be located there (e.g., software development, banking back-office, internet research). decide on an approximate load and plan accordingly. if you are still at a loss and cannot decide, ask the project manager how much is budgeted for the whole project and reasonably ask for as much as they can give you.

Parkinson's law: need expands to consume excess resources.

Companies like Akamai make a good living ensuring your static web hits are filled from relatively local cache servers.

Good architectural design has to deal with:

  • having soft saturation, so throughput rises up to saturation and then excess load is shed on a least-lost-value basis, e.g., newer clients get lower priority than older clients (those deeper into the transaction process).
  • avoiding negative saturation behaviors like overloaded Ethernet, which actually slows down because collisions waste time on the wire. Positive saturation behavior means the heavier the overload, the more efficient the process: requests can be sorted so they have higher locality of reference, requests for the same file can sometimes be multicast (MBone-style) as one, and sorting by disk position means shorter seeks.
  • flow control mechanisms that let requests beyond capacity be queued for eventual fulfillment, with cancellations quickly forwarded to the server. The bad behaviors are things like resending a service request every n seconds until a reply arrives, consuming precious bandwidth and cluttering the server with cancelled, redundant prior requests. Some routers can stifle keep-alive traffic, say from the TCP connections of queued services, so it does not drag down net speed.
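The least-lost-value shedding in the first bullet can be sketched as a bounded queue that, when full, drops whichever request has the least accumulated investment (so a newcomer loses to a client deep into its transaction). All names and the `investment` scoring here are hypothetical illustration, not any particular product's behavior:

```python
import heapq
import itertools

class SheddingQueue:
    """Bounded queue that sheds the least-invested request when full,
    rather than refusing or dropping arrivals blindly."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []              # min-heap of (investment, seq, request)
        self._seq = itertools.count()

    def offer(self, request, investment):
        """Admit a request; return the request that was shed, if any
        (possibly the newcomer itself, if it is the least invested)."""
        entry = (investment, next(self._seq), request)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
            return None
        shed = heapq.heappushpop(self._heap, entry)  # push, then pop min
        return shed[2]

    def take(self):
        """Serve the most-invested request first (O(n) scan for brevity)."""
        if not self._heap:
            return None
        entry = max(self._heap)
        self._heap.remove(entry)
        heapq.heapify(self._heap)
        return entry[2]

q = SheddingQueue(capacity=2)
q.offer("old-client", investment=5)
q.offer("mid-client", investment=3)
shed = q.offer("new-client", investment=1)  # queue full: newcomer shed
print(shed)      # → new-client
print(q.take())  # → old-client
```

This gives the "soft" degradation the bullet describes: throughput stays at capacity under overload, and what gets lost is the cheapest work to redo.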