Separate hard disks for Operating System and applications - better I/O performance?

Dear all,

I would like to ask if there are any positive effects from having a dedicated hard disk for the operating system.

The scenario would be to have a dedicated disk for the OS and a dedicated disk for the applications.

Do you see any advantages in such a configuration, e.g. better I/O performance?
If the applications primarily use "disk 2" for I/O, would that have a noticeable positive effect?

Thanks in advance,
Petros

I don't think it matters for most applications.

If you describe the details of your application, people can give a more appropriate response.

I set up systems with the OS on drive 0 and the application and data on drive 1.
Even with RAID, I set up logical drive 0 with the first two spindles, and then use the rest for data.
The biggest advantage is ease of creating a standby machine.
Performance will depend on how well you can balance the IO between disks.
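
If you want to see how the I/O actually balances out in practice, something like this gives a quick picture on a Linux box with the sysstat package installed (just a sketch; the device names are examples and the column layout varies between sysstat versions):

# per-device utilisation, extended stats, 5-second samples, 3 reports
iostat -x 5 3

# or pull the same picture from today's sar history
sar -d

# rough side-by-side of the two drives (sda = OS, sdb = data here, purely as an example)
iostat -x 5 3 | awk '/^sd[ab] / {print $1, $NF}'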

Thanks for your replies,

I don't have a specific application in mind.
I want to see whether this is a best practice I have missed, or whether it does not
really have an effect for applications without demanding disk I/O requirements.

To me it would sound logical for an application with specific performance requirements
and heavy disk I/O, a busy database for example, to have its own disk.

I see the advantage of having the OS and applications on different physical disks from a management perspective (creating a standby machine, swapping disks for the application, ...),

but does the OS on its own really need a significant amount of disk I/O? I would guess not.

If there's significant competition for the disk, this would make sense. That's a big "if", though.

Generally not.

I just checked a system that has 43 signed-on users running a call centre application.

HP Proliant, Drive 0 = 2 disks RAID1                                           
             Drive 1 = 6 disks RAID10                                          
                                                                               
10:45:49 device             MB     %busy   avque   r+w/s  blks/s   avwait  avserv 
Average  c0b0t0d0p1s2    29966        5      8.6     13      155      32.5     4.3 
Average  c0b0t0d0p1      34699        5      8.6     13      155      32.5     4.3 
Average  c0b0t1d0p1s0    19531        0      1.4      0        0       2.1     5.7 
Average  c0b0t1d0p1s1    39062        2      2.4      7       93       5.2     3.6 
Average  c0b0t1d0p1s2   151323        2      2.5      5      102       5.9     4.0 
Average  c0b0t1d0p1     209924        3      3.1     13      196       6.3     3.0 

Surprisingly, the root file system is the busiest.
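
To dig into where that root-filesystem I/O comes from, on a Linux box something like the following would give a per-process view (a rough sketch; it assumes the sysstat and lsof packages are installed):

# per-process disk read/write rates, 5-second samples, 3 reports
pidstat -d 5 3

# which commands currently have files open on the root filesystem
lsof / 2>/dev/null | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn | head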

This is indeed interesting.
Does the application strictly use disk 1 for its disk I/O?

There seems to be a lot of activity on disk 0 that is directly or indirectly caused by the application.

If it does so indirectly, it would be interesting to see where this disk I/O originates.

Areas worth checking (a few quick checks are sketched after this list):

Is the application using Operating System work space for temporary files? A badly-written application might use /tmp for example.

Are you gathering too many server statistics? I've seen a busy server grind to a halt when it was set to run unix Process Accounting.

Does the application make heavy use of unix services such as email and printing?
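
Some quick one-liners to check those three points, assuming lsof, mailq and lpstat exist on the box (the accounting file paths below are just the usual candidates and vary by platform):

# 1. anything holding temporary files open under /tmp?
lsof +D /tmp 2>/dev/null

# 2. is process accounting switched on?
ls -l /var/adm/pacct /var/account/pacct /var/log/account/pacct 2>/dev/null

# 3. how big are the mail and print queues?
mailq | tail -1
lpstat -o | wc -l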

I have seen perfectly satisfactory Oracle systems running under unix on a single RAID array with no dedicated system disc. What really mattered was tuning the Oracle SGA and PGA.

I set up a separate OS mirror (all LVM) for rootvg/vg00 with only the OS on it, preferably on separate controllers, and put the rest (data and appl...) on RAID5 on a SAN, etc.
The reason is that I have no test box.. so when it comes to upgrades or sensitive patching, I can break the mirror and "roll back" if needed, without having to change anything else...
Correlating I/O would be difficult with this architecture, but I have seen big boxes suffer (multiple Oracle apps...) because /tmp and /var/tmp were not purged often enough (so it is not the volume but the number of files present there that can ruin performance...).
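
For the /tmp and /var/tmp point, even a simple purge job helps; a minimal sketch (the 7-day retention is only an example, adjust it to whatever your applications can tolerate, and make sure nothing still needs the files):

# how many files have piled up - the count matters more than the total size
find /tmp /var/tmp -xdev -type f | wc -l

# purge ordinary files untouched for more than 7 days (run from cron, off-peak)
find /tmp /var/tmp -xdev -type f -mtime +7 -exec rm -f {} +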

In reality, it rarely matters for most server applications.

Really.

In fact, more often than not, when people partition like that thinking they are going to gain performance or something else, they find the partition they made was too small and they have other problems down the road.

I generally never do it anymore when setting up disk partitions. It simply is not necessary for most applications, but that is just my opinion.

And when /tmp is full you can have other problems, like applications crashing or not working because they can't write to /tmp.

So I don't think it is a good idea to make /tmp small so that it cannot grow large. It needs room to grow, and with very simple monitoring its size can easily be managed.
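
For that monitoring, a trivial cron script is usually enough; a sketch assuming an 80% threshold and mail to root (both are placeholders):

#!/bin/sh
# warn when /tmp crosses the threshold; run from cron every few minutes
THRESHOLD=80
USED=$(df -P /tmp | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
if [ "$USED" -gt "$THRESHOLD" ]; then
    echo "/tmp is ${USED}% full on $(hostname)" | mail -s "tmp usage warning" root
fi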