One StorageTek 6140 vs. two 2540's?

We're moving our production 3510FC to development, so we need to replace the production 3510FC. The 6140 is nice, but pricey.

Here's the question: Would two 2540's (RAID 10) be 'faster' than one 3510 (RAID 10), and faster than one 6140 (RAID 10)?

One StorageTek 6140 (16x72GB 15k rpm FC-AL, RAID 10) vs. two StorageTek 2540's (24x72GB 15k rpm SAS, RAID 10)

While the 2540 uses SAS drives vs. the 6140's FC-AL drives, the drive specs (average latency, etc.) are nearly identical as listed on seagate.com (SAS = ST373455SS, FC-AL = ST373455FC).

Oh yeah, we're an Oracle shop, so my assumption is that, with drives & HBA being equal, the more spindles the better.
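Back-of-envelope, the spindle math looks like this (the ~175 IOPS per 15k rpm disk figure and the 70/30 read/write mix are my own rough assumptions, not vendor numbers):

```python
# Rough spindle-count comparison for small random I/O, assuming
# ~175 IOPS per 15k rpm disk and RAID 10 (reads can hit any spindle,
# each host write costs two disk writes). Ballpark figures only.
PER_DISK_IOPS = 175

def raid10_iops(spindles, read_fraction):
    """Approximate host-visible random IOPS for a RAID 10 set."""
    raw = spindles * PER_DISK_IOPS
    # Each host write turns into 2 disk writes in RAID 10.
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + 2 * write_fraction)

# 70/30 read/write mix, ignoring cache and controller limits.
two_2540 = raid10_iops(24, 0.7)   # two 2540's, 24 spindles total
one_6140 = raid10_iops(16, 0.7)   # one 6140, 16 spindles

print(f"2x 2540: ~{two_2540:.0f} IOPS, 6140: ~{one_6140:.0f} IOPS")
```

This ignores cache and controller limits entirely; it only shows that, all else equal, host-visible IOPS scales with spindle count.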

Hey,

the 2540 will be faster than the 3510, and the 6140 is faster than the 2540.
Why?
The 2540 has SAS or SATA drives at 3 Gb/s; only the host connection is 4 Gb/s. The 6140 has FC disks at 4 Gb/s, so on the 6140 you get the full 4 Gb/s all the way to the disks. The 6140's controller cache is also larger.
Here are some rough trends from the SUN/LSI test center:
2540: ca. 100 KIOPS, ca. 600 MB/s
6140: ca. 200 KIOPS, ca. 1000 MB/s

CU
lowbyte

Thanks for the information lowbyte :b:

Is it safe to assume then that two 2540's would deliver ca. 200 KIOPS and ca. 1200 MB/s? I'm trying to compare the performance of two 2540's vs. one 6140.

Two fully populated 2540's (24x73GB 15k rpm) will cost roughly $15k, while one fully populated 6140 runs around $27k.
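For what it's worth, here's a quick price/performance sketch using lowbyte's rough trend numbers and the street prices above (and assuming, optimistically, that two 2540's scale linearly, which is exactly the open question):

```python
# Price/performance sketch using the rough figures quoted in the thread.
# Assumes two 2540's scale linearly, which later replies question.
configs = {
    "2x 2540": {"price": 15_000, "kiops": 200, "mb_s": 1200},
    "1x 6140": {"price": 27_000, "kiops": 200, "mb_s": 1000},
}

for name, c in configs.items():
    per_kiops = c["price"] / c["kiops"]
    per_mb_s = c["price"] / c["mb_s"]
    print(f"{name}: ${per_kiops:.0f}/KIOPS, ${per_mb_s:.2f} per MB/s")
```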

Thanks again.

-bv

are you running single or dual path? that makes a difference too :wink:

aha. Thanks for the detailed question.
Dual path for the two 2540's via two HBA's :smiley: (SG-XPCIE2FC-QF4)
Single path for the one 6140 via one HBA :rolleyes: (SG-XPCIE2FC-QF4)
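For reference on the path question: a 4 Gb FC link (4.25 Gbaud line rate, 8b/10b encoded) tops out around 400-425 MB/s of payload per direction, so dual path roughly doubles the host-side ceiling. A rough calculation:

```python
# Host-link ceiling for 4 Gb/s Fibre Channel, per path.
# 4 Gb FC runs at 4.25 Gbaud with 8b/10b encoding, so payload
# bandwidth is about 425 MB/s per direction before protocol overhead.
LINE_RATE_GBAUD = 4.25
PAYLOAD_MB_S = LINE_RATE_GBAUD * 1e9 * (8 / 10) / 8 / 1e6

def host_ceiling_mb_s(paths):
    """Aggregate host-side bandwidth ceiling for N active FC paths."""
    return paths * PAYLOAD_MB_S

print(round(host_ceiling_mb_s(1)))  # single path
print(round(host_ceiling_mb_s(2)))  # dual path
```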

btw, I haven't had much luck asking these questions of several Sun VARs and sales engineers :eek: :confused: :eek:

If anyone has the best bang/buck config. please share.

Thanks again.
-bv

how can 2 slower boxes be faster than one faster box? are they attached to the same host? how do you plan to combine the two arrays into one or more volumes? what volume manager do you plan to use, and what app is running?

They can easily be, and I work with an application in which 2 2540s are ~ 1.5 times faster than a single 6140, but as you indicated the volume manager and volume layout are key. If using ZFS it is possible to become spindle bound, rather than cache or bus bound, so the extra 8 spindles the 2 2540s give are an advantage.
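A toy model of that bottleneck argument (every capacity number below is an illustrative assumption, not a measured figure): delivered IOPS is simply the minimum of what the spindles, the controllers, and the host links can each sustain, and with small random I/O the spindle term usually wins.

```python
# Toy bottleneck model: delivered IOPS is the minimum of what the
# spindles, the controllers, and the host links can each sustain.
# All capacity numbers below are illustrative assumptions.
def delivered_iops(spindles, controllers, host_links,
                   per_disk=175, per_controller=50_000, per_link=100_000):
    return min(spindles * per_disk,
               controllers * per_controller,
               host_links * per_link)

# With small random I/O both arrays are spindle-bound, so the extra
# 8 spindles of two 2540's beat one 16-disk 6140 despite its cache.
print(delivered_iops(spindles=24, controllers=2, host_links=2))  # -> 4200
print(delivered_iops(spindles=16, controllers=2, host_links=1))  # -> 2800
```

Under these assumptions the ratio comes out to 1.5, which happens to line up with the ~1.5x observed above, but that is coincidence as much as calibration.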

What I would recommend is that you ask Sun for both from their loaner pool and do some benchmarking with your application on both, then choose whichever is better for you.

Hey all,

another point for the discussion is how many controllers you plan per array.
Both the 2540 and the 6140 can be ordered with one or two controllers; if you use two
controllers in the 2540 you may be faster than a 6140 with one controller, otherwise
the 6140 will be faster. The internal design of the boxes differs a little: because the
2540 is based on SAS disks, there is no second path to the disk backplane, so the
second controller is only for failover, and multipathing will not speed up the box.
The 6140 has two paths to the disk backplane, so every controller has its own
path, and multipathing will increase the speed.

A 2540 with a 2501 expansion tray will not double the performance; the IOPS and
MB/s throughput are limited by the controller.
It's a very good idea to get the boxes via try & buy and test yourself.
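lowbyte's expansion-tray point, sketched with assumed numbers (the per-disk sequential rate is illustrative, and the ~600 MB/s controller cap loosely matches the 2540 trend figure quoted earlier):

```python
# Why a 2501 expansion tray doesn't double throughput: the controller
# caps MB/s regardless of spindle count. Per-disk ~75 MB/s sequential
# and the ~600 MB/s controller cap are rough assumptions.
CONTROLLER_MB_S = 600
PER_DISK_MB_S = 75

def array_mb_s(spindles):
    """Sequential throughput behind a single 2540 controller pair."""
    return min(spindles * PER_DISK_MB_S, CONTROLLER_MB_S)

print(array_mb_s(12))  # -> 600, one 2540 tray, already controller-bound
print(array_mb_s(24))  # -> 600, 2540 + 2501 expansion: no gain
```

Two separate 2540's, each with its own controllers, sidestep this cap, which is why "two arrays" and "one array plus a tray" are very different propositions.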

Happy testing
lowbyte

Thank you everyone.

I have reached out to Sun again, and will use the try & buy process as well. I've never had much success with the loaner pool.

We're going to test two 2540's: not one plus an expansion shelf, but two separate 2540's, each with its own controller and HBA to the host. Each 2540 will have 1 x 512 MB cache FC HW RAID controller and 12x73GB 15k SAS drives... everything will be running RAID 10.

The 6140 will have 2 controllers (for a total cache of 2GB), 12x73GB 15k FC-AL drives, and
one HBA to the host... I'm sure we'll try it with two HBA's too.

The host will be an X4150 with 8 cores, 8GB RAM, and 8x72GB 15k rpm SAS drives.

We'll play around with a bunch of configs and see what happens.

I think we'll be maxing out all the PCIe slots...

I will do my best to post some results here. It may be a while. The try&buy process can take a while at times.

-bv

Just curious if you've had a chance to do your testing yet...

As a side note, Sun's loaner pool has essentially dried up. I attended one of the "Blackbox" presentations that Brian Wilson put on and asked him about it. He told me that Sun's "bean counters" decided it was losing money, so they cut the program. However, he did basically say to feel free to use and abuse the try and buy program because that's what it's there for...

Back on topic, we've been using a lot of 2540's but may have a need for the 6140's soon, so I'm interested in your results as well.

Hi,
Here's a quick update.
The project: migrate our Oracle E-Business Suite, dual node (apps: 11.5.10.2, db: 10gR2), from Sun SPARC (2x v440 @ 4x USIIIi 1.28GHz, 8GB memory; 3510FC 12x73GB RAID 10 15k rpm, single path, direct attach) to Linux x86.
.
The goal: more bang for less buck. This was the same goal we had 4 years ago when we replaced our Sun big iron, dual E3500's + A1000, with two v440's + a 3510FC. The payback on that decision was 18 months.
.
Here's our progress: we've racked up two try & buy x4150's (2x quad-core @ 3.16GHz) and a 6140 (12x73GB 15k rpm), direct attached, single path.
.
So far, the database has been ported to linux (RHEL5.2 64bit). Porting the apps is still in progress.
.
The x4150's are screaming fast compared to the v440's. After loading the OS, a simple perl benchmark showed the x4150's running ~8x faster than the v440's.
.
After we fired up the database, our benchmark queries are running 10x faster, with no 'tuning'. OMG. We were hoping for around a 2x performance boost. eek! :slight_smile:
.
We met with some Sun reps today. We're going to get two 2540's too and see what, if any, config might 'beat' the 6140 :slight_smile:
.
BTW, racking and configuring the x4150's and the 6140 was a breeze, even for a strictly Solaris dude with zero RHEL or StorageTek CAM experience. Take the time to read docs.sun.com.
.
More later.
-bv

Thanks for the update. The "wow this is fast" experience has happened to us a few times with x86 boxes. However, keep in mind, that the Sparc architecture is really aimed at multi-user multi-threaded apps. The old adage is still true: "If you put 5 users on a Sparc system, it'll run slow, but if you put 500 users on a Sparc system, it'll run slow." My current favorite is the x4450. I really love that box. Yeah, with one single threaded app, the x4150 seems to run just as fast (if not a touch faster) than the fastest x4450, but for what we do (InterSystems' Caché), we seem to get better results with x86 than Sparc - at least at things that really bog down the processor.

BUT, here's the thing: I put in a couple of x4450's (4x 2.4GHz quad core) not too long before a couple of m5000's. Each had 4 quad core procs (but obviously they're really not in the same league in price.)

One of our setup steps is to recompile a bunch of classes in a couple databases. Now, this process brought a pair of 4x dual core Dell boxes running Win2003 to their knees (100% CPU) for about two minutes (the boxes are about 6 months older - and don't get me wrong, they've been great boxes. They replaced two v880's and the guys running them love 'em.) When I got to that step, the x4450's were running at about 30% usage in prstat and finished in roughly the same time frame if not faster. There was no comparison with the Dells really.

Nonetheless, doing the same (actually double the number of db's and hence double the workload) on an m5000 finished in about the same amount of time as well (possibly 5% longer). However, the m5000's didn't even break a sweat. Try as I might, I don't think I saw prstat rise above 18% - and that was only momentarily. I could have run this in the middle of the production day and not one of the hundreds of users connected to the server would have been the wiser that it had happened.

I don't think Sparc is dead by any means. Expensive? Certainly - but it always has been. It's just targeting a different demographic. Those that want to have hundreds (or more depending on the task) of users connected and have the utmost in stability are probably better off with Sparc. I'm actually installing some older v490's this week and even with the slower 1.05GHz procs, they're still pretty good boxes in most regards. And although the 3510's I'm using with them seem slower than 2540's, I don't like CAM and the fact that you can't have volumes/LUNs over 2TB with the 2540's and 6140's. I miss the flexibility of the 3x10's.

Anyway, back to the topic at hand. I still love the 2540's, but we did end up spec'ing 6140's (and CSM200's) for another bigger than normal project. I'll report back if we end up getting them and I have a chance to run some tests with them. I'm still guessing we'll see a difference in performance, but that it will be really hard for us to tell how much performance increase without really loading down the system with user connections.

That limitation no longer exists on the 6140/6540, starting with firmware v7.10.n.n that was available July 2008. Supposedly the next major version of 2540 firmware (due late 2008) will also remove the 2TB limitation.