By William Van Winkle
[ STORAGE ]
We can dispense with the tired preamble about serial disk storage replacing parallel (PATA and SCSI). All that matters for servers today is which drive type makes more sense: enterprise SATA or Serial-Attached SCSI (SAS)? The gap between the two became much narrower with the arrival of “SATA II,” which added staggered spindle spin-up and native command queuing, both of which are SCSI features. Better yet for servers, “SATA II” features backplane interconnect and port multiplier support. Both of these come in handy when doing hot-swap and mass storage (JBOD) implementations. Note, however, that port multipliers seem to be less of a concern now that affordable SAS expander solutions, which work just fine with SATA drives, are coming to market.
The difference between regular SATA and enterprise SATA is easiest to see when you compare Seagate’s latest Barracuda 7200.11 desktop drive against the Barracuda ES.2 enterprise SATA series. Both support an external transfer rate of 3.0 Gb/s and, more importantly, a maximum sustained transfer rate of 105 MB/s. Average latencies and spindle speeds are identical. The differences are in the reliability specs. For example, the 7200.11 carries a mean time between failures (MTBF) rating of 750,000 hours versus the ES.2’s 1.2 million. The nonrecoverable read error spec on the 7200.11 is 1 per 10^14 bits read versus the ES.2’s 1 per 10^15, which looks trivial on paper but is in fact a full order of magnitude of difference. The upshot is that enterprise SATA drives deliver the same performance and capacities as their desktop counterparts; their value lies in the greater assurance that data will be written, stored, and retrieved safely. Particularly in businesses with legal requirements for data protection, enterprise-class drives are a must.
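To make those figures more concrete, here is a quick back-of-the-envelope sketch; the 1 TB capacity and the simple exponential failure model are our own illustrative assumptions, not Seagate’s numbers.

```python
# Rough reliability math for the two Barracuda specs quoted above.
# Assumptions (ours, not Seagate's): a 1 TB drive read end to end once,
# and a simple exponential model for converting MTBF into an annual failure rate.

drive_bits = 1e12 * 8            # 1 TB expressed in bits
hours_per_year = 24 * 365

drives = {
    "Barracuda 7200.11 (desktop)": {"ure_bits": 1e14, "mtbf_hours": 750_000},
    "Barracuda ES.2 (enterprise)": {"ure_bits": 1e15, "mtbf_hours": 1_200_000},
}

for name, spec in drives.items():
    expected_ure = drive_bits / spec["ure_bits"]   # expected unrecoverable errors per full read
    afr = hours_per_year / spec["mtbf_hours"]      # approximate annualized failure rate
    print(f"{name}: ~{expected_ure:.3f} expected read errors per full pass, "
          f"~{afr:.2%} annualized failure rate")
```

Run the numbers and the desktop drive works out to roughly an 8 percent chance of tripping an unrecoverable error somewhere in a single full-drive read, versus well under 1 percent for the ES.2, which is exactly the sort of gap that matters during a long RAID rebuild.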
Interestingly, the Barracuda ES.2 also comes in a SAS version. This may seem odd when you compare against the new Seagate Cheetah 15K.6, a SAS (or Fibre Channel) unit with a 15,000 RPM spin speed and up to a 164 MB/s sustained transfer rate. The 15K.6 sports a 1.6 million-hour MTBF, an error rate of 1 per 10^16 bits read, and a maximum capacity of 450 GB. Clearly, Cheetah buyers take a hit on capacity, but this is where we enter into the discussion of servers using tiered storage.
Few if any businesses are going to want 15K SAS drives for long-term, nearline storage. Instead, this format makes sense where a server must deliver very fast, ultradependable performance for current business, such as a retailer’s large Web commerce site. High-speed SAS is well-suited to high-demand, high-bandwidth transactional situations. After a month or so, when that data is less likely to be needed on a constant basis but must still be protected and kept close at hand, moving it to enterprise SATA or a low-end SAS drive like the ES.2 for “nearline” storage makes far more price-per-gigabyte sense.
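As a sanity check on that price-per-gigabyte claim, a tiny sketch follows; the dollar figures plugged in here are placeholders we assume purely for illustration, not quoted street prices.

```python
# Illustrative cost-per-gigabyte comparison for the tiering argument above.
# The prices are assumed placeholders, not vendor or street pricing.

drives = {
    "Cheetah 15K.6, 450 GB SAS (assumed $600)": {"price": 600.0, "gb": 450},
    "Barracuda ES.2, 1 TB SATA (assumed $250)": {"price": 250.0, "gb": 1000},
}

for name, d in drives.items():
    print(f"{name}: ${d['price'] / d['gb']:.2f} per gigabyte")
```

Even with generous assumptions, the 15K drive lands at several times the cost per gigabyte of the enterprise SATA unit, which is why it earns its keep only on the hot, transactional tier.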
Some sources might try to paint the SAS vs. SATA battle as one of transactional vs. nearline usage, but this can be misleading. In many cases, the issue has more to do with the bandwidth load placed on the drives, not the drive’s ultimate dependability. An enterprise SATA drive could be an ideal server solution for transactional, primary storage provided the load placed on the drives is moderate and/or there is no tiered strategy in place. As Western Digital enterprise marketing director Hubbert Smith once commented to U.K. Web publication Techworld: “Characterising SATA drives as non-mission critical is an uninformed statement. WD Enterprise SATA drives have a 1.2 million hours MTTF rating, exactly the same rating as Fujitsu’s latest SAS enterprise drive, the MAV2073. And addressing high-capacity markets with 10K and 15K drives is about as satisfactory as eating with a spork.”
[ CHASSIS ]
Cases may be the least-discussed primary server component, but they’re every bit as important as the others for two reasons. First, the specifics of the case determine the server’s storage capacity. Second, the case’s ability to handle airflow and thermals can directly impact the system’s performance and longevity. These two points sound like consumer concerns—and they are. The difference is in the details. Let’s examine two examples from Antec to make the comparison.
Storage capacity seems a pretty straightforward business. Just count the number of drive bays, right? With Antec’s high-end gamer box, the Nine Hundred, there are nine drive bays, all of which can be external 5.25” bays or configured for up to three external 5.25” and six internal 3.5” bays. With Antec’s Titan 650 server case, there are four external 5.25” bays and six internal 3.5” bays, period. Server operators aren’t gamers. They only change their hardware configurations when essential, not because it’s a fun hobby.
Alongside airflow and thermals, we might add noise to our second point. Antec uses rubber grommets in its Nine Hundred to dampen drive vibration. The Titan 650 doesn’t use grommets. Instead, it uses 1.0mm cold rolled steel construction. This is why the Titan 650 has a net weight of 30.0 pounds compared to the Nine Hundred’s 18.5 pounds. A well-built, heavily constructed case takes care of vibration naturally. Moreover, while the Titan 650 places a three-speed 120mm fan in the rear and provides for two optional 92mm fans in front of the hard drives, the Nine Hundred is practically blanketed in big case fans pocked with bright blue LEDs. The Titan 650 doesn’t need to cope with overclocked CPUs, GPUs, northbridges, and all the other idiosyncrasies of a gamer audience; it merely has to move a constant amount of air quietly and dependably. Also note that, unlike the consumer tower, the Titan 650 is deeper in order to fit extended ATX and 12” x 13” CEB form factors.
“Look into server-class chassis and you’ll find that the cost differences between the desktop chassis and the server chassis are not so much as you’d think, unless you’re talking about a really skimpy desktop chassis,” says Intel’s Brian Jarvis. “Also, the server chassis gives you a level of quality you do not get in a desktop box. The amount of validation and engineering we put into it makes it more failsafe, more reliable. You get a true server-class product when you go with a server-class chassis for your server-class motherboard.”
Obviously, there are different levels of “server-class.” Antec’s Titan 650 includes a TruePower Trio 650 Watt power supply and lists for $209.95. When fully decked out with all of the options, Intel’s SC5400 pedestal chassis sells for over $600. A price jump of nearly 200% may seem astronomical, but the two SKUs exist in almost opposite market segments. Antec’s case may be appropriate in small offices, perhaps even SOHO environments, but Intel tailors specifically to admins bent on eliminating downtime.
For example, SC5400 buyers can opt for a six-disk fixed drive cage or a different cage enabling drive hot-swappability, including via a SAS/SATA expander tied to an optional drive backplane. There’s an optional cable management arm as well as optional rails for converting the pedestal into a 5U rackmount format. The SC5400 can integrate a rear blower and shroud module that sucks hot air straight from the dual-processor heatsinks. Moreover, the four fans that sit behind the hard drive cage are arranged in a 2x2 formation. This means that if one fan fails, the one next to it is still moving air, and when all four fans are working, all can operate at a slower speed for lower noise output. The SC5400 can be configured with fully redundant 830W power supplies, each with two fans. (According to Intel’s Jarvis, the attach rate for the second PSU is over 75 percent.)
Server power supplies tend to emphasize reliability and redundancy over maximum wattage. Again, you don’t have business servers running with quad-GPU arrays and overclocked system buses. There’s no race in the server market to surpass the 1,000W mark. Instead, you have admins trying to figure out how to minimize their downtime risks and increase ease of management. You’re also likely to find more emphasis on power efficiency. Intel server power supplies tend to hit around 80% efficiency; Supermicro now frequently edges over 90% under normal conditions. Needless to say, the desktop systems we mentioned early in this article take no account of such factors, but given the 24x7 nature of server operation and the corresponding power draw, you should be careful to make power part of your ROI proposition.
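To see what that efficiency gap means for the ROI math, consider a minimal sketch; the 400 W load, the $0.10 per kWh rate, and the round-number efficiencies are all illustrative assumptions rather than measured figures.

```python
# Rough annual electricity cost for the PSU efficiency figures quoted above.
# The 400 W DC load and $0.10/kWh rate are illustrative assumptions, not vendor data.

dc_load_watts = 400              # what the server's components actually draw
kwh_price = 0.10                 # assumed cost per kilowatt-hour, in dollars
hours_per_year = 24 * 365

for label, efficiency in [("~80% efficient PSU", 0.80), ("~90% efficient PSU", 0.90)]:
    wall_watts = dc_load_watts / efficiency      # power pulled from the wall
    annual_kwh = wall_watts * hours_per_year / 1000
    print(f"{label}: {wall_watts:.0f} W at the wall, "
          f"about ${annual_kwh * kwh_price:.0f} per year in electricity")
```

Under those assumptions the difference is only around $50 a year per box, but multiply it across a rack of servers (and remember that every wasted watt also has to be removed by the cooling system) and the more efficient supply starts paying for itself.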
When comparing the Titan 650 to a gamer tower, you have to look closely for the subtle variances. With a mainstream to high-end server case, the differences are glaring. Some customers may not need that extra level of commercial scalability and IT-friendly serviceability, but many will, especially those companies with growing employee counts and increasing emphasis on mission-critical apps.
[ FORM FACTORS ]
That wraps up our discussion on the key points separating core server hardware from desktop components and why true server hardware is a no-brainer for businesses with genuine server needs, but we’re not quite ready to abandon the topic of cases. In companies with, say, 10 or more servers, the choice to go with rackmounting seems obvious. Otherwise, the pedestal sprawl becomes unmanageable.
“Obviously, there’s nothing wrong with having a few pedestals in a machine room somewhere,” says SYNNEX’s Doug Bone. “Generally, as end customers scale, they need more systems. They start adding more and more units. Pretty soon, you have a hard time getting physical access to them all. The wiring becomes a mess. The power cords, the cooling become sub-optimized. Using rack infrastructure is generally a good idea because it helps in a number of those areas. Your servers are all in the same place, so the wiring is much simpler. The airflow is more optimized. It’s just a better idea. If you ask companies that have gradually amassed 15 or 20 pedestals if they could go back and put everything into a 1U form factor in a rack, I bet you’d find a lot of people would do that in a heartbeat.”
[ PEDESTAL OR RACK? ]
Sensible or not, it’s a fair guess that a company buying its first server is unlikely to buy a rackmount form factor. Just taking the first step into rackmounting is likely to cost $2,000 or more—possibly as much as the server itself. Between that first server and the tenth, though, there needs to be some serious discussion. The conversation should probably start with some consideration of the target application(s) and system configuration.
“With a single Xeon processor and a lower cost server board, a pedestal seems to be more cost effective than rackmount,” notes Sam Sanchez, vice president of marketing at Coastline Micro. “Because of the size, you can put in larger, quieter fans. They just seem to be preferred by a lot of businesses. 2P pedestals are more used for CAD/CAM, engineering companies—more of a workstation than a server. Especially in a small business, servers tend to be more for things like printing and Outlook and whatnot that fit a multi-core, 1P environment.”
Pedestals may be more cost effective in smaller environments on a price-per-MIP basis, but there can be other factors to consider. Servicing a pedestal may seem quicker and easier than putting a screwdriver to a rackmount server, but that may also depend on everything from the amount of cabling being managed by the rack to the number of hot-swap features implemented in the server. For instance, we were impressed with Intel’s internal fan design in the 2U SSR212MC2 storage server. If one of the 10 fans fails, the technician need only pluck out the dead unit and pop another one onto the backplane-like PCB under the fan array. Management software alerts admins about the failure, LEDs indicate whether the operation was successful, and there are no little, easily breakable cable leads to hassle with during servicing. You’ll see this more in rackmount servers than pedestals.
Also consider that a company doesn’t have to jump into a full-sized 42U, four-post rack. Admittedly, a bare 42U, 30” deep rack—just a sturdy top and bottom square with four posts—can be had for well under $500, and that’s not a bad way to go with customers who see a future in rackmounting but are cash-strapped today. A model with enclosure walls, locking front and rear doors, and another few inches in depth will nearly double that price. But not all racks have to be 42U high. Belkin’s 24U Premium Enclosure lists at $1,199 but is 42” deep, features casters and leveling feet, has locking front and back doors as well as locking side panels, and is built like a tank. And because Belkin’s 24U stands only chest-high, unlike a 42U rack, admins can place a keyboard, mouse, and monitor on top of the enclosure, thus saving on either desktop space or an expensive rackmount KVM control console. If 24U is still too large, Belkin even makes a 13U Mini Enclosure—essentially a “mini me” of the 24U model.
One reason why rackmounting is so popular is that it gathers and consolidates a company’s central IT resources into one place and organizes them more efficiently. One 24-port rackmount LAN switch can often replace two or three separately managed switches or routers previously installed piecemeal. Racks can also consolidate power draw for all these server and network resources, but there is often a balancing price to pay in electrical infrastructure. “You can plug everything in a rack into a single power distribution unit,” says SYNNEX’s Bone. “That also means you need appropriate power coming into your rack. One of the things that makes racks a little less attractive is that there’s some learning customers need to do. If they’re not pulling in the right wiring, such as multiple 110V circuits or 208V, they may not be able to fully utilize their rack infrastructure.”
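Bone’s wiring point is easy to quantify with a rough sketch; the 500 VA per-server figure and the 80% continuous-load derating are illustrative assumptions, and any real installation should follow local electrical code and the PDU vendor’s guidance.

```python
# How many servers a given branch circuit can realistically feed.
# Per-server draw (500 VA) and the 80% continuous-load derating are assumptions for illustration.

server_va = 500   # assumed wall draw per server, in volt-amps

circuits = [
    ("110 V / 15 A", 110, 15),
    ("110 V / 20 A", 110, 20),
    ("208 V / 30 A", 208, 30),
]

for label, volts, amps in circuits:
    usable_va = volts * amps * 0.80   # keep continuous load at 80% of the breaker rating
    print(f"{label}: ~{usable_va:.0f} VA usable, "
          f"roughly {int(usable_va // server_va)} servers at {server_va} VA each")
```

A single 110 V, 15 A circuit tops out at only a couple of servers under these assumptions, which is exactly why a rack packed with 1U systems needs multiple circuits or a 208 V feed before it can be fully utilized.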
[ THE 1U PROPOSITION ]
A discussion about what type of rack to buy in some ways puts the cart before the horse. The customer still needs to decide what size of rackmount server chassis to buy. 1U? 5U? Something in between? There aren’t many hard and fast rules here, but there is plenty of advice and food for thought.
“As the technology gets cheaper and cheaper, a lot of people use 1Us almost like an appliance,” says Peter Chen, director of business development for servers at ASI. “They have racks of 1Us, and if one dies, they just unplug it, plug in a new one, and that’s the diagnosis method. You could say that pedestals aren’t trendy, that maybe the 1U and 2U business is being driven by people like Rackable. But think about what you can do with a pedestal now. You can have effectively eight processors in there. What SMB needs more than eight CPUs? That’s pretty scalable right there. It wasn’t too long ago that we were thinking two cores was good enough for a small business.”
Pedestals don’t have a monopoly on running eight cores, though. Plenty of 1U systems now have 2P capacity. Supermicro’s basic 1U SuperServer 6015 family, for instance, takes up to two quad-core Xeon processors per chassis while still accommodating up to 64GB of RAM, dual-port Gigabit Ethernet, up to four hot-swap hard drives, Supermicro’s Universal I/O slot, and more. In fact, the new 6015T (for “Twin”) models fit two 2P boards into a single 1U chassis, yielding up to four processors and 16 cores per pizza box. Chen is right that a pedestal with eight cores is plenty scalable for most small business needs, but things still evolve. The server that proves more than sufficient for 50 users may crumble under 250. Scalability always has to be considered from the outset.
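For a sense of what that density buys in practice, here is a short sketch; the 42U rack, the 2U reserved for a switch and console, and the quad-core parts are illustrative assumptions.

```python
# Cores per rack for conventional 1U servers versus the "Twin" layout described above.
# Rack size, reserved space, and quad-core CPUs are assumptions for illustration.

usable_u = 42 - 2          # assume a 42U rack with 2U kept for a switch and console
cores_per_cpu = 4          # quad-core Xeon, as in the 6015 family

configs = {
    "Conventional 1U, one 2P board per chassis": 2 * cores_per_cpu,
    "Twin 1U, two 2P boards per chassis": 2 * 2 * cores_per_cpu,
}

for name, cores_per_u in configs.items():
    print(f"{name}: {usable_u * cores_per_u} cores per rack")
```

Under those assumptions a rack of conventional 2P 1U servers tops out around 320 cores, while the Twin layout doubles that to roughly 640, which is the whole appeal for the density-hungry HPC buyers discussed next.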
Also consider build quality. These are thin boxes that need to hold up and be as tough after five years as they were after five days. Sam Sanchez of Coastline Micro told us of one instance when the end customer had a time window that had to be met, and the only available 1U hardware was from a budget-oriented import house. After a few weeks, these 1Us started visibly bowing in their racks. It was a terrible embarrassment for the reseller and a serious risk for the buyer. Ultimately, Coastline Micro helped to pull out all of the flimsy servers and replace them with Intel units that have held up much better. You know the kinds of quality points to look for. Is the sheet metal too thin? Are there sharp edges inside the chassis? Can you hear vibration or rattling? When dealing with barebones systems, is the cable management tight and tidy or dangerously haphazard? The lessons you’ve learned with shoddy desktops still apply here.
Ultimately, 1U machines tend to fall into one of two camps. One may be as Chen describes above, with the system dedicated to a single application and designed for easy redundancy. Alternatively, some customers, particularly those with HPC applications, are simply after CPU density. They want the cheapest, most reliable mass of MIPS that can be crammed into the smallest possible space.
The twin design mentioned above is an odd exception. Supermicro has its own SKUs, and Intel is making inroads with twin systems based on its X38ML motherboard, which is a half-width, uniprocessor solution based on the X38 chipset. Applications that thrive in 1P systems mesh well with twin system deployments. Twins are not well-suited to general server apps and multitasking, so be very aware of the customer’s usage scenario before recommending twins simply for their density. Also note that twins carry a higher cost in administration; you’re caring for two systems per box instead of one. The extra management requires software tools that only become cost effective once at least five or six systems have been implemented. No less important is that you now have two servers dangling from one power supply, which poses its own risks and complications. According to Silicon Mechanics’ Ken Hotstetler, twins typically end up in stateless grid computing environments where you don’t care so much about individual systems.