By William Van Winkle
When talking with resellers, we’re struck again and again by three facts: 1) Many resellers know less about server fundamentals than they let on. You can’t go out and pitch SANs when you don’t understand RAID. 2) Perhaps it’s because of this knowledge gap that many resellers confess to preferring big-brand servers rather than creating their own solutions. 3) Desktop PCs are widely sold to small businesses, where they’re expected to perform as servers. In a two-part series, we’re going to go back to basics with servers. It’s time to explore the foundations of what constitutes “server-class” and dissect the components, options, and sales angles behind SMB servers. By the time we’re done, you’re going to see that OEM boxes, while a valid option with their own merits, may not always be the smartest go-to product. Better yet, you’re going to walk away with plenty of ideas on how to differentiate your servers, add value, and win more business.
IDC’S WORLDWIDE QUARTERLY SERVER TRACKER summary for the third quarter of 2007 shows 8.1% year-over-year growth in “volume” servers, meaning those costing under $25,000. IDC shows that only 13.4% of the server market exists beyond the top five OEMs. That could be interpreted as bad news...or as an opportunity.
“Concerns about the economy, particularly in the U.S., are causing customers to re-think their infrastructure needs at the same time that new levels of compute and power densities are expanding power and cooling challenges and driving different IT infrastructure acquisition patterns in the market,” noted Matt Eastwood, group vice president of Enterprise Platforms at IDC, in a report statement. “IDC believes that we are in the early stages of a market-wide transition, which will require significant IT investment in a more flexible IT fabric.”
So we have quickly changing product models based on compute and energy demands combined with mounting pressure for lower-cost solutions. On the surface, those IDC numbers may sound pretty uninspiring. (Gartner’s Q3 numbers reported almost exactly the same thing.) But the fact that IBM, with its old-school approach to server products, is losing share so quickly (down 8.5% year-over-year) tells us that those who can embrace newer models quickly stand to win ground. To us, 2008 sounds like a perfect chance for resellers to jump in with superior alternatives to what the tier-ones are pitching. The smaller the end-user business, the better your odds of success, both because this is the segment with top growth and because your local touch and educational influence will carry more weight than a faceless OEM’s Web site.
Whether servers are already part of your offering mix or you’re mostly new to the space, this is a rapidly expanding product category rife with opportunity and applicable to businesses of all sizes, even fledgling garage operations. However, you need to understand the options available within the SMB server category before you can recommend the right specific configuration for your client and ace out the competition. So from entry-level to volume racks, we’re going to breeze through the field and make sure you’re up to speed on the essential hardware options.
[ FROM DESKTOP TO SERVER ]
Imagine a small business of, say, 15 or 20 employees in need of a Microsoft Small Business Server machine. Microsoft’s system requirements recommend a 1GHz or faster CPU and 1GB or more of memory. No worries; any modern machine can do that. With the requirements covered, budget concerns kick in. Golly, here’s an HP ProLiant ML110 G5 starting at only $559—sweet!
Hmm. A little clicking around shows that the only way to get HP’s $559 price is to forgo a service agreement, which isn’t such a hot plan when your business depends on the machine. The system includes only a single 80GB desktop-class SATA drive, so there’s not only low storage capacity but also zilch for hard disk data protection. Predictably, there’s no operating system, but HP does take the smart step of using ECC memory rather than standard non-ECC DDR2 modules. However, only one 512MB module, the absolute minimum allowed for SBS, is installed at that price. The kicker is HP’s inclusion of the single-core “Conroe-L” Celeron 420 processor, which is about the slowest, lowest-featured thing Intel manufactures with the Core microarchitecture. Celeron is great for data appliances, such as a NAS box, but it’s a lackluster foundation for one’s key business operations.
From here, you know the drill. One can upsell the ProLiant into a fair server, but doing so requires many more hundreds of dollars. Even with a fair configuration, this machine will still only be a “desktop on its side,” the common industry phrase for taking a desktop built for desktop apps, putting it on the floor, and using it for server apps. The approach can work, but it’s a bit like asking a kid with a freshly printed driver’s license to step into a NASCAR race. He can probably get around the track, but it’s a fair bet that disaster will follow soon enough.
What’s wrong with running a desktop as a server? Sometimes nothing. If an office of 10 people wants to turn an old Pentium 4 box into a print server, no one’s going to cry if a job takes an extra two or three seconds. (Of course, if that old hardware, built to consumer-level quality standards, dies and leaves the whole office unable to print, that would be a problem.)
Sometimes, the difference between desktop and server hardware is paramount. In the context of hardware, a “server” is a computer built to run server applications, often with a heavy workload placed on key components. Servers are expected to run for extended hours, usually non-stop for years, and require little hands-on maintenance. Nobody expects this from desktop products. Even hardcore gaming rigs, despite occasionally being pushed to maximum load, aren’t expected to perform under such conditions on a 24x7 basis.
“If I put myself in the shoes of a reseller who’s done a lot of desktop business,” says Intel channel manager Brian Jarvis, “I’m likely to have customers come in and start asking for server solutions. I’m comfortable with the desktop space, so maybe I sell them a desktop solution as a server. It may work, but I may also be shortchanging that customer. But I don’t know the ins and outs of servers. I don’t know backplanes. I don’t know where to start here. If I’m that reseller, I go to a rock-solid vendor I can trust and who can support me. I think Intel is an excellent solution for resellers who want to get into that space and not take a risk. We look at that desktop-on-its-side business as a tremendous channel opportunity to get beyond that commoditized desktop business and into the server world.”
To get a better understanding of a server’s building blocks and what constitutes “server-quality,” let’s take a quick spin through the primary components and see what separates these two hardware classes.
[ PROCESSORS AND CORES ]
As you probably know, not much separates desktop and server CPUs these days. The same processor microarchitectures are used on both sides of the fence. This holds true for both Intel and AMD. The primary difference is whether the chip must stand by itself in a uni-processor (1P) environment or can collaborate with identical versions of itself in a dual-processor (2P) or multi-processor (typically 4P) environment. Examples in the 1P server space include Intel’s quad-core X3200 series and AMD’s Opteron 1200 line, both of which have direct counterparts in the Core 2 and Athlon 64 X2 families, respectively.
According to Intel, roughly 40% of today’s whitebox servers are 1P systems, although many of these are desktops, not “true” servers. When a customer sees that a Xeon X3220 is the exact same chip as the Core 2 Quad Q6600, his first inclination may be to shrug and think, “It’s the same thing, and desktops are cheaper, so I’ll buy a desktop.” Even if the only difference between these chips is the labels stamped on their packaging, there are secondary factors to consider, such as support from both the processor vendor and ISVs. From the reseller’s perspective, there’s value in selling a server brand rather than a consumer brand, even if pricing for both chips is the same.
The majority of servers sold are 2P systems, which cannot run desktop processors. We know from speaking with SYNNEX that the distributor’s ratio of server processors to motherboards is a little over 2.5-to-1, meaning that most boards are going out with two processors attached. That said, just because a 2P motherboard has two sockets doesn’t mean the initial sale must sport two processors. A customer wanting a 1P server may be fine with hedging against future growth by spending more for a 2P motherboard and 2P-compatible processor yet still only making an initial deployment with the single processor needed for the job. This is especially valid when discussing quad-core CPU options.
“There are many application types that are not processor-sensitive, that don’t demand a dual-processor system,” says Ken Hotstetler, president of Silicon Mechanics, a Seattle-based server reseller. “For a small company that’s just going to need a machine for serving some flat Web pages and not be under a lot of load, a uniprocessor system makes a great deal of sense. In addition, there are high-performance computing workloads that benefit greatly from uniprocessor architectures. Not to get too granular, but that often depends on how well your application can take advantage of multiple cores inside of a single processor and then the memory access patterns that result from, say, Intel’s SMP [symmetrical multi-processing] architecture versus a uniprocessor model.”

In any environment where load balancing comes into play and the problem can be solved by more than one physical unit or node, the economics of whether a single- or dual-processor system makes sense can be figured by analyzing how much work each processor is getting done. In a Web farm, for example, it may be that 30 uni-processor boxes will prove more economical than running 20 dual-processor systems. Some organizations might turn to virtualization to squeeze more efficiency out of those 20 2P systems, but virtualization costs money on several levels, and some applications don’t fit well with today’s virtualization platforms. Virtualization can be a strong selling angle, but only after you’re sure it fits with the client’s application set—a situation that can be less common than some vendors might have us believe.
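The back-of-the-envelope math behind that farm comparison is easy to sketch. The prices and throughput figures below are hypothetical, not from the article; the point is only how node counts and totals fall out once you fix a target load and an honest scaling factor (2P boxes rarely deliver a perfect 2x).

```python
import math

def farm_cost(node_price, nodes):
    """Total hardware cost of a farm of identical nodes."""
    return node_price * nodes

# Assumed figures for illustration only: a 1P box at $1,200 serving
# 10,000 requests/sec, a 2P box at $2,200 serving 18,000 requests/sec
# (about 1.8x, since scaling is rarely perfectly linear).
REQS_PER_1P = 10_000
REQS_PER_2P = 18_000
TARGET = 300_000   # total requests/sec the site must handle

nodes_1p = math.ceil(TARGET / REQS_PER_1P)   # 30 uniprocessor boxes
nodes_2p = math.ceil(TARGET / REQS_PER_2P)   # 17 dual-processor boxes

cost_1p = farm_cost(1_200, nodes_1p)   # $36,000
cost_2p = farm_cost(2_200, nodes_2p)   # $37,400
```

With these particular (made-up) numbers the uniprocessor farm comes out cheaper, but shift the per-node price or the scaling factor a little and the answer flips, which is exactly why the analysis has to be run per client rather than assumed.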
“At this time, people probably do perceive virtualization as being overhyped,” says Silicon Mechanics’ Hotstetler. “We will end up in a massively virtualized world eventually, but the SMB space is traditionally not filled with early adopters for those kinds of more complex technologies. How and when? Those are the questions. I don’t think it’s now, but... soon.”
The physical difference between desktop and server processors is almost nil if we’re talking about 1P systems. Moving from a desktop CPU to a 2P server processor is a night-and-day affair, but more because the applications behind them are likely to be different, as are the customer’s expectations for how that machine will need to scale with his company’s growth. You might suppose that a client who bought a pair of dual-core 2P processors might only need a quad-core 1P setup today, but actual sales show this is rarely the case. Customers tend to approach 1P and 2P configurations as much with a mindset as with a hard and fast list of bandwidth requirements. Those who bought 2P systems in the past tend to run applications that require 2P-class resources, and they understand that applications don’t stand still. New releases grow and require more bandwidth. Essentially, those who bought 2P before will continue to buy 2P going forward, even though their software requirements may not yet have caught up to the increase in hardware capabilities.
A small part of the whitebox server market (typically less than 10 percent) serves the 4P/MP space. For this, AMD has its Opteron 8000 series and Intel its Xeon 7000 group. AMD recently updated its MP line to the new quad-core Barcelona design (chips with x3xx model names), and Intel recently introduced its Core microarchitecture into the dual-core Xeon 7200 and quad-core 7300 series. In general, the only significant MP opportunity for channel resellers is in high-performance computing (HPC), where processor core density is often the top priority. “We’re looking at some products where we’re going to have 4P inside of a pedestal chassis,” says Hotstetler. “That’s a low-end high-performance computing play from our perspective. We’re looking to target researchers, lab workers, [people who are in] close proximity to expensive pieces of equipment where high-speed processing and data exchange are needed.”
While 4P pedestals may be a growing possibility for lower-load jobs, expect 1U/2U rackmount stacks and blade enclosures to be more common in the HPC market. Is it worth pursuing this growth space where whiteboxes have rarely gone before? Absolutely. According to IDC, lower entry costs, often starting around $10,000, are growing the low-end HPC market by leaps and bounds. In 2007, third-quarter processor shipments were up 59% from Q3 in 2006. Since 2003, HPC share of the server market has more than doubled, accounting for 26% of all server CPU shipments as of late 2006. IDC notes that “among the fast-growing vertical segments are biosciences, geosciences (oil and gas), computer-aided engineering (CAE), electronic design automation (EDA), defense, and university research.”
[ MOTHERBOARD ]
Viewed at a distance, you might think that desktop and server boards are closely related. ATX form factor server boards are common in the 1P and compact category. You’ll see many of the same RAM slots, chip heatsinks, SATA connectors, backplane ports, and so on. However, closer examination will quickly reveal differences. With the 2P Intel S5400SF board, the form factor is 12” x 13”, not ATX’s 12” x 9.6”. There are 16 slots for a total of 64GB of system memory (desktop boards typically max out at four slots), and the memory type is FB-DIMM, Intel’s chosen format for high I/O servers. Under one of the heatsinks is an IPMI (Intelligent Platform Management Interface) 2.0 controller. This allows for remote monitoring and management of servers, much like what Intel facilitates with vPro in its desktop business boards, only IPMI places more emphasis on such things as temperature, fan, and voltage conditions—the factors that become more important in a system meant for 24x7 operation.
Server motherboards usually utilize server chipsets. AMD’s Opteron can use the same chipset from suppliers such as NVIDIA and ServerWorks to span 1P through 8P designs, whereas Intel uses a different chipset family for each of its 1P, 2P, and 4P CPU groupings. In the 1P world, you still run into desktop core logic ported over into commercial products, such as the X38 chipset planted on Intel’s X38ML 1P motherboard. But this is an odd exception because the X38ML uses a miniature form factor aimed at placing two motherboards in a 1U rackmount chassis. You’ll also have cases like Intel’s 1P-based 3200/3210 chipset, which bears a striking similarity to the company’s X38. The key differences between the two platforms come from how the chipset’s features are implemented in the motherboard’s BIOS. For instance, BIOSes for the X38 emphasize overclocking, a feature set that is almost universally scorned in the server world for stability and reliability reasons.
The big names have their own issues, of course, flexibility and cost being list-toppers. For instance, IBM’s system includes solid-state drives, which may be a nifty idea when high-transaction access time is critical, but for everyone else it is simply overkill. Again, this is a good example of enterprise tradition falling out of touch with an SMB target audience. If blades are really going to succeed with clients sporting 500 or fewer seats, more attention has to be paid to core functionality out of the box and higher levels of upgradeability. It’s OK to look like enterprise equipment and in many ways act like enterprise equipment, but the fine details need to show awareness of specific SMB needs.
With Supermicro’s OfficeBlade, you see exactly that. The OfficeBlade is a slightly modified version of Supermicro’s SuperBlade aimed at SMBs. The OfficeBlade emphasizes dual-processor motherboards, reducing the total core count. (However, quad-processor Opteron blades are available.) This combined with other factors, such as low-noise power supplies and exclusive use of up to eight DDR2 modules rather than FB-DIMMs, allows Supermicro to have the OfficeBlade output only 50 dB when fully loaded with 10 blades and six 2.5” drives in each node. In contrast, the BladeCenter S outputs 64 dB and the HP c3000 67 dB. Who cares? SMBs, naturally. Many small businesses don’t have environmentally controlled server rooms. These blade boxes have to coexist on racks, tables, or in nearby closets with people trying to work and concentrate. Systems that produce low noise without taking a performance hit become a much higher priority.
In general, server motherboards are designed more for reliability and higher performance than their desktop counterparts. This could mean higher PCB layer counts for better trace isolation and lower signal interference. Surface-mounted components, such as capacitors and voltage regulators, are higher-grade. Manufacturers dedicate more resources to the design, quality testing, and validation of server motherboards because they know that while a consumer may be satisfied with 99.0% uptime, a mission-critical server needs uptime measured in additional nines to the right of the decimal.
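Those “nines” translate directly into hours of allowable downtime per year, and the arithmetic is worth having at hand when you make this pitch to a client:

```python
# Each extra nine of uptime cuts allowable annual downtime tenfold.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(uptime_pct):
    """Hours of downtime per year at a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

# 99.0% uptime allows roughly 87.6 hours of downtime a year --
# tolerable for a desktop. 99.99% allows under an hour a year --
# the territory a mission-critical server is expected to occupy.
```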
Server boards beyond the 1P space tend to offer additional features that assist high-bandwidth applications. In the old days, this meant PCI-X slots, and you still see many server boards with PCI-X slots sitting alongside PCI Express alternatives. Dual Gigabit Ethernet ports are standard server fare now. Also watch for proprietary expansion support. Tyan’s tiny “TARO” modules, which include Adaptec RAID storage controllers and remote management cards, are a good example of this.
“One thing that people don’t think about a lot is that desktop boards tend to have much shorter life cycles,” says Doug Bone, vice president of SYNNEX’s server group. “They tend to be end-of-lifed more quickly. One thing that’s important in the server space is knowing that you’ll have a product out there for a particular period of time. It’s not going to be revved or changed in a short time so you won’t be able to buy more of the same thing. If you have a contract for rolling out a large number of the same thing, maybe across many locations, for a long period of time, you need access to those parts. There’s no technical reason why servers need that capability more than desktops, but as a practical matter, server products tend to be more embedded in vendors’ road maps, which allows you to build solutions that stay unchanged for longer."
[ MEMORY ]
The DDR3 transition currently getting started in the desktop world has yet to make itself felt in servers. Instead, DDR2 remains the standard in new 1P server machines. We can’t say if there’s a significant quality difference between premium consumer DDR2 modules and their server variants, although it seems unlikely. Corsair makes statements such as: “Data lines are carefully engineered for noise immunity; clock lines are optimized for minimum skew. All modules use JEDEC-compliant six-layer, impedance-controlled printed circuit boards, with 30 micro-inches of selectively plated gold ensuring a proper interface with the DIMM socket.” We’re less inclined to place much weight on statements like these and more inclined to pay attention to which modules the motherboard manufacturers validate for their products after extensive lab testing.
Server platforms tend to implement support for registered and error-correcting code (ECC) functionality, both of which are usually ignored by desktops. A “registered” module is one that buffers signals in order to improve signal integrity. This often incurs a latency penalty, but many server owners would rather emphasize absolute data reliability over a bit more speed. ECC also improves data reliability by using an extra byte lane to check for major data errors and correct minor ones. ECC and registration support are usually tied together, although they can be implemented separately.
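To make the ECC idea concrete, the sketch below implements a Hamming(7,4) code, the textbook ancestor of the SECDED codes real ECC DIMMs apply across each 64-bit data word plus its 8-bit check lane. It is purely illustrative (memory controllers do this in hardware, at wider widths), but it shows how a handful of check bits can pinpoint and flip a single bad bit:

```python
def encode(d):
    """Pack 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]   # parity over positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # parity over positions 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # parity over positions 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """Fix at most one flipped bit in codeword c, return the 4 data bits."""
    syndrome = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= pos   # XOR of the positions holding a 1
    if syndrome:              # nonzero syndrome names the bad position
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flipping any one of the seven stored bits still lets correct() recover the original four data bits; two simultaneous flips defeat the code, which is part of why server platforms also log ECC events so a failing module can be replaced before errors pile up.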
Fully buffered DIMMs (FB-DIMMs) are much like DDR2 modules except that they implement an Advanced Memory Buffer (AMB), that big chip you see in the center of each module. Traditional DDR2 modules operate in parallel, with signal traces leading from each DIMM slot back to the memory controller. (The controller is in the northbridge on Intel platforms and in the CPU on AMD.) FB-DIMMs utilize serial operation. Rather than write straight from the memory controller to the DIMM, the memory controller writes to a given AMB in the serial memory module chain. The target AMB then addresses its module. Along the way, AMBs implement error correction. The end result of the AMB innovation is increased memory data width, although there are penalties in increased latency and power consumption. In 2007, FB-DIMM latencies came down to the CL3 and CL5 range, but power remains a problem. Simply put, the performance of FB-DIMMs is similar to DDR2, but DDR2 is cheaper and consumes far less energy.
Intel has been the chief proponent of FB-DIMM technology, and while AMD has had trouble getting Opteron chips out the door lately, the company has had no problem widely publicizing how much less energy its server platform consumes. The key ingredient in the Opteron platform’s power advantage is its use of DDR2 instead of FB-DIMM. Perhaps this is why Intel recently (and quietly) unveiled the 5100 chipset, its first 2P core logic since before the Bensley platform’s debut to use registered and ECC-compatible DDR2. The 5100 is identical in almost every way to the 5000 chipset family; only the memory controller has changed. So for those who want to sell Intel servers and still have a strong green story to tell, the 5100 and DDR2 have a persuasive plot.
The future of FB-DIMMs looks shaky. Despite the new Intel 5400 server chipset bumping frequency support up to 1600 MHz, the 2007 Intel Developer Forum revealed that memory manufacturers are not currently planning on bringing FB-DIMM technology to DDR3. AMD has never embraced FB-DIMM and removed the format from its road maps back in the fall of 2006.
Copyright © 2008 RAM Magazine. All rights reserved.
Do not duplicate or redistribute in any form.