When Less is More

Since Dell launched the PowerEdge server line 10 years ago, we have introduced servers in waves or “generations,” where each generation is defined by a common set of system technologies—common networking chips, RAID subsystems, and so on.  The approach makes it easier for customers to adopt those technologies and has been well received, but concentrating change into narrow windows does put a spotlight on those changes.  A couple of weeks ago, Dell introduced its 9G PowerEdge servers, an event that for me highlighted a series of changes, both for Dell and for the industry in general.  On a personal level, it marked the end of four years of managing the server development team.  Since those introductions, I’ve moved on to lead platform engineering for all Dell products, along with my “two-in-a-box” partner Stuart Caffey.

From an industry perspective, we are seeing a real shift in historical trends. For years, system performance was one of the most important considerations for vendors and customers alike, with the power and thermal impact on datacenters farther down the list of priorities.  Over the past five years we’ve seen huge performance increases, but power has increased as well.  To cite a Dell example (other companies are similar): back in 2000, our mainstream 2U server used a 330-watt power supply.  Just four years later, the power requirement for a similar server had more than doubled, to 700 watts.  Rising energy costs and the increasingly complex job of re-working datacenters have made power and cooling a priority for customers.  I’ve spoken with customers contemplating new datacenters who panicked when they looked at historical power curves and projected them into the future.  Some system companies have taken advantage of that panic and offered “power solutions” of highly dubious value, such as introducing low-voltage (48V) DC power distribution into data centers—not a good idea for a lot of reasons.  The good news for the folks planning datacenters is that the industry has listened and power trends have hit an inflection point: the mainstream servers of 2006 will dissipate less power than their predecessors while delivering far more performance.
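
To see why those projections looked so scary, here is a quick back-of-the-envelope sketch (Python, purely illustrative: the 330 W and 700 W figures and the four-year gap come from the paragraph above, while the servers-per-rack count and the 2010 horizon are assumptions I have picked for illustration) of the kind of straight-line extrapolation a datacenter planner might have run:

```python
# Illustrative only: extrapolate per-server power from the two data points
# cited above (a 330 W mainstream 2U in 2000, 700 W in 2004) and see what a
# naive projection of that curve implies for a rack of such servers.

def projected_power(p_start, p_end, years_observed, years_ahead):
    """Project power forward assuming the observed compound growth continues."""
    annual_growth = (p_end / p_start) ** (1.0 / years_observed)
    return p_end * annual_growth ** years_ahead

servers_per_rack = 20   # assumption: roughly twenty 2U servers in a 42U cabinet
watts_2010 = projected_power(330, 700, years_observed=4, years_ahead=6)

print(f"Projected per-server power in 2010: {watts_2010:.0f} W")
print(f"Projected rack load in 2010: {servers_per_rack * watts_2010 / 1000:.1f} kW")
```

Projected out like that, a mainstream 2U lands well above 2 kW by the end of the decade and a single cabinet above 40 kW under these assumptions, which is exactly the sort of number that caused the panic; the inflection point described above is what breaks that extrapolation.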

Back to the 2U example: the latest Dell 2U, the PowerEdge 2950 with Intel’s Xeon 5100 series processors, delivers more than double the overall performance yet requires 25% less power at maximum load.  The trend changed because the whole industry, from the system companies to the processor vendors, recognized the problem a few years ago and shifted its thinking from “if we build it they will come, no matter what the power and cooling costs” to “power in the datacenter is a fixed, precious resource.”
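
As a rough sanity check on what those two figures imply together (a minimal sketch; the 2x and 25% numbers are the ones quoted above, rounded, not new measurements), the performance-per-watt gain is larger than either number alone suggests:

```python
# Back-of-the-envelope: combine the quoted generation-over-generation figures.
relative_performance = 2.0   # "more than double the overall performance"
relative_power = 0.75        # "25% less power at maximum load"

perf_per_watt_gain = relative_performance / relative_power
print(f"Performance per watt improves by roughly {perf_per_watt_gain:.1f}x")  # ~2.7x
```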

I don’t want to claim we’ve seen the high-water mark in power, or that future servers will always dissipate less power.  Quad-core processors are coming in the next year, and they will draw more power than today’s CPUs.  You can see the foreshadowing of quad core in the power supply ratings of some systems just introduced: the ratings are higher than the current round of CPUs, memory, etc. requires.  What we’re seeing is the end of power doubling—if power goes up from here, the increase will likely be minimal.  Dell, and likely others, will focus on reducing power and improving efficiency.  One focus will be on offsetting increases in one area with decreases in another.  A subtle example in this generation of Dell servers is the use of TCP/IP Offload Engines (TOE) in our embedded Ethernet network adapters.  Dell made TOE standard on our mainstream servers in part because it can improve network performance, but mainly because it greatly reduces CPU utilization, and hence power, in many real-world applications.
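
To illustrate the offsetting idea with the TOE example, here is a minimal model (all of the utilization and wattage numbers below are hypothetical placeholders chosen for illustration, not measured Dell data) of how trimming CPU utilization can translate into system-level watts saved:

```python
# Hypothetical model: CPU package power scales roughly between an idle and a
# maximum figure with utilization, so an offload engine that removes network
# protocol processing from the host CPUs saves power at the system level.

def cpu_power(utilization, idle_w=30.0, max_w=95.0):
    """Very rough linear model of per-socket CPU power vs. utilization (0.0-1.0)."""
    return idle_w + (max_w - idle_w) * utilization

util_without_toe = 0.60   # assumed: TCP/IP processing handled by the host CPUs
util_with_toe = 0.45      # assumed: same workload with the offload engine enabled
sockets = 2               # dual-socket 2U server

saved_watts = sockets * (cpu_power(util_without_toe) - cpu_power(util_with_toe))
print(f"Estimated CPU power saved per server: {saved_watts:.0f} W")
```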

We’d like to hear your thoughts.  Stay tuned for more details from Dell on this topic.

Update:  Dell one2one reader David G. had a comment about the VersaRails system.  In this vlog, Greg Henderson, an engineer from our server rack team, shows off its functionality.



23 thoughts on “When Less is More”

  1.  Good to know that Dell, in addition to Intel, recognizes and is sensitive to power consumption.  I’ve had the anguish of a nice rack of PowerEdge servers crashing during peak hours due to power spikes.  The redundant power supplies are a great advantage; however, when each 2U server sucks up 5A, a 42U cabinet is generally not equipped with that kind of capacity.

    6450 Power

     I’ve had a number of bad power experiences with the 6450s.  With three power supplies and several redundancies throughout the system architecture, it’s shocking that the system relied on a single power cable.  This limitation meant that independent power feeds couldn’t be used, and each time power on that feed surged, the 6450s were sure to die.

     At that time, Dell recommended and provided a split power cable, one that could plug into two outlets.  While that works well with outlets on different circuits, it doesn’t work in many datacenter scenarios.  Datacenters often supply two independent power feeds that are not in phase; plugging into both would cause a short or degrade the AC/DC converters.

    BTW, thank you for starting a company blog.  As with any company, this gives an effective window into the inner personality and allows for a transparent exchange of information.  This is especially important since Dell customer support can only go so far.

  2. Work on reliability first.  Your servers are borderline garbage, from flimsy cases that practically buckle if you lift them by a corner to poor performance.  We gave up on them long ago and switched to the competition (IBM and HP), and we have had a much better experience ever since.

    I would never recommend Dell to anyone, especially in a server capacity.  Reliability and performance from them are a JOKE!

  3. Love the new 2950 – except for the new VersaRail system designed to support both square-hole and round-hole racks. The rotating coupler is a royal pain to use, and we had to fight to get it to fit into our racks at Level(3). We almost had to scrap the install of a new 2950 because of these new rails. It’s like the recessed Ethernet ports on the 1850s (thankfully fixed in the 1950s). Did these go through any serious usability testing?

  4. IBM servers? Get real. The only real challenge Dell has is HP (and HP is WELL in the lead).

    The 2950s we have been using have been fine, as have the 2850s and 6850s. Never a problem.

    Now the IBM stuff we tried to use? It’s now sitting unused in the corner of the data center.

  5. David R.:

    Thanks for the feedback – we’ve been getting quite a bit both on Dell one2one and elsewhere in the blogosphere as Lionel mentions. We all definitely want the blog to be a conversation and will learn, adjust and tune along the way.    

    You are right on target with your comments about the 6450 power supply limitations.  The 6450 was one of our early 4-processor servers and we’ve learned since then based on similar feedback.  The 6450 had three power supplies and could nominally tolerate the loss of any one and keep running – this is usually referred to as a “2+1” topology.   The problem on the 6450 was that all three shared a common AC input plane, so you couldn’t tie them to completely independent AC circuits.   Bottom line: you could lose a power supply and be OK, but losing a power cord or a power circuit (say a UPS goes offline or a circuit breaker trips on one of the inputs) means you’re toast.

    Starting with the “6G” servers we changed the approach for redundant power supplies, implementing two per system in a “1+1” arrangement.  The power supplies are now tied together only on the DC side and have separate cords that can be plugged into totally independent AC circuits.  The systems can now tolerate losing a power supply, a power cord, or one of the AC input circuits without failing.  (A small illustrative sketch of the two topologies appears after the comments below.)

    One of the unfortunate side effects of the power arms race was that, in the past year or two, high-end configurations of the 4-processor servers were drawing enough power to exceed the current limits of standard 110V power cords.  That meant the input voltage had to increase to 220V, and customers had to support 220V when they’d often wired only for 110V.  Many customers were OK with that, but a few have been unpleasantly surprised.  Even with the arms race slowing or ending, I’m afraid 220V is here to stay in the data center, both for 4-processor systems and for modular/blade systems.

  6. We are no longer buying Dell servers. When one of our Dell servers comes to the end of its life (financially, mechanically or electronically) we replace it with HP. Even our financial person says the constant repair is so costly as to make the more expensive but more reliable HP servers a better value than the cheap Dell servers. We are even standing down the Dell servers that have not yet reached the end of their cost lifecycle in our facility and replacing them with HP.

  7. Forrest Norrod, Vice President of Engineering:  the one key question i love to ask and have never gotten a good answer from any vendor is this:

    most customer support lines are measured by duration of call and number of calls handled.  i’ve been ranting for about 20 years, that the length of the call is irrelevant and that the goal of the customer response center should be to reduce the number of calls by helping eliminate the problems that cause them; ie, to be part of the solution process.

    do you have "engineered into" your customer service center any kind of process that feeds back the complaints and problems to the engineering and design folks [or manufacturing organizations] who "created" the problems in the first place?

    if not, you’re managing the customer "help lines" the absolutely wrong way.

    thanks for listening.

  8. David G:

    FYI…  To respond to your comment about the 2950 rails, I’ve updated the bottom of my post with a vlog from Greg Henderson, one of our rack engineers.

    Glad to see you like the 2950.

  9. Having just got my hands on our first 1950, I am sadly disappointed. They have certainly taken on criticism of the 1850 and improved the following areas:

    * Return to metal bezels, useful for self defence.

    * Removing the need for the unreliable dongles on the Ethernet ports

    * Chassis seems to have less structural plastic than the 1850, making it more sturdy.

    The one thing that I am particularly disappointed with is the decision to move the PSU placement again. Surely Dell realise this makes for a cabling nightmare in racks with a mix of generations. I was happy to be retiring the last of the 1650s, which also suffered from the PSU placement issue.

    Legacy-free was a minor niggle, as the KVM I had in the lab was not USB; I can imagine this may catch some people out.

    I haven’t got my hands on a 2.5" drive version yet, but that certainly brings interesting potential for RAID options.

  10. Points to sort out in relation to servers:

    1. Universal drive caddies (as HP has)

    2. Dual-input power supplies (for different circuits or UPSes)

    3. Good RAM

    4. Parts need to be available for a minimum of 3 years, not 3 months

    5. Most rackmount servers don’t need printer, COM, or sound ports

  11. plusaf said:

    "most customer support lines are measured by duration of call and number of calls handled.  i’ve been ranting for about 20 years, that the length of the call is irrelevant and that the goal of the customer response center should be to reduce the number of calls by helping eliminate the problems that cause them; ie, to be part of the solution process. "

    I tend to agree that some call centers are measuring their customer interaction in a way that even drives the support agents nuts. Things like average handling time kill their best effort to resolve the customer’s issue.

  12. I just purchased a PE2950 with the new rail system.  We have a double 2-post open rack for our servers with 12-24 threading in the round holes.  Guess what?  The new rail system DOESN’T FIT.  So far the only solution we have been able to come up with is to purchase an adapter from Rack Solutions, which will cost us $100-$200 more than originally budgeted.

    I would like to ask the same question David Geller asked: did anyone at Dell test these rails for usability in different racks before they were released to market?  If so, you might want to consider firing the person who completed the test!

  13. I have just installed a 2950 into one of our Liebert racks.  I must say it was a tight fit.  Thankfully we use 23″ racks with adapter plates.  (The racks each have their own AC unit inside them, and we have found the added space inside from the 23″ over the 19″ helps keep temps more stable.)  I had to flip the adapter plates around so that the indentation that normally makes the mounting holes line up with the rack holes now gave me the added length needed to mount the rails.

    The build quality of the 2950 is much nicer than the older Dell servers we have/had.  I stuck with the 6-bay 3.5″, as I’m not sold on the 2.5″ yet.  Maybe when the drive options increase I will try them out.

    As mentioned above, Dell’s lack of legacy support for keyboards/mice has caught me off guard a few times, both with this server and when replacing a desktop.  The desktop was a non-issue since a keyboard is only $50, but my multi-user KVMs are not so cheap.

  14. I have just purchased a 2950 as well, and the rails do not work with my standard 2-post 19-inch rack.  The mounting pegs on the server are spaced differently than on the 2850, so the 2850 rails (which work perfectly) are useless.  Dell did not consider 2-post racks in their new rail design.  Needless to say, this is my LAST Dell server.  I am switching over to HP.

  15. We just bought THREE 2950s. During setup and configuration of the RAID controller, she commented that you chose to use LINUX. Not a problem, EXCEPT that the 2950s have NO PS/2 connectors (only USB), and while the mouse worked just fine, the RAID setup did NOT recognize the USB keyboard.  Stupid.  She had to use the mouse to accomplish anything, and even then she couldn’t do things like enter IP addresses into the Linux-based setup program for the server.

  16. I have just purchased a Dell PE 2950 and the rails (the new ones that convert from RapidRails to VersaRails) don’t fit our standard rack. I was told to purchase something from Racksolutions.com instead.

     I am disappointed. I’ll call into Dell to complain up the chain of command.
     

  17. One of our 2950 servers, with 2 x 72GB and 4 x 300GB drives, is only seeing two of the six installed drives via the RAID controller during the normal/initial installation.

    All drives are spinning up with no problems.

    Any idea what may be causing this?

  18. We need to switch from 208V power to 110V in one of our racks.  Is it possible, on a 2850 and 2950, to have one of the dual power supplies fed by 208V and the other fed by 110V simultaneously?  That would let us make the transfer without having to power down the servers.

  19. I too am trying to rack a PE 2950, and the VersaRails won’t fit my rack; they’re too long and I won’t be able to close my door. My 1750 and 1850s work just fine. I too am disappointed!

  20. Well, I purchased my racks and PDUs from Dell, and I have about 40 servers and a CX700 SAN. The PE2950 III will not fit due to the width and the 40 mm longer protrusion on the back side of the rails. So my options are to live without a management arm, return the server, or buy a third-party rail. Since I work for a major university, anything third party is a major pain to obtain, between vendor certification and getting the vendor entered into the purchasing system. I really don't have the time for the additional hassle. I have to buy 7 servers this year to get ready for a major implementation; I need to get back to the position I was in before, when I simply purchased a server and put it into my racks without issue.

  21. I purchased a PE2950 and also (6) R200 servers. The rails are expensive and don't work; they are way too long. They cover all the outlets on my PDU, so installing them would block my ability to plug the servers in. My only choice right now was to lay the 2950 at the bottom of the rack, not my best option since it puts out more heat and heat rises. Not only that, but the R200 round-hole rails wouldn't even line up to connect to all 4 posts. I was happy to see that there was an adjustment slot that would allow it to fit. Then, to my amazement, I was shocked to see that the adjustable piece is riveted, meaning you would have to destroy the rails to move the adjustment arm. What kind of fools designed these rails?

    Now I have $12K in equipment with a deadline fast approaching and no one returns my phone calls. Furthermore, I call in and end up transferred to at least 8 different departments before they eventually can't help me. Each rep asks for the same information before they will help me: order number, name, confirmation of identity, then oops, sorry, but I have to transfer you somewhere else. Then the next person does the same exact thing. Finally I end up leaving a voicemail for someone and they never call back.

    I thought Dell was a good choice. I now find this blog and see I am not the only one going through this pain. I'm out in LA trying to set up a datacenter and it's costing me $200 a day to be here. Very nice! How in the world can they be a publicly traded company of this magnitude when they can't design a rail system that works? It's now apparent this is a widespread problem. I think I am stuck going through http://www.rackmountsolutions.com for some expensive solutions that will eat up more rack space, which I am paying a pretty penny for. I advise anyone to go with HP instead. I never bought anything from HP, but from this read I feel it can only be better than what I and many others are going through. I wish I had found this article sooner. Did I also mention my motherboard on the 2950 lasted one day and then failed? How can I build out a datacenter 3,000 miles from home without reliable equipment? This is going to be very costly. I may just extend my trip, return everything to Dell, and make the switch to HP. This is an absolute nightmare!
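
For readers who want to see the difference between the two power supply topologies described in comment 5 above, here is a minimal illustrative sketch (Python; the scenario and the helper names are hypothetical, not Dell tooling) of why the “1+1” design on independent AC circuits survives losing a circuit while the 6450-style “2+1” design on a shared AC plane does not:

```python
# Illustrative only: compare "2+1" supplies sharing one AC input plane (6450-style)
# with "1+1" supplies on independent AC circuits (6G servers and later).

def server_stays_up(supplies, needed):
    """supplies: list of (supply_healthy, ac_circuit_live); the server stays up
    if at least `needed` healthy supplies still have live AC."""
    live = sum(1 for healthy, circuit_live in supplies if healthy and circuit_live)
    return live >= needed

# 6450-style "2+1": three supplies, two required, all fed from the same AC circuit.
def shared_plane(circuit_a_live):
    return [(True, circuit_a_live)] * 3

# 6G-style "1+1": two supplies, one required, each on its own AC circuit.
def independent(circuit_a_live, circuit_b_live):
    return [(True, circuit_a_live), (True, circuit_b_live)]

print("Survives losing one AC circuit?")
print("  2+1, shared AC plane   :", server_stays_up(shared_plane(False), needed=2))       # False
print("  1+1, independent cords :", server_stays_up(independent(False, True), needed=1))  # True
```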
