Saturday, 4 July 2009

Jeff Atwood is wrong about performance

Jeff Atwood likes referring to his blog post Hardware is cheap, programmers are expensive, where he writes: "Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always."

I totally disagree, of course, and here is why:

* The parts of the hardware that comply with Moore's law are usually so fast that they are not the bottleneck. I cannot remember when 100Mbps ethernet was introduced, or when they started to deploy 10Gbps networks, but many new virtual servers are limited to 10Mbps these days, and that does not smell like Moore's law. Does anybody besides me miss the good old days, when new hard disks had lower seek times than the old ones?

* If you have upgraded all your servers year after year without consolidating, you will notice that your electricity bill is going through the roof. It's very simple: today you are running that 1995 app on an extremely fast server, even though it was built for a slower one. You are simply using more energy to solve the same problem, and that's why everybody tries to reduce the amount of hardware these days. Many data centers are looking into energy efficiency, and they won't just put up a new physical server because you want to save programmer wages.

* Many speed improvements are not what they seem. 100Mbps ethernet is not always 10 times faster than 10Mbps ethernet; it's more complicated than that. The 20-stage pipeline of the Pentium 4 was not an improvement for everybody, either.

* Many performance problems are not related to the number of bits per second, but to latency. Sometimes latency goes up when speed goes up - I have seen several examples of things getting slower as a result of a performance upgrade. The best-known example is probably that the first iPhone used GPRS/EDGE instead of 3G, and that this actually made some things faster than if Apple had implemented 3G right away. The first sketch after this list shows how latency, not bandwidth, can dominate.

* If programming generally disregards performance, the problem is not solved by making the hardware 10 or 100 times faster. A large application that is written totally without regard for performance can easily be more than 1000 times too slow. I have been a troubleshooter on several projects where the application performed more than 1000 times faster when I left; the second sketch below shows how such factors build up.
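
To put numbers on the latency point (a small Python sketch; the link speeds, round-trip times and request counts are made-up assumptions, not measurements): when a client makes many small requests, the round trips dominate, and a link with 10 times the bandwidth but worse latency ends up slower.

```python
# Rough model: time to fetch a number of small resources sequentially.
# All numbers are illustrative assumptions, not benchmarks.

def fetch_time(requests, bytes_each, latency_s, bandwidth_bps):
    """Total time = one round trip per request + raw transfer time."""
    transfer = requests * bytes_each * 8 / bandwidth_bps
    return requests * latency_s + transfer

# 50 small requests of 5 KB each, e.g. a chatty web page or API client.
low_latency    = fetch_time(50, 5_000, latency_s=0.10, bandwidth_bps=1_000_000)
high_bandwidth = fetch_time(50, 5_000, latency_s=0.30, bandwidth_bps=10_000_000)

print(f"1 Mbps link, 100 ms round trips:   {low_latency:.1f} s")     # 7.0 s
print(f"10 Mbps link, 300 ms round trips:  {high_bandwidth:.1f} s")  # 15.2 s
```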
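
And to show how a factor of 1000 appears without anyone writing obviously evil code (again a minimal sketch with hypothetical data, not code from any real project): a linear scan inside a loop multiplies two harmless-looking costs, while a lookup table removes the multiplication.

```python
# Minimal sketch: the same report done carelessly and carefully.
# Data sizes are hypothetical; the point is that costs multiply.
import time

orders = [{"customer_id": i % 10_000, "amount": 1.0} for i in range(10_000)]
customers = [{"id": i, "name": f"c{i}"} for i in range(10_000)]

def slow_report():
    # Linear scan per order: roughly 10,000 * 5,000 comparisons on average.
    totals = {}
    for order in orders:
        name = next(c["name"] for c in customers if c["id"] == order["customer_id"])
        totals[name] = totals.get(name, 0.0) + order["amount"]
    return totals

def fast_report():
    # Build a lookup table once: roughly 10,000 + 10,000 operations.
    by_id = {c["id"]: c["name"] for c in customers}
    totals = {}
    for order in orders:
        name = by_id[order["customer_id"]]
        totals[name] = totals.get(name, 0.0) + order["amount"]
    return totals

t0 = time.perf_counter(); slow_report()
t1 = time.perf_counter(); fast_report()
t2 = time.perf_counter()
print(f"slow: {t1 - t0:.2f} s   fast: {t2 - t1:.4f} s")  # ratio is typically a factor of 1000 or more
```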

But here is the most important reason why programmers should be solving performance problems:

* It takes one programmer very little time to design things well, but it costs technicians, users, other programmers, testers, etc. a lot of time to wait when the software is slow. Bad performance costs a huge amount of money - the back-of-the-envelope calculation below shows how quickly it adds up.
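
Here is what that math can look like (a small Python sketch; every figure in it is an illustrative assumption, not data from a real project):

```python
# Back-of-the-envelope: the yearly cost of everybody waiting vs. one fix.
# Every figure below is an assumption chosen for illustration.
users = 200                     # people using the system daily
wasted_seconds_per_day = 120    # extra waiting per user caused by slowness
working_days = 220
hourly_cost = 50                # fully loaded cost of one user hour

waiting_cost_per_year = users * wasted_seconds_per_day / 3600 * working_days * hourly_cost
fix_cost = 2 * 40 * 100         # two programmer-weeks at 100 per hour

print(f"cost of waiting, per year: {waiting_cost_per_year:,.0f}")  # ~73,333
print(f"one-off cost of the fix:   {fix_cost:,}")                  # 8,000
```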

Here are some good tips:

* Performance improvements of less than a factor of 2 are often not worth spending time on. Do the math and find out how big the overall improvement factor really is before spending time or money on it - see the sketch after this list. If your best finding improves things by less than a factor of 2, you need to search harder.

* Do the math on the benefits, too. If your turnover doubles because your website becomes more responsive, the entire organization should support your efforts.

* The later you start caring about performance, the more expensive it gets.
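
That math is simple enough to script (a small sketch; the 20% and 90% workload splits are made-up examples): only the share of the total time that you actually touch gets faster, so a local 10x win on a small fraction yields a tiny overall factor.

```python
# Overall speedup when only part of the workload gets faster (Amdahl-style).
def overall_speedup(fraction, local_speedup):
    """fraction: share of total time spent in the part you optimize (0..1)."""
    return 1 / ((1 - fraction) + fraction / local_speedup)

print(f"10x on 20% of the time: {overall_speedup(0.20, 10):.2f}x overall")  # 1.22x
print(f"10x on 90% of the time: {overall_speedup(0.90, 10):.2f}x overall")  # 5.26x
```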

Anyway, it seems that Jeff Atwood has realized that he can run into problems, too.

29 comments:

Fabio Gomes said...

This reminds me of US cars: what's the solution to all problems?
More horsepower... then the economic crisis came :)

Anonymous said...

"More horsepower... then the economic crisis came :)"

?????

The problem is that more horsepower was available if you had the money to pay for it. The housing crisis was partly caused by the govt mandating loans to people that could not afford them, and allowing banks to package these worthless assets and sell them. If you could afford it, then great; if not, then you should not have it. Those who can afford the horsepower deserve it. The people that can't then take the bus. I'll throw some coupons your way as you're sweeping my estate. You name it: affordable housing, govt healthcare, min wage, food stamps. All there to raise the self esteem of the non capable. Pull your Yugo off the road so my BMW can do 200kph on the A5. The crisis is when I have to slow down because your cracker box of a "car" can't make it up a hill.

Fabio Gomes said...

Well, that was not my point. The point is that people aren't concerned about the cost of something till they can't afford it anymore.

Why do you need a 5.0 V6 car to go to work and sit in traffic using just first and second gears, burning several gallons of gas, polluting the air and spending lots of money on it?

I think the default answer for this is "because I can", am I wrong?

Javier Santo Domingo said...

I remember that Henry Ford said "never give a man a job a machine can do" or something like that, and it seems the software industry sometimes thinks the same, but as you show, that phrase is wrong most of the time (and I mean besides the bad social impact of that kind of policy).

Anonymous said...

NO, because I like it. I CAN because I CAN do it. I'd rather be driving slow listening to Mozart in my car than be on the bus with all the poor saps on their way to the sweat shop they call work. I work from home so my footprint is small. I cool off in the pool. Keep warm in the hot tub, all solar powered. I can because I can afford it. Maybe those that can't afford it should pedal a bike to work like I did. Again, small footprint. Pollution? Why don't you worry about how much pollution the Chinese pump out. The energy saving light bulbs the govt wants us to use do more harm than good with their mercury content. Wasting my income on this so the govt can spread it around to the poor will not help this contrived panic of global warming. I can use it to hire more people. Make them rich. Fewer people on the bus. Better quality of life. I don't think the US problem with illegal immigrants is because they want to pick fruit in California. They want to get out of that life and give their children a better one. Opportunity for prosperity. The best way to save the world is to make them rich. The hand-outs don't save the world. Why else would a Chinese person hide in the wheel well of a 747 and freeze on their way out of the motherland? Why leave that paradise for the greedy west? Wake up, or get out of the way and be happy to cash your govt check and live like a pauper. See that white flash with the BMW symbol? That is me speeding past you to a happier life.

Anonymous said...

By bad social impact do you mean it is bad that a computer/robot now does the work of many laborers in a sweat shop? A computerized digger with one operator can do the job of many men down a dangerous coal mine. Henry Ford was right. James Watt was right. There have been many worker uprisings against technology, and yet here we are - I have to say, with a better standard of living across the world. The hope of a better life for them and their kids motivates people to be better, learn more, get the better job. Some can't do it. Again, they will come and sweep the floors for the chance to make a better life. Keep on sweeping. Perhaps your kids will make something of themselves.

LDS said...

Atwood is wrong because there are whole classes of speed issues that can't be resolved by throwing more hardware at them. Concurrency, scalability, etc. need a good design and proper coding. You can buy one terabyte of RAM and the fastest 16-core processors for your database, but if calls get serialized because of poor design and coding, it will be slow and won't scale - and adding even more hardware won't solve anything.
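
For example (a minimal Python sketch with a hypothetical application-wide lock, not taken from any real system): one coarse lock around every database call serializes all the threads, so the extra cores bring nothing.

```python
# Minimal sketch of accidental serialization: one coarse lock around every
# "database call" means extra threads (or cores) bring no speedup at all.
import threading, time

db_lock = threading.Lock()      # hypothetical application-wide lock

def query(_sql):
    with db_lock:               # every call waits for the previous one
        time.sleep(0.05)        # stand-in for real database work

def worker():
    for _ in range(10):
        query("SELECT ...")

threads = [threading.Thread(target=worker) for _ in range(8)]
t0 = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
print(f"{time.perf_counter() - t0:.1f} s")  # ~4 s: 8 * 10 * 0.05 s, fully serialized
```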

unused said...

It is a balance, much like was mentioned in Lars' article. You should tackle the 1000x performance bottlenecks, but the 2x bottlenecks should be handled by better hardware.

Just like when you are looking at the runtime of an algorithm: if you can reduce it from n! to just n, then you should do it. But if you are only looking at a move from 3n to 2n, then you are most likely wasting your time.
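
To put some illustrative numbers on that: the gap between n! and n explodes as the input grows, while the gap between 3n and 2n stays a constant 1.5x forever.

```python
# Why a reduction from n! to n is worth chasing, while 3n vs. 2n is a fixed 1.5x.
from math import factorial

for n in (5, 10, 15, 20):
    print(f"n = {n:2d}:  n!/n = {factorial(n) // n:>24,}   (3n)/(2n) = 1.5")
```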

Eric said...

Well, Moore's Law hasn't stalled just yet, but its benefits surely have.
Moore's Law is about cramming more logic into the same space, but when you're limited by CPU speed, it's megahertz you're after, and current Moore's Law advances are about more cores per CPU.

Programming for more cores (or more servers) can often be far more expensive than just getting those 1000x speedups on a single thread.
And those 1000x speedups sure do abound, especially when the devs who designed or wrote the code didn't consider performance, or thought the hardware would solve their performance issues.

Anonymous said...

I was reminded of this:

http://xkcd.com/386/

But on a more serious note: I do agree optimizations are important. While Stack Overflow is a huge hit, I really doubt it actually requires 48 GB of RAM. I guess even if they went for a complete in-memory database they would still not fill up 32 GB.

andri said...

You make some interesting points, but that is a very trollish title; shame on you.

Alan D Huffman said...

This is what I dislike about programmers... They attack arguments using semantic detail instead of considering the spirit of the argument.

So let's do the numbers:

Assume you have a performance issue (like compiling your code). You have 2 options:

A) Optimize the compiler using 1 programmer for a month (or months -- who knows if you'll be successful or how long it will take)

B) Buy faster development boxes at $2000 a pop.

If a programmer makes $8000 a month and a new development box is $2000, you can buy four boxes for every month of development.

Now, to your point, sure if the better boxes don't solve the problem -- and I don't think Jeff would disagree -- spend the time fixing the issue.

This is a rule of thumb... and Jeff is right. As a rule of thumb: ALMOST ALWAYS isn't a bad assumption.

Anonymous said...

Jeff Atwood is always so full of shit that you should not be reading his blog at all.

Anonymous said...

Wow. You are dead wrong on almost /every/ single point. Too many to write a proper rebuttal!

I'll pick... one thing.

A fast machine in 1995 probably uses one third of the power of a modern quad-core machine. You assume that 1 MHz has a fixed power cost that did not change. All the new innovations in "Moore's law" are in fact about power consumption.

Warren said...

Any maxim is wrong as often as it is right. (Including this one.)

People always want silver-bullet maxims, silver-bullet techniques. This blog post points out that people call hardware cheap without thinking about TCO. True enough.

It's always a balance. Using a stupid algorithm and then throwing hardware at the stupid algorithm, instead of taking five minutes to write a proper algorithm, index and optimize your database access correctly, and so on, is just stupid.

That being said, endless optimization is also pointless. One has to have some real idea of the real-world cost of one's actions, and pick a middle-road path. Programmers should be enough of a perfectionist to find faults in their products and improve them, and enough of a pragmatist to get them shipped and stable.

This is a fine balance, something Jeff has partly nailed, and something this post has partly nailed. Nobody has all the wisdom and all the right answers all the time.

Warren

Dave Ackerman said...

*rolls eyes* Nice blog title. You certainly got me to click through, so I guess it worked. However, it's pretty hard to say Jeff Atwood is "wrong". His blog entries are usually (purposely) vague enough that they can't reasonably be disputed. Of course there are some problems where throwing more hardware at them is a bad idea. Of course, sometimes it is much easier to just optimize a little. Mostly Jeff just wants people to read his entry and then discuss - he knows his own ideas pale in comparison to the legions of smart programmers that will comment.

Shane said...

Fabio et al, please disregard "Anonymous"'s rantings.

He most assuredly does not speak for the average American.

Your point, that people don't care how expensive something is until they can no longer afford it, is spot-on.

As for the American lust for powerful cars and large trucks... that is murkier. Many factors play their part: the design of our cities, cheap gasoline, and our vast, wide-open land.

The same factors work in the opposite direction in Europe. Ancient cities, expensive gas, etc.

About the article... I agree with Atwood as long as I don't treat his advice as absolutist. And I agree with the author, as long as I don't take *his* advice as absolutist.

But I get that taking a strong position and sticking to it--as both authors have done--is rhetorically pleasing.

Anonymous said...

The original argument of throwing horsepower at a problem is too simplistic. One good programmer and design is often worth dozens of mediocre programmers, millions thrown at performance, etc. Many times you can't overcome bad design in a database, for example, with a larger processor or more RAM. A perfect example of this is watching databases that slowly outgrow available RAM. In development they scream because everything loads into memory. In production they suffer from massive performance problems because the VLDB can no longer pack its indexes into memory. Correcting a problem like this in production can cost countless dollars, all because a company wanted to shave a small amount off their dev budget.

If you have a multi-million dollar app, spending money on quality development goes a long way toward saving you and your customers down the line. If your devs have created a performance problem and you don't want to spend money to fix it, you likely aren't going to be in business for long, because that thinking tends to proliferate across the spectrum. There are exceptions to every rule, but I have found quality developers to be worth their weight in gold to a software company. Savings in testing, hardware costs, software releases, reduced support from fewer bugs, better customer satisfaction leading to good word of mouth, etc. are numbers that often don't show up directly on a spreadsheet.

Jason Miller said...

Actually, you're *both* wrong, because you're treating "the programmer" as if it's just another part of the machine. Programmers are strange beasts that vary in quality and efficiency of output; the good ones are 5x-28x more efficient and effective than the poor ones (see also Glass, Brooks, McConnell, et al). If you were to claim to have an average programmer doing the work, then the most economical solution for everybody is to have the average programmer not worry about performance and then have their output worked over by a superior programmer, because programmer pay is not scaled to the quality of their output, so the superior programmer is functionally cheaper than the inferior programmer (Glass). The downside is that you may end up with a model where your superior programmers feel like they're babysitting the inferior programmers, which is the exact opposite of (Brooks') surgeon model.

The possibility that performance sinks are being introduced by the wanton ignorance of non-programming architects issuing decrees as they dash from meeting to meeting is not covered in this analysis. Hopefully you've got an actually-programming developer with the quality and reputation necessary to actively rebuff bad ideas instead of implementing them, optimizing them, and then realizing that it was all a big mistake later.

Samuel A. Falvo II said...

Actually, folks, Lars is spot-on in his analysis. Too many people just assume that you can throw machines at the problem and be done with it, waiting every 18 months for a new performance boost.

Where I disagree with Lars is that he's assuming the problem is hardware, while I think the real problem is software.

How many servers are written in Java or C# these days? Tons. And how many have lackluster performance? Tons.

Why is it that C still outpaces Java and C#? Both are compiled to native binaries these days (Java with HotSpot, and the CLR was built for this from the get-go).

I love how most organizations have front-end websites powered by PHP, Ruby, Python, et al. These languages are pitiful in performance compared to a language that is every bit as dynamic but tons faster: Objective-C. Hell, even a half-way decent Forth environment outpaces Python and Ruby without all the fancy run-time optimization!

And let's remember, every CMOS component suffers a common flaw: you burn power on every transistor state change. The faster the clocks and the more transistors, the more states change per second, so it becomes increasingly important to optimize your software for minimum clocks spent per unit of workload if you wish to cash in on power savings. As far as I can tell, high-level languages are never concerned with this. Software managers are utterly oblivious, unless you're Google, at which point you invest in teams of programmers to at least research the idea. Even Moore cannot break the laws of physics.

This is why I insist that all engineers should spend at least a year or two hacking raw assembly language. Get familiar with the vehicle you're intending to drive - because, after all, a 1980 Mazda RX-7 with its pipsqueak 100HP engine CAN run circles around a 250+ HP behemoth Mustang on any track that isn't a straight line, if you understand the car, its limitations, and its opportunities. It's all in how you drive it. The same logic applies to computing hardware.

artsrc said...

Hardware is cheap, programmers are expensive, and some performance problems are not solved by more hardware.

The problem of structuring your software so that more hardware solves performance problems is called designing for scalability.

This is sometimes automatic (you don't have to do anything), sometimes manual (you have to do something to make it work), and sometimes not possible.

Reboltutorial said...

Performance work should be done at the system level so that one can concentrate on the business level.

Also, Pareto's law prevails in the world of economics, so I would say it's not worth spending 80% of your time on the last 20% of marginal performance.

Anonymous said...

I was only able to read about half of the comments... :)

I agree with you: programmers should always try to get the best possible performance out of the hardware!

I personally do not code my loops in assembly for performance, but I do program everything to perform at its best without "just adding hardware".

Google has the best hardware cluster of all time, and even Google makes its programmers put in the best possible effort to get the best possible performance out of their hardware.

Sorry for my poor English.

LDS said...

Just some more notes:
1) A 2x improvement may be worth the time spent, depending on where it can be achieved: improving a client-side, user-invoked function from 4s to 2s may be wasted time, while improving a multi-hour DW query from 8h to 4h, or doubling a server's transaction rate, may be useful anyway.
2) Hardware does not process anything alone - software and thereby licenses have to be paid for too. Adding a new Linux/Apache server may be cheap, but try to add a new 8-core Oracle RAC node with some options (say Partitioning and Advanced Security): its price could easily pay for a programmer for six months or more...
3) Hardware may be expensive too - maybe not the low-end web server, but high-end servers and their "accessories" (SAN FC enclosures, switches, 15K disks, cooling, etc.) can easily be. And hardware gets old faster than good code.

Thomas said...

Don't forget to mention that a "slow system" is often a system consisting of "crappy code" - which in itself means that it's more difficult to maintain...

Unknown said...

I think hardware can act as a temporary buffer for the problem while the programmer finds out what the bottleneck really is, because sometimes the cost of finding that performance bug is higher than the cost of a few hardware upgrades. I think Jeff means the same.

Anonymous said...

* The later you start caring about performance, the more expensive it gets.

Well, yes and no.

I'll stick with Don Knuth:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

Eric said...

That Don Knuth quote is probably one of the most misunderstood quotes out there.

Unknown said...

If you use software performance engineering (SPE) as a process for discovering (and understanding) both the software and system execution models, which can be used by many different teams (dev, test, ops) and across releases and application life cycles, then you will quickly realize that such debates are irrelevant.

Throwing hardware at a problem might be a valid decision, but only following an analysis of the problem, which should not be done in a reactive manner but throughout the development and maintenance of an application.

We do not test an application to break it under load (which unfortunately is all too common a perspective) but to learn about its behavior in different environment contexts. Understanding and a working model should be the deliverable, not some bell-shaped chart.

Performance engineering is much more enjoyable and rewarding when it focuses on capturing and modeling the dynamic behavior of the software and system rather than running a few checks against the source code files.

With the knowledge acquired through this process we might very well propose such a purchase decision, but it must be based on facts rather than hope.