Comments on "Compas Pascal: Jeff Atwood is wrong about performance"

Unknown (2009-07-08):
If you use software performance engineering (SPE) as a process for discovering (and understanding) both software and system execution models - models that can be used by many different teams (dev, test, ops) and across releases and application life cycles - then you will quickly realize that such debates are irrelevant.

Throwing hardware at a problem might be a valid decision, but only following ...

Eric (2009-07-07):
That Don Knuth quote is probably one of the most misunderstood quotes out there.

Anonymous (2009-07-07):
"The later you start caring about performance, the more expensive it gets."

Well, yes and no.

I'll stick with Don Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

Anonymous (2009-07-06):
I think hardware can act as a temporary buffer to the problem while the programmer finds out what the bottleneck really is.
Because sometimes the cost of finding that performance bug is higher than the cost of a few hardware upgrades. I think Jeff means the same.

Thomas (2009-07-06):
Don't forget to mention that a "slow system" is often a system consisting of "crappy code" - which per se means that it's more difficult to maintain...

LDS (2009-07-06):
Just some more notes:
1) A 2x improvement may be worth the time spent, depending on where it can be achieved: improving a client-side, user-invoked function so it takes 2s instead of 4s may be wasted time; improving a multi-hour DW query from 8h to 4h, or doubling a server's transaction rate, may be useful anyway.
2) Hardware does not process anything alone - SW, and thereby licenses, ...

Anonymous (2009-07-06):
I was only able to read about half of the comments... :)

I agree with you: programmers should always try to get the best possible performance out of the hardware!

I personally do not code my loops in assembly for performance, but I do program everything to perform at its best without "just adding hardware".

Google has the best hardware cluster of all time, ...

Reboltutorial (2009-07-06):
Performance should be handled at the system level so that one can concentrate on the business level.
Also, Pareto's law prevails in the world of economics, so I would say it's not worth spending 80% of the time chasing a marginal 20% of performance.

artsrc (2009-07-06):
Hardware is cheap, programmers are expensive, and some performance problems are not solved by more hardware.

The problem of structuring your software so that more hardware solves performance problems is called designing for scalability.

This is sometimes automatic (you don't have to do anything), sometimes manual (you have to do something to make it work), and sometimes ...

Samuel A. Falvo II (2009-07-06):
Actually, folks, Lars is spot-on in his analysis. Too many people just assume that you can throw machines at the problem and be done with it, waiting every 18 months for a new performance boost.

Where I disagree with Lars is that he's assuming hardware, whereas I'm thinking the real problem is software.

How many servers are written in Java or C# these days? Tons. And ...

Jason Miller (2009-07-05):
Actually, you're *both* wrong, because you're treating "the programmer" as if it were just another part of the machine.
Programmers are strange beasts that vary in quality and efficiency of output; the good ones are 5x to 28x more efficient and effective than the poor ones (see also Glass, Brooks, McConnell, et al.). If you were to claim to have an average programmer doing ...

Anonymous (2009-07-05):
The original argument of throwing horsepower at a problem is too simplistic. One good programmer and design is often worth dozens of mediocre programmers, millions thrown at performance, etc. Many times you can't overcome a bad design - in a database, for example - with a larger processor or more RAM. A perfect example of this is watching databases slowly outgrow available RAM. In development ...

Shane (2009-07-05):
Fabio et al., please disregard "Anonymous"'s rantings.

He most assuredly does not speak for the average American.

Your point, that people don't care how expensive something is until they can no longer afford it, is spot-on.

As for the American lust for powerful cars and large trucks... that is murkier. Many factors: the design of our cities, ...

Dave Ackerman (2009-07-05):
*rolls eyes* Nice blog title. You certainly got me to click through, so I guess it worked. However, it's pretty hard to say Jeff Atwood is "wrong". His blog entries are usually (purposely) vague enough that they can't reasonably be disputed.
Of course there are some problems where throwing more hardware at them is a bad idea. Of course, sometimes it is much easier ...

Warren (2009-07-05):
Any maxim is wrong as often as it is right. (Including this one.)

People always want silver-bullet maxims, silver-bullet techniques. This blog post points out that people call hardware cheap without thinking about TCO. True enough.

It's always a balance. Using a stupid algorithm and then throwing hardware at the stupid algorithm, instead of taking five minutes to ...

Anonymous (2009-07-05):
Wow. You are dead wrong on almost /every/ single point. Too many to write a proper rebuttal!

I'll pick... one thing.

A fast machine in 1995 probably uses one third of the power of a modern quad-core machine. You assume that 1 MHz has a fixed power cost that has not changed. All the new innovations under "Moore's law" are in fact about power consumption.

Anonymous (2009-07-05):
Jeff Atwood is always so full of shit that you should not be reading his blog at all.

Alan D Huffman (2009-07-05):
This is what I dislike about programmers...
They attack arguments on semantic detail instead of considering the spirit of the argument.

So let's do the numbers. Assume you have a performance issue (like compiling your code). You have two options:

A) Optimize the compiler using one programmer for a month (or months - who knows if you'll be successful, or how ...

andri (2009-07-05):
You make some interesting points, but that is a very trollish title; shame on you.

Anonymous (2009-07-05):
I was reminded of this:
http://xkcd.com/386/
But on a more serious note: I do agree optimizations are important. While Stack Overflow is a huge hit, I really doubt it actually requires 48 GB of RAM. I guess that even if they went for a complete in-memory database, they would still not fill up 32 GB.

Eric (2009-07-05):
Well, Moore's law hasn't stalled just yet, but its benefits surely have. Moore's law is about cramming more logic into the same space, but when you're limited by CPU speed, it's megahertz you're after, and current Moore's-law advances are about more cores per CPU.

Programming for more cores (or more servers) can often be far more expensive than just ...

unused (2009-07-04):
It is a balance, much as was mentioned in Lars' article. You should tackle the 1000x performance bottlenecks, but the 2x bottlenecks should be handled by better hardware.

Just like when you are looking at the runtime of an algorithm: if you can reduce it by an order of magnitude - from n! to just n - then you should do it. But if you are only looking at a move from 3n to 2n, then ...

LDS (2009-07-04):
Atwood is wrong because there are whole classes of speed issues that can't be resolved by throwing more hardware at them. Concurrency, scalability, etc. need a good design and proper coding.
You can buy one terabyte of RAM and the fastest 16-core processors for your database, but if calls get serialized because of poor design and coding, it will be slow and won't scale - and adding even ...

Anonymous (2009-07-04):
By bad social impact, do you mean it is bad that a computer/robot now does the work of many laborers in a sweat shop? A computerized digger can do the job of many men down a dangerous coal mine. Henry Ford was right. James Watt was right. There have been many worker uprisings against technology, and yet here we are - with, I have to say, a better standard of living across the world. The ...

Anonymous (2009-07-04):
NO, because I like it. I CAN because I CAN do it. I'd rather be driving slowly, listening to Mozart in my car, than be on the bus with all the poor saps on their way to the sweat shop they call work. I work from home, so my footprint is small. I cool off in the pool and keep warm in the hot tub, all solar powered. I can because I can afford it. Maybe those who can't afford it should pedal ...
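Editor's footnote: the scalability argument LDS makes earlier in the thread - that serialized calls defeat any amount of hardware - has a standard quantitative form, Amdahl's law. A minimal sketch (the 5% serial fraction is an illustrative assumption, not a figure from any comment):

```python
# Amdahl's law: if a fraction s of the work is inherently serialized,
# the speedup from n-way parallel hardware is bounded by 1 / (s + (1 - s) / n).

def amdahl_speedup(serial_fraction, n):
    """Upper bound on speedup with n-way parallel hardware."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With 5% of calls serialized, even unbounded hardware caps out near 20x:
print(round(amdahl_speedup(0.05, 16), 2))     # 16 cores -> 9.14
print(round(amdahl_speedup(0.05, 10**9), 2))  # effectively infinite cores -> 20.0
```

The asymptote 1/s is why a serialization bottleneck has to be fixed in the design; no hardware budget moves it.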