Friday 20 February 2009

Good blog post about performance - also applies to Delphi

Delphi offers great flexibility in how databases are accessed and handled, and Delphi applications therefore often perform really, really well. However, the urge to improve is never-ending, and here is a great article for those who really want to know about performance:

http://highscalability.com/numbers-everyone-should-know

The article focuses on web apps, but actually applies to almost any kind of application.

Saturday 7 February 2009

Windows Reversi - playing online against an algorithm?

Can somebody please explain to me why the opponents in the game of Reversi in Windows XP are almost always Greek beginners, German intermediates or English experts? To me, they seem to be computer algorithms. If I am right, I see two possible explanations: 1) There are not enough Windows users to support online games like this. 2) Humans don't behave according to Microsoft quality standards. I tried to Google this, but didn't find a good explanation. If somebody knows the Greek guy, please tell him that there are books out there that can help him improve.

Friday 6 February 2009

Mixing waterfall and agile

Here is a typical story of waterfall vs. agile: http://dotnet.dzone.com/news/we-dont-have-requirements-yet. If you ever run into such a situation, I can recommend reading Agile Estimating and Planning. It will solve your problem.

Thursday 5 February 2009

Parallelizing can make things slower

I wrote the text below on another blog about parallelism, four months ago. It is a hypothetical example, but it is based on a true story and makes a very good point. I guess we all know the feeling of clicking the start button in Windows after booting, and nothing happens for several seconds...

"You want data to be sorted. You may start out with a standard sort algorithm. Suddenly you find that it is slow. You find out that you can increase the speed 5 times by replacing quicksort with mergesort. Then you find out that you can increase the speed 10 times more by using an implementation that knows how to use the CPU cache well. Then you realize that mergesort can use more CPUs, so you use more CPUs. Suddenly, your application is 100 times slower, because the writes to memory from one CPU are flushing the cache of the other CPU. This didn't happen on your development system, because that dual-CPU system was built in a different way... so you realize that you need to structure the parallel programming differently.

Finally, you find that you need to keep the lists that you merge in your mergesort algorithm better separated. After a while, your algorithm works well both on your development system and on the target system.

Finally satisfied, you put the parallel mergesort algorithm into all the places where it makes sense. After a short while, the test department comes back and reports that the application runs slower when your multithreaded sort algorithm is used. The reason is that it now uses multiple CPU caches, leaving less cache memory for other parts of the application, and slowing those parts down more than the speed benefit gained by using multiple CPUs for sorting.

You are finally asked to remove the parallel processing from your mergesort in order to increase the speed of the system."
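The structure of the story's algorithm can be sketched in a few lines. This is an illustrative Python sketch, not Delphi code and not the code from the actual incident: a plain mergesort, plus a two-thread variant that sorts each half in its own thread and merges the results. Fittingly, in CPython the threaded version is usually no faster at all (the global interpreter lock serializes the work), which only underlines the point that adding threads does not automatically add speed.

```python
import threading

def merge(left, right):
    # Standard merge of two already-sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def mergesort(xs):
    # Sequential top-down mergesort.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))

def parallel_mergesort(xs):
    # Sort each half in its own thread, then merge on the main thread.
    # Same result as mergesort(); whether it is faster depends entirely
    # on the hardware and runtime, as the story above illustrates.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    results = [None, None]

    def work(idx, part):
        results[idx] = mergesort(part)

    threads = [threading.Thread(target=work, args=(0, xs[:mid])),
               threading.Thread(target=work, args=(1, xs[mid:]))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return merge(results[0], results[1])
```

Both functions produce identical output; only the scheduling differs, which is exactly why the test department could swap one for the other and measure the difference.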

Garbage collection or garbage piling up?

If you haven't tried .NET or Java yet, you may want to have a look at this example of how garbage collection doesn't free your mind from thinking about memory allocation and object destruction.
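The point is easy to demonstrate in any garbage-collected language. Here is a minimal Python sketch (my own, not the example linked above): a long-lived registry keeps an object reachable, so the collector cannot touch it, and the memory only comes back once you explicitly remove the reference, which is the "object destruction" step you still have to think about.

```python
import gc
import weakref

class Widget:
    pass

# A long-lived container, e.g. an event-listener registry or a cache.
registry = []

def make_widget():
    w = Widget()
    registry.append(w)        # registering keeps the object reachable
    return weakref.ref(w)     # weak reference lets us observe its lifetime

ref = make_widget()
gc.collect()
# The Widget is still alive: the registry holds a strong reference,
# so the garbage collector is not allowed to reclaim it.
assert ref() is not None

registry.clear()              # the explicit "destruction" step
gc.collect()
# Only now is the object gone.
assert ref() is None
```

Forget the `registry.clear()` call and the objects pile up for the lifetime of the registry, garbage collector or not.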