Monday 31 August 2009

Last call for the conference in Copenhagen with Jim McKeeth

The conference introduces experienced Delphi developers to Delphi Prism and a number of .NET technologies. The conference agenda and registration are available online (or write to jsj@sejer.dk). The price is 4,700 kr. (about €631 or $900), including a night's stay at a very nice hotel.

Wednesday 26 August 2009

Why use Firebird+IBX

Jeff Overcash originally made it clear that IBX doesn't officially support Firebird, and he has no intention of implementing it. Many point to IBObjects, and now dbExpress, for Firebird support, so why use IBX? Here's a reason:

IBX supports InterBase 6.0, which is basically equivalent to Firebird 1.0. Firebird 2.1 supports the Firebird 1.0 API by providing a compatible gds32.dll, so Firebird 2.1 also works with IBX. IBX has been part of Delphi for a long time, which means that when a new Delphi version appears, IBX will most likely be included, too. This gives IBX a big advantage over IBObjects and the others; I assume that IBX will still be around in 5-10 years. And should something break with Firebird, the IBX source code can be modified to fit.
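As a minimal sketch of what this means in practice (the database path and credentials are hypothetical, and it assumes Firebird's compatible gds32.dll can be found by the client), connecting to Firebird with IBX looks exactly like connecting to InterBase:

  uses IBDatabase;

  procedure ConnectToFirebird;
  var
    DB: TIBDatabase;
    Tx: TIBTransaction;
  begin
    DB := TIBDatabase.Create(nil);
    Tx := TIBTransaction.Create(nil);
    try
      DB.DatabaseName := 'localhost:C:\db\employee.fdb';  // hypothetical Firebird database
      DB.Params.Values['user_name'] := 'SYSDBA';
      DB.Params.Values['password'] := 'masterkey';
      DB.LoginPrompt := False;
      DB.DefaultTransaction := Tx;
      Tx.DefaultDatabase := DB;
      DB.Connected := True;  // loads gds32.dll - Firebird's compatible copy answers
      Tx.StartTransaction;
      Tx.Commit;
    finally
      Tx.Free;
      DB.Free;
    end;
  end;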

Does this make IBX the best choice for using Firebird with Delphi? No. But as long as it works well, there is no need to switch, and I guess that a lot of people start using IBX simply because it is included with Delphi.

Thursday 13 August 2009

No dynamic memory in programming

The biggest benefit of Java and .NET is garbage collection, because it lowers the cost of training programmers. That's what I heard originally. However, garbage collection also enables some constructs that are not easy to implement otherwise.

The alternative? malloc() and similar, of course. You ask the operating system for a piece of memory, and you need to free it afterwards. This is usually handled by standardizing the way that memory is administered, so that it is very easy to verify that all memory is deallocated correctly. Many tools, like Delphi, preallocate memory so that multiple allocation requests from the application don't trigger multiple allocation requests to the operating system.
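In Delphi, the standardized pattern is typically GetMem/FreeMem wrapped in try..finally, which makes the matching deallocation trivially verifiable (a minimal sketch):

  procedure UseBuffer;
  var
    P: PByte;
  begin
    GetMem(P, 4096);           // request 4 KB from the memory manager
    try
      FillChar(P^, 4096, 0);   // use the memory
    finally
      FreeMem(P);              // the finally block guarantees the matching free
    end;
  end;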

The third model: stack-based memory allocation. That's what is done with local variables. Every time a function is entered, memory for its locals is allocated, and every time it returns, that memory is deallocated. Simple and efficient, but the stack often has a limited size.
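For instance, a local array in Delphi lives on the stack and vanishes automatically when the function returns (a trivial sketch):

  procedure AverageOfSamples;
  var
    Samples: array[0..255] of Double;  // 2 KB, allocated on the stack on entry
    I: Integer;
    Sum: Double;
  begin
    Sum := 0;
    for I := Low(Samples) to High(Samples) do
    begin
      Samples[I] := I;                 // fill with some data
      Sum := Sum + Samples[I];
    end;
    Writeln(Sum / Length(Samples));
  end;  // returning releases the 2 KB - no bookkeeping needed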

However, there is yet another model that many programmers don't have much experience with: no dynamic memory at all. It's very simple: you have a program, an amount of RAM, and that's it. You cannot allocate more RAM or less. This is often preferred when there is very little RAM available, when high reliability is required, or when you need to be able to predict the behavior under all conditions. Having very little RAM is becoming a rarity, but reliability and predictability are still good reasons.
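A minimal Delphi sketch of the idea: all storage is declared up front, and "allocation" is reduced to taking a slot from a fixed-size pool (the pool size of 16 is an arbitrary example):

  const
    MaxConnections = 16;  // fixed at compile time

  type
    TConnection = record
      InUse: Boolean;
      Id: Integer;
    end;

  var
    Connections: array[1..MaxConnections] of TConnection;  // all the RAM this module will ever use

  function AcquireConnection: Integer;  // returns a slot index, or 0 if the pool is full
  var
    I: Integer;
  begin
    Result := 0;
    for I := 1 to MaxConnections do
      if not Connections[I].InUse then
      begin
        Connections[I].InUse := True;
        Result := I;
        Exit;
      end;
    // "out of memory" is now an ordinary, predictable return value
  end;

  procedure ReleaseConnection(Index: Integer);
  begin
    Connections[Index].InUse := False;
  end;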

The last time I did something like this was in C on a Fujitsu CPU, and we had 32 KB of RAM. It was an industrial product, and my own PC had many MB of RAM, but in order to keep costs low, they didn't want to add extra RAM. The system had a normal screen, printer and keyboard. I was only writing one of the modules, so in order to ensure a good API, I decided to make it object oriented and message based: I implemented a number of functions, where the first was a constructor, the last was a kind of destructor, and the rest were event functions, like key presses etc. However, there was no need for a pointer to the RAM, because it was preallocated and at a fixed location.

The constructor basically initialized the memory area with correct start values, ensuring a valid state, and informed the user about the activation of this module. The destructor emptied buffers and handled other cleanup of non-RAM resources. In other words, very Windows 3.0 like.
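A hypothetical Delphi rendering of that design (the original was C, and all names here are illustrative): the module's state is one global record at a fixed location, and the "object" is just a set of functions operating on it.

  type
    TModuleState = record
      Active: Boolean;
      KeyBuffer: array[0..31] of Char;
      KeyCount: Integer;
    end;

  var
    ModuleState: TModuleState;  // preallocated at a fixed location - no pointer needed

  procedure Module_Create;      // the "constructor": establish a valid state
  begin
    ModuleState.Active := True;
    ModuleState.KeyCount := 0;
    // ...and inform the user that the module is active
  end;

  procedure Module_KeyPress(Key: Char);  // an event function
  begin
    if ModuleState.KeyCount < Length(ModuleState.KeyBuffer) then
    begin
      ModuleState.KeyBuffer[ModuleState.KeyCount] := Key;
      Inc(ModuleState.KeyCount);
    end;
    // a full buffer is handled explicitly - nothing can fail unpredictably
  end;

  procedure Module_Destroy;     // the "destructor": flush buffers, clean up non-RAM resources
  begin
    ModuleState.KeyCount := 0;  // in the real module: flush to screen/printer first
    ModuleState.Active := False;
  end;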

Compared to modern applications, the hard limit on available memory meant that the user interface had to handle full buffers and full lists gracefully. This is actually less of a problem than many people believe, especially in embedded devices. The benefits: increased performance and increased reliability, because there is no memory management to go wrong. "Out of memory" suddenly becomes a message that you create yourself. Compile-time range checks in a programming language can increase reliability even more, without sacrificing speed.
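Delphi's subrange types give a taste of this: an out-of-range constant is rejected by the compiler, and indexing with a variable of the subrange type needs no run-time bounds check, because the type already guarantees the range (a small sketch):

  program RangeDemo;
  type
    TSlotIndex = 1..16;          // the type documents and enforces the valid range
  var
    Slots: array[TSlotIndex] of Integer;
    Index: TSlotIndex;
  begin
    Index := 5;                  // fine
    // Index := 17;              // rejected at compile time: violates subrange bounds
    Slots[Index] := 42;          // no bounds check needed - the type guarantees the range
  end.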

So, if you have plenty of RAM, why would you employ such techniques? Well, reliability and predictability may be external requirements that simply disallow dynamic behavior.

There are many programmers out there who work in environments that do not use dynamic memory allocation.