The biggest benefit of Java and .NET is garbage collection, because it lowers the cost of training programmers. At least, that's what I heard originally. However, garbage collection also allows some constructs that are not easy to achieve otherwise.
The alternative? malloc() and friends, of course. You ask the operating system for a piece of memory, and you need to free it afterwards. This is usually handled by standardizing the way memory is administered, so that it is easy to verify that all memory is deallocated correctly. Many tools, like Delphi, preallocate memory so that multiple allocation requests from the program don't each trigger a separate request to the operating system.
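Just to illustrate the discipline, here is roughly what that looks like in C (a minimal sketch of my own; copy_string is just an example helper, not from any particular codebase):

```c
#include <stdlib.h>
#include <string.h>

/* Heap allocation: every malloc() must eventually be matched by a free(). */
char *copy_string(const char *src)
{
    char *dst = malloc(strlen(src) + 1);   /* request memory from the allocator */
    if (dst != NULL)
        strcpy(dst, src);
    return dst;                            /* the caller is responsible for free(dst) */
}
```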
The third model: stack-based memory allocation. That's what happens with local variables. Every time a function is entered, memory is allocated, and every time it returns, the memory is freed again. Simple and efficient, but the stack usually has a limited size.
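A small C sketch of the idea (the function is made up for illustration):

```c
#include <stdio.h>

/* Stack-based allocation: 'line' exists only while the function runs
   and is released automatically when the stack frame is popped. */
void print_reading(int value)
{
    char line[64];                          /* allocated on the stack at call time */
    snprintf(line, sizeof line, "reading=%d\n", value);
    fputs(line, stdout);
}                                           /* 'line' is gone here, no free() needed */
```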
However, there is yet another model that many programmers don't have much experience with: No dynamic memory. It's very simple: You have a program, an amount of RAM, and that's it. You cannot allocate more RAM, or less RAM. This is often preferred when there is very little RAM available, when high reliability is required, or when you need to be able to predict the behavior in all conditions. Having very little RAM is becoming a rarity, but reliability and predictability are still good reasons.
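In C this simply means that every buffer the program will ever use is declared statically, up front (the names and sizes below are invented, just to show the shape of it):

```c
/* No dynamic memory: all buffers are declared at file scope, so the
   total RAM footprint is known when the program is linked. */
static char  print_buffer[512];     /* for the printer module        */
static char  line_buffer[80];       /* one line of screen text       */
static short readings[128];         /* measurement history, fixed size */
```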
The last time I did something like this was using C on a Fujitsu CPU, and we had 32 kB of RAM. It was an industrial product, and my own PC had many megabytes of RAM, but in order to keep costs low, they didn't want to add extra RAM to the device. The system had a normal screen, printer and keyboard. I was only writing one of the modules, so in order to ensure a good API, I decided to make it object-oriented and message-based... I implemented a number of functions, where the first was a constructor, the last was a kind of destructor, and the rest were event functions, for key presses etc. However, there was no need for a pointer to the object's RAM, because it was preallocated at a fixed location.
The constructor basically initialized the memory area with correct start values, ensuring a valid state, and informed the user about the activation of this module. The destructor emptied buffers and handled other cleanup of non-RAM resources. In other words, very Windows 3.0-like.
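In C, such a module could look roughly like this. To be clear, this is a sketch from memory, not the original code: the editor_* names, the struct layout and the buffer size are all invented for illustration.

```c
#include <string.h>

/* All module state lives in one statically allocated struct, so no
   pointer needs to be passed around and nothing is ever malloc'ed. */
struct editor_state {
    char buffer[256];
    int  cursor;
    int  active;
};

static struct editor_state editor;      /* fixed location, reserved at link time */

void editor_init(void)                  /* the "constructor" */
{
    memset(&editor, 0, sizeof editor);  /* put the module into a valid state */
    editor.active = 1;
    /* here the screen would be updated to tell the user the module is active */
}

void editor_on_key(char key)            /* an event function */
{
    if (editor.active && editor.cursor < (int)sizeof editor.buffer - 1)
        editor.buffer[editor.cursor++] = key;
}

void editor_shutdown(void)              /* the "destructor" */
{
    /* flush buffers and release non-RAM resources such as the printer */
    editor.active = 0;
}
```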
Compared to modern applications, the hard limit on available memory meant that the user interface had to handle running out of space gracefully. This is actually less of a problem than many people believe, especially in embedded devices. The benefits: increased performance and increased reliability, because there is no memory management to go wrong. "Out of memory" suddenly becomes a message that you create yourself. Compile-time range checks in the programming language can increase reliability even further, without sacrificing speed.
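For example, with a fixed-capacity buffer the "full" condition is just an ordinary return value that the UI can turn into its own message (again a sketch; enqueue_job and display_status are invented names):

```c
/* With only fixed-size buffers, "out of memory" is not a surprise from
   the allocator but a condition you detect and report yourself. */
#define MAX_JOBS 8

static int job_queue[MAX_JOBS];
static int job_count = 0;

int enqueue_job(int job)
{
    if (job_count >= MAX_JOBS) {
        /* the queue is full: show your own message instead of crashing,
           e.g. display_status("Queue full - please wait"); */
        return -1;
    }
    job_queue[job_count++] = job;
    return 0;
}
```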
So, if you have plenty of RAM, why would you employ such techniques? Well, reliability and predictability may be external requirements that simply disallow dynamic behavior.
There are many programmers out there who work in environments that do not use dynamic memory allocation.
1 comment:
No malloc? Kinda like the mighty COBOL :)