Sunday, 8 March 2015

35-year-old Arduino-like setup with Pascal

I do not usually care much about products or constructions from the past that cannot be reproduced or used, but in this case, my brother's old construction from the late 1970s made me look again, because it looks so much like what people do with Arduino kits today.

This is a computer programmed using Compas Pascal or Turbo Pascal:


  1. CPU: Zilog Z80 (marked Z80CPU01), probably running at 1-2 MHz. The Z80 was an extended version of the Intel 8080, the CPU whose design the 8086/8088 line of Intel CPUs was later built on.
  2. One EPROM for the runtime
  3. One EPROM for the actual program
  4. A 5x4 keyboard on the back, suitable for use as a 4x4 hex keypad with some extra keys.
  5. Some RAM chips etc.
  6. I/O pins for easy access.
Unlike today, where we can easily program Arduino boards using flash memory, that kind of technology was either unavailable or extremely expensive back then. Therefore, EPROMs were used: they could easily be erased with an ultraviolet lamp and re-programmed using this board, which used the Centronics port for I/O to the programmer's computer, since USB had not been invented yet.

The main computer that was used for programming could be a Zilog Z80 Nascom computer, a CP/M-80 computer, or any homemade computer using an 8080 or Z80 CPU.

Compas Pascal and Turbo Pascal were obvious choices for this, as the compiler was fast and generated great code, which boosted programmer productivity a lot.

Wednesday, 7 January 2015

Test-driven programming guidelines

We all know about test-driven development, but how many actually do this? Well, medical device software is made this way. Maybe the tests are not automated, but developers are required to establish verifiable requirements before starting to write source code. Since you only know whether a requirement is verifiable if you know its verification method, you must know a verification method, i.e. a test, before you can start programming.

Let's go one level up: what about programming guidelines? For instance, if your company has introduced the OOD methods mentioned on this page by Dr Bob, how do you verify that they are actually being used?

Let's take one example: derived classes must be substitutable for their base classes. How do we verify that this works? Well, first we need to find all classes that have been derived from another class, and then we need to verify that each of these classes works in a test written for its base class. We can usually find all class derivations by scanning the source code, so scanning the source code would be part of the verification method. Next, we get a list of class names and base classes, and we must then verify that these classes are substitutable. This can be fairly easy for a TStringList descendant, but how do you verify that TXyzClass works instead of TAnotherClass? It quickly becomes a lot of work.

So, seriously: what if one part of the code violates these programming guidelines, will you care? If you don't care, but you still want to verify compliance on each software release, then you need to mark that class as approved non-compliant, so that you can skip it next time and it does not show up on your error list at the next product release.

Most people will probably end up saying: we do not verify that we are in compliance with this principle, as that would be too much work; it is only meant to be a guideline or principle... And if they are still ambitious, they might make random checks to measure compliance, and/or have senior developers review code from younger developers before the final commit. But that means that you cannot tell outsiders whether you conform to the principles or not. Instead, you can only be sure that you do not fully conform to them; you can only say that you try to conform as much as possible.

To outsiders, this is vague information about what is going on in a software development department, which usually reduces the quality of communication with other parts of the company. So you may want to ask yourself: what rules does your source code actually follow, and can you prove it?

Tuesday, 26 February 2013

Turning the V-model upside down: The A-model

The V-model is well known and appears in many software development processes:

However, this drawing, which is quite representative of V-model drawings, has several inconsistencies. One of them is the arrow that goes back in time. Another is that it puts Operation and Maintenance on the Project Test and Integration side. I have seen many V-model drawings, and I never really felt comfortable with them. The reason is simple: the V-model drawings all assume a time axis, i.e. at the end of the project you check whether the software actually works... there is no iterative development, no sanity check in the middle of the software development, etc.

If one assumes that acceptance criteria and test/verification criteria are created before the software is developed, as is known from test-driven development and other methodologies, the time axis should run from top to bottom instead of from left to right. Once you do that, the next question is: why does the model occupy the most space at the top, when most of the work is at the bottom? Let's turn it all around and make the A-model:

Now the boxes are sized appropriately, and the confusing time axis has been removed. Just like in the V-model, it is possible to have different layers in the pyramid and to use different terminologies and methods.

So, where did the time axis go? That could look something like this:

  • Project is initiated, with user needs and validation criteria.
  • Project is planned, with design input, verification methods and acceptance criteria.
  • Project is designed, which includes the creation of automated integration tests and unit tests. These may be done in parallel and iteratively, and several design reviews, validations, tests etc. may be applied as part of this iterative process. The entire test side of the A-model is applied, and the entire SPECs side of the A-model may be changed as part of the iterative model.
  • Project has its final test, which may include formal test plans and automated tests - but in some less critical cases, in order to shorten the time to market, it may not include all the manual tests that were done in the design phase.
  • Product is released.
Again, many different project models can be chosen, but the big job of writing the unit tests and integration tests is done in parallel with designing and programming. Notably, integration between software from different vendors is often integration-tested first, before the details are programmed, in order to establish early on that there is a connection between the two systems; so integration testing starts before unit testing.

I believe that this A-model represents most software development projects better than the V-model.