Sunday 25 October 2009

The EU switched to standard time this morning, and as always, this causes trouble. Clocks have to be adjusted, computers can get confused about scheduling, and IT systems failed during the night. My hard-disk TV recorder had no TV programme data after 02:59, but it did show a proper error message for each TV channel. Here is how you can create software that does not have these kinds of problems:
The first thing to realize is that every timestamp in your software must carry its offset to UTC. A plain TDateTime does not, so TDateTime is simply not good enough. The Windows API is built around timestamps without a UTC offset, and it was never designed to handle offset-aware timestamps in the past or the future, so we have to look at alternatives. Linux does a very good job of handling all this, so this blog post explains how to do it the standard Linux way.
First, transport and store all timestamps as "the number of seconds since X", where January 1st, 1970, 00:00 UTC (the Unix epoch) is a good choice. Also dismiss the use of leap seconds, so that every hour is exactly 3600 seconds; that makes things much easier. If you need better resolution than seconds, use milliseconds or floating point numbers, but integers are really handy in many contexts, so keep it in integers if you can.
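As a rough sketch of this representation (assuming Python 3, whose standard library exposes exactly this kind of integer timestamp), converting a UTC wall-clock time to seconds-since-epoch and doing arithmetic on it looks like this:

```python
# Minimal sketch: an integer count of seconds since 1970-01-01 00:00 UTC.
from datetime import datetime, timezone

ts = int(datetime(2009, 10, 25, 1, 0, tzinfo=timezone.utc).timestamp())
print(ts)          # 1256432400
print(ts + 3600)   # exactly one hour later, no calendar rules involved
```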
Next, realize that day/month/year/hour/minute/second is only necessary when interacting with humans. So do not convert to date and time until you need to show it to the user, and convert from date and time as soon as possible after input. As you may note, the conversion requires a ruleset. This is usually specified as a location, like "Europe/Berlin". That way, the conversion knows about historic and planned daylight saving time, and other peculiarities about time. For instance, the October Revolution in Russia happened in October by the Russian calendar, but it was already November in Berlin, because Germany had switched calendar systems while Russia had not; represented as integer timestamps, that is not a problem. Until modern times, different countries used different calendars, and even in the USA some states operate with several different rulesets, depending on where in the state you live.
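As an illustration, here is a minimal sketch assuming Python 3.9 or later, whose standard zoneinfo module ships the "Europe/Berlin" ruleset mentioned above; the helper names to_timestamp and to_display are made up for this example. The idea is to convert only at the edges: parse user input to an integer immediately, and format back to text only for display.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

BERLIN = ZoneInfo("Europe/Berlin")

def to_timestamp(text: str) -> int:
    """User input like '2009-10-24 12:00' -> seconds since the epoch."""
    local = datetime.strptime(text, "%Y-%m-%d %H:%M").replace(tzinfo=BERLIN)
    return int(local.timestamp())

def to_display(ts: int) -> str:
    """Seconds since the epoch -> text in the user's location."""
    return datetime.fromtimestamp(ts, BERLIN).strftime("%Y-%m-%d %H:%M")

print(to_timestamp("2009-10-24 12:00"))  # 1256378400 (12:00 CEST = 10:00 UTC)
print(to_display(1256378400))            # 2009-10-24 12:00
```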
If you want to show a timestamp to the user, you must also consider the special case where two different instants produce the same date/time text and are impossible to distinguish without extra information. For instance, Europe/Berlin 2009-10-25 02:30 occurred twice today.
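Here is a small sketch of that ambiguity, again assuming Python 3.9+ with zoneinfo: the fold attribute is the extra piece of information that tells the two occurrences of 02:30 apart.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

BERLIN = ZoneInfo("Europe/Berlin")

first  = datetime(2009, 10, 25, 2, 30, tzinfo=BERLIN, fold=0)  # still CEST (UTC+2)
second = datetime(2009, 10, 25, 2, 30, tzinfo=BERLIN, fold=1)  # already CET (UTC+1)

print(int(first.timestamp()))   # 1256430600
print(int(second.timestamp()))  # 1256434200, one hour later, same wall-clock text
```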
Let's take an example: You want to create a chart, where the first axis shows one day, from yesterday until today. You can specify this in multiple ways:
* Start timestamp, End timestamp
* Start timestamp, duration
* End timestamp, duration
You can choose to say "from 2009-10-24 12:00 to 2009-10-25 12:00", or you can say "from 2009-10-24 12:00 and 24 hours onwards". On a weekend like this one, the first choice actually gives 25 hours (!), so you need to make a choice here. If your chart always shows 24 hours, make sure that it is the duration you specify.
Let us assume that we want to create a 24-hour chart. Then you can simply find the values for the X-axis from the start timestamp by adding the desired number of seconds. If the start timestamp is the integer 1256378400 (2009-10-24 12:00 in Europe/Berlin), just add 3600 (1 hour) 24 times, and you have all the timestamps you need. In order to show the chart labels, you need to convert the integers to human-readable text. 1256428800 converts to "2009-10-25 02:00" or just "02:00", but the timestamp one hour later, 1256432400, is also "2009-10-25 02:00". Your entire axis becomes: 12 13 14 15 16 17 18 19 20 21 22 23 00 01 02 02 03 04 05 06 07 08 09 10 11 (two "02" labels, and the axis ends at 11 instead of 12).
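A small sketch of that axis, under the same Python 3.9+/zoneinfo assumptions as above, makes the duplicate label easy to see:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

BERLIN = ZoneInfo("Europe/Berlin")
start = 1256378400  # 2009-10-24 12:00 in Europe/Berlin

labels = [datetime.fromtimestamp(start + i * 3600, BERLIN).strftime("%H")
          for i in range(25)]
print(" ".join(labels))
# 12 13 14 ... 00 01 02 02 03 ... 11   ("02" appears twice, the axis ends at 11)
```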
The next problem is how to save this in a database. Integers are obviously easily stored in integer fields. So, is everything solved now? Definitely not. It is not easy to debug or read integer timestamps in a database using tools that cannot convert them to something human readable, and many chart tools do not support a time axis with the kind of label conversion described above. Linux was built for all this, but Windows wasn't. When you need to support daylight saving time properly, things get complex on Windows.
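As a sketch of the debugging point, assuming the standard-library sqlite3 module (the readings table is made up for the example): store the integer as-is, and let the database render a readable form only when a human needs to look at it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (ts INTEGER NOT NULL, value REAL)")
con.execute("INSERT INTO readings VALUES (?, ?)", (1256378400, 42.0))

# For debugging only: SQLite's datetime(ts, 'unixepoch') renders the stored
# integer as UTC text without changing what is actually stored.
for row in con.execute("SELECT ts, datetime(ts, 'unixepoch'), value FROM readings"):
    print(row)  # (1256378400, '2009-10-24 10:00:00', 42.0)
```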
Does Windows provide an easy, standardized alternative? Unfortunately not. That's why we're still struggling with a lot of IT systems that do not handle daylight saving time well. And even if you support it well, you still have the problem of users who may not understand how to handle non-existent and duplicate hours.
Monday 19 October 2009
Google NaCl roadmap
Just in case you haven't studied Google's "native code safer than JavaScript" project, here is a very short summary of Google Native Client (NaCl), based on this video from Google.
Current status is:
* Still under review, but basically works
* x86 32-bit machine code is supported
* Non-accelerated graphics
* Sandboxed
* Delivers full native code performance in your website code. A video decoder delivered as part of a webpage is almost the same speed as a native video decoder for the target operating system.
* Cross-platform runtime library that makes the same native code run on several operating systems
The future brings:
* At least as safe as JavaScript (i.e. run native code off untrusted websites)
* Built into browsers (Chrome and others)
* 64-bit x86 support, ARM CPU support
* Fast 3D graphics using O3D from native code, suitable for 3D games and CAD applications
* Real-time applications
As far as I can see, Kylix and FreePascal can compile for this already, and it seems that one of the next Delphi versions can compile for this, too.
Thursday 8 October 2009
Parallel programming requires multiple techniques, not just one
There seems to be a huge search going on for the holy grail of parallelism. Some hope functional programming is the solution; others look at other techniques. However, the scale of the problem is often ignored: parallelism is introduced on a number of different levels, each with its own solutions.
On the bit level, we can handle multiple bits at the same time. A CPU can handle 8, 16, 32 or 64 bits in one step, and the more bits that are handled at once, the more parallelism we have. However, you cannot just use 1024-bit arithmetic and get more speed; there is a limit.
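A rough way to see bit-level parallelism, sketched here in Python although the principle is the same in any language: a single XOR on a 64-bit word processes all 64 bit positions in one operation, where a naive loop handles them one at a time.

```python
a = 0x0F0F0F0F0F0F0F0F
b = 0xFFFF0000FFFF0000

word_xor = a ^ b                      # all 64 bit positions in one step

bitwise = 0
for i in range(64):                   # the same work, one bit per iteration
    bitwise |= (((a >> i) & 1) ^ ((b >> i) & 1)) << i

assert word_xor == bitwise
```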
On the instruction processing level, pipelines make it possible to start instructions at a faster rate than it takes to execute one full instruction. The CPU simply divides each instruction into multiple stages and executes stages of different instructions in parallel, so that fewer parts of the CPU sit idle. This obviously has a limit too, and I guess most readers know the trade-off between pipeline length and speed from gaming hardware.
On the CPU level, we can have multiple cores, each with its own caches etc. This makes it possible to execute two or more threads at the same time, although they usually access the same main RAM, and that puts a limit on parallelism: don't expect much additional performance beyond 8-16 cores on the same main RAM.
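As a minimal sketch of this level, using only the Python standard library (and processes rather than threads, since Python's GIL keeps CPU-bound threads from running in parallel), the hypothetical burn function below is spread across one worker per core; beyond the physical core count and the bandwidth of the shared RAM, adding workers stops helping.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def burn(n: int) -> int:
    # A made-up CPU-bound task to keep one core busy.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * (os.cpu_count() or 4)
    with ProcessPoolExecutor() as pool:   # defaults to one worker per core
        results = list(pool.map(burn, jobs))
    print(len(results), "chunks computed in parallel")
```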
On the machine level, we can build NUMA architectures. Multiple CPUs do not share RAM, but can access each other's RAM with reduced performance. If we want massive parallelism, we have to give up the idea that all CPUs access the same RAM at the same speed. There is a performance hit whenever the CPUs need to exchange lots of data, so this cannot improve the speed of everything.
On the network level, we can connect CPUs that cannot look into each other's RAM. This can be a worldwide network, but it introduces even more performance hits when exchanging data.
The main focus right now seems to be on the "2-16 cores on one main RAM" level. This is not fully solved by functional programming or similar techniques. The NUMA level is largely out of focus because common operating systems do not allow a multithreaded application to be distributed across several CPUs without a common main RAM.
So, when searching for a good solution to the multi-core level, always remember the other levels. It's the combination of all the levels that decides the final performance.
Wednesday 7 October 2009
The gaming industry has the next-generation GUIs
First, for reference, this blog post is about non-game applications.
Usability and GUI design have been hot topics since the dawn of computers, and the trend has always been towards increased usability and productivity. Since the world of IT is much more complex than what can be described with graphics from outside IT, GUI design has introduced a lot of mechanisms and visualizations that you need to learn before you can use a computer. A simple "Close" button on a form is actually not very intuitive, but it is one of the first things we learn, and it is productive, so we could not imagine IT without it. I still remember once having to teach an elderly man how to use a computer. He had no clue about using a mouse, and I had to explain why I would use the name "button" for something on a monitor. "Button" was not as easy to explain as I originally thought, and try explaining to such a user how to tab through a radio button group! These days I rarely run into people who do not use computers at all, but I still often run into people who are blocked from doing what they need to do on a computer because of poor usability. Usability on computers will never be perfect, but we can do our best to improve it.
If you add 3D graphics, you can suddenly do a lot of new things:
* Moving parts of the GUI can show the user where a dialog comes from or disappears to. Think about how Windows and other systems minimize applications. You can also use it to browse through a book with multiple pages.
* Zooming into a picture, a chart or similar can be implemented by actually zooming in on the big GUI. This may not always make sense from a productivity point of view, but it certainly makes sense from a usability point of view.
* Semi-transparent layers on top of a normal GUI can be used to clearly identify a group of controls that belong together but are not located in the same place. When that semi-transparent layer moves just slightly, human vision immediately recognizes all the controls on that layer as belonging together. For example, when zooming in on a chart, the axis controls can be positioned on a different semi-transparent layer, so that the user knows that we're only zooming in on the chart; the axes are not zoomed, they just change their scale.
The current problem is that most of us need to create software that works on computers without 3D acceleration, or via remote desktop solutions like VNC that do not perform well with 3D graphics. Also, 3D graphics needs to be rendered client-side in order to perform well, because server-side rendering currently has bandwidth and latency problems.
However, once 3D graphics can be assumed to be available everywhere, the next problem is: how do you design GUI components for this? If you want a chart component that lets the user zoom into the chart by "moving towards the GUI", then your chart component needs to affect things outside the box it was placed in. A good user experience in 3D requires a new infrastructure for writing GUI controls.
A good place to get inspired is to look at 3D computer games. Imagine that you're writing a database application with charts, grids, events, popups, comboboxes etc., and then start looking for useful GUI components in 3D games. Computer games are under much higher pressure for usability, and at the same time under pressure to invent new mechanisms and new paradigms. First-person shooters are probably less useful to look at, but many other games are full of interesting solutions to simple usability problems.