Sunday 19 December 2010

The future of C#, the bloat explosion, and what happened instead

Looking back at this video with Anders Hejlsberg about the future of C#, from March 2009, or this blog post about the upcoming bloat explosion, feels quite awkward given the current explosion of iOS and especially Android. There are several awkward moments in the video: the focus on objects, the "huge amounts" of memory, and the statement that multithreading is the exception.

As readers of this blog know, Google has a famous quote in their designing for performance article about programming Java for Android: avoid creating objects. Also, on mobile platforms memory is not plentiful, quite the contrary, and multithreading is a vital necessity for achieving reasonable responsiveness. Energy efficiency has become very important, and Apple even introduced a ban on runtimes. Also, Python and PHP are on the rise; it actually makes sense to raise venture capital for PHP-based projects these days.

It seems that Microsoft and Anders simply planned for the wrong future - assuming that the trend would continue, and that complexity, bloat and computer sizes would keep growing. Instead, the world has moved toward low energy, low complexity and small computers.

I wonder what such a video would look like if Anders were interviewed today.

Friday 10 December 2010

Android sells better than Windows to consumers

If you combine a few news articles, things get interesting:

"Gartner forecasts that worldwide PC shipments for 2011 will reach 409m units" = 1.1 million per day. It is fair to expect less than 40% of these sales to be for consumers. This is an 18% growth, meaning that in 2010, we can expect PC sales to reach 138 million for consumers, or 379.000 per day. Worldwide.

"Google says 300,000 Android devices activated daily"

My guess is that in many countries, Android devices already outsell Windows PCs among consumers, and the number of countries where this happens is increasing. Overall, we can expect smartphones as a category to outsell PCs as a category in 2011:

"Smartphone Sales To Beat PC Sales By 2011"

I just bought an HTC Tattoo for the sole purpose of using it as a server (small batch jobs, surveillance, remote control via SMS) in my home, and I wouldn't be surprised if we see more home servers running Android. It is very easy to set up and configure.

Friday 26 November 2010

GTK+ app interfaces using HTML5

This impressive demo of a web application deserves more publicity. I am quite sure that this is not the last GUI toolkit that interfaces with generic web browsers.

Sunday 21 November 2010

R.I.P Microsoft Windows?

Microsoft is usually very good at presenting new products years ahead of the actual launch - but there continues to be a remarkable absence of a single strategy for supporting Windows applications or Windows as a well-integrated desktop.

Android provides many improvements that Windows does not offer as part of the standard platform:

* Easy app discovery and installation (Android Market)
* Easy and complete app removal
* Easy and tight integration of phone book, GIS, messaging, online accounts etc.
* Automatic brightness control of the display
* Removes the need for the user to terminate apps
* Location
* Sandboxing
* Removes the need to think about file structures, which most users don't seem to grasp anyway
* Instant on
* OTA OS upgrades

In the future, Microsoft may provide many of these features, too, but it seems that it will take many years, because:

* Windows 8 seems to contain some technologies intended to address the deficiencies mentioned above.
* Windows 8 is planned for 2012.
* Based on Microsoft's historical track record, it may take even longer before Windows 8 is out.
* It often takes customers years to upgrade their Windows clients - some large organizations are still installing Windows XP. We might therefore not see a general deployment of Windows 8 before 2014, or much later.
* Microsoft does little PR about how to write client-side apps for Windows or how to future-proof them, and certainly does not make it easy. Delphi is still the king of doing that.
* Microsoft does not seem to be preparing a platform for Windows apps that can be installed on Vista, Windows 7 or even Windows XP.

Don't be fooled by the press focus on touch devices; according to Paul Thurrott, Apple is still gaining market share on laptops, and Google is preparing a mouse-based OS, too (Chrome OS). Chrome OS seems to contain many of the features in the first list above, including the concept of installed apps.

Does this mean that Windows is dead? No, not in business, and not on the server. Some hardcore gamers, and those who want "the same at home as they have at work", will keep it, of course. But frankly, once Google Chrome OS is out, there isn't much advantage in buying a Windows PC if you are an average consumer with no special preferences for specific software packages. Some may even just use a touch tablet with a keyboard.

So, what will the business desktop look like 5 years from now? My best guess is: fragmented.

Sunday 3 October 2010

TIOBE index decomposed

This post presents a different way to look at the TIOBE index. Of the top 20 languages, I exclude the non-general-purpose ones, like SQL and MATLAB. The next step is to group the remaining languages by performance characteristics:

Compiled languages that produce very fast apps:
* Ada
* C
* C++
* Delphi, Pascal
* Objective-C

Garbage-producing languages:
* C#
* Google Go
* Java

Languages that produce slow apps:
* JavaScript
* Lisp
* Perl
* Python
* Ruby

When I look at products that I consider successful, fast and slick, I see:
* Apple uses Objective-C
* Linux kernel and tools (incl. Android, Google servers etc.) use C/C++
* Webkit, Google Chrome use C/C++
* Microsoft uses C/C++
* Several successful vendors with great software in my industry use Delphi

I see problems with:
* Microsoft .net and Java apps tend to be bloated and slow
* Java apps on Android are not as slick as Objective-C apps on iPhone
* C# programming requires that you are willing to bet your investment on Microsoft, which has lost major market share in many form factors
* Large apps or frameworks built on PHP or similar simply fail to deliver

Looking into the future, where we will need one language to write one app that runs on multiple CPUs, each with its own RAM (NUMA), the compiled languages will do just fine, but the garbage languages need to be replaced with something that handles memory allocation differently.

Saturday 10 July 2010

What Nokia must do to stay relevant in mobile

Nokia is losing market share fast, which is being discussed in many places. Nokia's CEO Anssi Vanjoki also expressed his view on this topic.

The number of mistakes that Nokia is currently making is huge; let's look at a few:

* Focus on OS technology instead of customer-centric parameters. As Anssi puts it: "The current phase of MeeGo development is looking awesome."

* Believing that the window of opportunity for game-changing devices is still open. Anssi still believes that Nokia can outsmart Apple and Google, creating killer phones and market-changing mobile computers.

* Totally dismissing the pad/tablet form factor. As Anssi puts it: "the computers of the future will not be tied to a desk or even a lap – they will fit in your pocket"

* Placing bets on multiple platforms (Symbian doesn't seem entirely dead yet, from a product perspective)

* Demonstrating bad products. At the Open Source Days in Copenhagen, Nokia demonstrated their Maemo-based phone, which enables you to have a long conversation with the Nokia representative while it is starting the maps application.

Nokia still ships more phones than anyone else, and in some countries they are absolutely huge, delivering a lot of value for a low price. However, almost all early adopters of smartphones in Europe and the USA seem to have moved to Android and iPhone, and statistics show that Nokia smartphone users don't use the internet part of their smartphones nearly as much as Android and iPhone users do. In other words, if the phone market were segmented by amount of internet use, so that smartphones are the phones on which users make heavy use of the internet, Nokia would be almost a non-player in the smartphone market.

However, all is not lost. Nokia can produce and deliver real smartphones to its existing and loyal customer base, and if a significant share of those customers switch to Nokia smartphones, Nokia's market share will jump to very high levels.

However, Nokia needs to start doing a few things right:

* Realize that if a large organization like Nokia hasn't been able to learn quickly enough during the last 4 years, it will also be slow during the next 4 years. Nokia is big. Instead, buy other companies and let their technology become the new big thing, instead of doing everything in-house. Normally the largest player in a market wouldn't do this, but if it doesn't, it won't be the largest player in the future.

* Realize that women will want a small stylish phone and a 4-8" screen in their purse.

* Understand that the consumer doesn't care about operating systems. To the consumer, Android is not about Linux, it's about the Android Market. By promoting Maemo and MeeGo with an empty app store, Nokia isn't building momentum, it's actually destroying the value of the Maemo and MeeGo brands. Every time a consumer asks a friend for advice about Maemo and MeeGo, the answer is: "Don't touch it, that's not where the apps are". Such advice is remembered for many years, and ads don't change that.

* Realize that functionality is the key to buying a phone, and that it takes years to build a good app market. A Facebook app is not enough; you also need the Tour de France apps, the local bus company's app, the university campus apps etc. Developer mindshare is important, and that is probably what Nokia has been losing at the fastest pace.

* Realize that Apple's real innovation was the touch user interface for mobile phones - even the original Palm Pilot had apps and a similar application chooser. This innovation has started a chain of innovations that just continues, in the fiercest competitive environment the world has seen for a long time, with whole value chains competing to innovate on all levels. Nokia has no chance of significantly outperforming this competitive environment, and will have a very hard time creating a killer phone or a game-changing product, if it is possible at all.

* Realize that online services are the key to apps. All mobile devices can do games, but the real value comes when an online service is available as an app, providing device sensor integration, a good touch UI, offline/bad-network capabilities, app/share integration etc. Nokia tries to deliver its own maps product, its own e-mail system etc., but in order to become the biggest map provider, it also needs to compete on the computer desktop for the user's attention - being only a map provider on the phone is a losing strategy. But Google is a very large enemy here.

* Realize that being a huge company doing everything from hardware to software made sense in the mobile industry of the 1990s, but in the 2010s, the business model doesn't favor that. It is similar to the steel industry: steel makers needed to be huge once, when capital costs dominated, but when the cost structure shifted toward variable costs, the biggest steel companies got serious competition from very small players that they could not compete against. Nokia should look at the steel industry and learn from it. Quickly.

Friday 2 July 2010

Apple iPhone 4 signal strength indicator highlights a common problem

It is old knowledge that if a progress bar speeds up toward the end, the user is happier. In other words, if the progress bar is modified so that it doesn't show the exact progress percentage, you get better customer satisfaction.
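As a sketch of how such a remapping might work (the exponent 1.8 is an arbitrary assumption for illustration, not a measured value):

```python
def displayed_progress(actual: float, power: float = 1.8) -> float:
    """Remap true progress (0..1) so the bar accelerates near the end.

    With power > 1 the displayed bar lags behind reality early and
    catches up late, which tends to feel faster to the user.
    """
    return actual ** power

# Early on the bar shows less than reality; at 100% it always agrees.
print(displayed_progress(0.5))  # 0.287... (shows ~29% at the halfway point)
print(displayed_progress(1.0))  # 1.0
```

The same one-line remapping idea, with different curves, is what the battery and signal indicators discussed below amount to.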

The same principle applies to other indicators, like battery indicators and mobile phone signal strength. Many phones don't seem to lose battery energy until the very last moment, where the level drops fast. Apple has now communicated that their antenna problem actually isn't that bad, but that their signal strength indicator is very sensitive at the coverage levels where the problem was demonstrated. Apple will now "adopt AT&T’s recently recommended formula", which should improve on the problem. That doesn't change the fact that a piece of rubber can improve the iPhone 4 significantly, of course.

The Google Nexus One takes a different approach to the battery indicator than most: it starts to drop fairly fast after charging to full, but when the indicator is low and red, you actually still have a lot of energy left - 20% in the indicator means about 20% to go.

What is the best solution? There is a commercial side and a usability side to the problem. The commercial side depends on your business model, and I won't get into that here, but the usability side doesn't give a clear answer either. There are several processes that we want to support:

* If the user needs to plan usage of a limited resource for a specific amount of time (e.g. battery energy for one day), the indicator needs to progress during the entire time span.

* If the user normally doesn't care about usage of a limited resource (battery energy), but may end in a situation where the limited resource is sparse (battery almost empty) and then starts to care about it, the progress indicator should progress little during normal use and most when resources run out.

* If the indicator is used to indicate chances for downtime (e.g. signal strength), it should be most sensitive for high downtime probabilities.

* If the indicator is used to indicate rate of energy usage (e.g. signal strength), it should be most sensitive at the rate that is used most frequently. This may be in the upper or lower end, or in the middle.

* If the indicator is used as a provider of a value on which the user wants to do calculations, the indicator must reflect reality in a way that is easy to interpret. For instance, if you have 5 bars for battery, each could represent 20%. Or if you have 5 bars for signal strength, each could represent a constant factor (a fixed number of dB).
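The last bullet can be sketched as two simple mappings. This is an illustration only; the 5-bar granularity, the -55 dBm "full" level and the 10 dB step are assumed values, not taken from any real phone:

```python
import math

def battery_bars(percent: float, bars: int = 5) -> int:
    """Linear mapping: with 5 bars, each bar stands for 20% of capacity."""
    return math.ceil(percent / (100 / bars))

def signal_bars(dbm: float, full: float = -55.0,
                step_db: float = 10.0, bars: int = 5) -> int:
    """Logarithmic mapping: each bar below 'full' covers a fixed number of dB,
    i.e. each bar means the signal is a constant factor stronger."""
    deficit = max(0.0, full - dbm)   # how far below full strength we are, in dB
    return max(0, bars - math.ceil(deficit / step_db))

print(battery_bars(41))   # 3 bars: 41% rounds up into the third 20% slot
print(signal_bars(-95))   # 1 bar: 40 dB below the assumed full level
```

With these mappings a battery bar always means the same amount of energy, while a signal bar means a fixed multiplicative factor of signal power.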

In other words, there is no perfect solution, it will always be a compromise.

Thursday 17 June 2010

Poul-Henning Kamp popularizes algorithm research

"Think you've mastered the art of server performance? Think again." Poul-Henning Kamp recently published this article with the quote, which basically says that a binary-heap tree structure is inefficient and should not be used. Poul-Henning (PHK) has recently spent more of his spare time on blogging and making his views known to the public, and he writes articles that often spark great debates.

In this particular case, PHK's knowledge about virtual memory and high performance in cache-intensive applications leads him to complain that so many fellow programmers consider an in-memory binary tree efficient. The counter-argument is that this is well known among algorithm experts, and some actually complain that PHK's article was published by ACM despite that.

However, how many of you have attended an algorithms conference lately in order to catch up on the latest news? Probably not many. And even if you dig into the algorithms literature, the sheer number of algorithms for rare cases is overwhelming. PHK's article may not benefit many directly: many programmers don't use much RAM, develop in frameworks that assume there is more physical RAM than needed (Java, .net), or don't want to invest time in extra performance. But it does illustrate how programmers should innovate on their algorithms while solving problems, and the article definitely applies to Delphi programming.
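PHK's point can be illustrated with a minimal sketch. The 4 KB page size and 32-byte element size below are my own assumptions, not figures from his article; the sketch just counts how many memory pages a root-to-leaf walk of an array-backed binary heap touches:

```python
PAGE_SIZE = 4096                    # bytes per VM page (assumption)
ELEM_SIZE = 32                      # bytes per heap element (assumption)
PER_PAGE = PAGE_SIZE // ELEM_SIZE   # 128 elements fit on one page

def pages_on_path(n: int) -> int:
    """Distinct pages touched walking from the last leaf up to the root
    of an array-backed binary heap with n elements."""
    pages = set()
    i = n - 1                       # index of the last element
    while True:
        pages.add(i // PER_PAGE)    # which page this element lives on
        if i == 0:
            break
        i = (i - 1) // 2            # classic binary-heap parent index
    return len(pages)

# A 1M-element heap has a ~20-level path, and most of the lower levels
# each sit on a different page - a potential page fault per level.
print(pages_on_path(1_000_000))     # 14 distinct pages on these assumptions
```

Only the top few levels share a page; this scattering is exactly why PHK's page-aware B-heap layout wins when the heap doesn't fit in RAM or cache.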

Many great things were achieved because somebody tried to do things just a little better.

Monday 24 May 2010

Sandboxed vs. Trust, sandboxing wins

One of the most basic principles for preventing malware on Windows is that you trust your software provider. There are many trust mechanisms, including driver signing, website signing, remembering whether a file was downloaded or produced locally, etc. The idea is that if you can trace the origin of a file, you can hold the provider liable. If you need to break the trust chain, for instance when you download an unsigned setup.exe file that you want to install, you must confirm that you really want to do that.

On the other hand, we have sandboxing techniques. JavaScript originally had a lot of access to the local browser; for instance, one web page could access info from another web page. Obviously, this had to change: each untrusted web page must be isolated from the others. Google Chrome even added a second sandboxing layer inside the code, so that if there are bugs in the JavaScript API implementation, the JavaScript probably still cannot break through the second layer. This has given Google Chrome a very good security record. Other IT technologies also implement sandboxing or are sandboxing in effect: virtual machines, managed code. Google NaCl can also be considered a kind of sandboxing, although it's basically native code that is "at least as safe as JavaScript".

Web apps have grown extremely popular, despite being expensive to develop and often having low usability. The main reason is that they are very simple to deploy: you don't need to install anything, they work everywhere, and data is saved on the server. This is not entirely true: you need to install an adequate browser, which means they don't work everywhere, and traditional client/server software often also stores data only on the server. So why are web apps different?

Here is the key: web apps are usually much easier to install and configure, and you can use web apps from untrusted sources.

Both iPhone and Android solved both of these for local apps: it is almost easier to install an app than to bookmark a good web app, simply because the search for a good app is guided by the app store/market, and installation is as simple as bookmarking a web page. Using software from untrusted sources is handled by sandboxing. I have no idea whether the provider of my currency exchange rate app is evil or not, but the app cannot access anything except the internet connection, so it cannot be used to wiretap me. I am completely sure that it will not know my location, for instance. Android Market still requires signatures, but it does not need to be a signature with trust - the security is in the sandbox.

In addition to being at least as easy as web apps, iPhone and Android apps provide better usability because they are designed for the user interface hardware, and they provide better functionality because they can work offline and with access to hardware, if you permit it.

Most Android and iPhone users that I know have more apps installed on their phone than on their computer. The reason is not that the apps on the phone are smaller - there is an inherent problem with Windows apps: if you don't trust them, you won't install them on a computer that needs to be safe. Many IT departments lock down Windows computers so that nothing can be installed. Many employees in large organizations cannot install even small, simple apps, so web apps are their only choice - the browser is the only widespread sandboxing system on Windows.

The future is sandboxed.

Saturday 24 April 2010

How Android beats Windows 7 laptops

Lately, many have started using their phone for tasks for which they previously used a PC - including updating online spreadsheets etc. The PC features much better input and output devices (keyboard, mouse, screen), so why use the phone for these tasks? Here is a list of reasons why it makes sense to use a typical Android phone instead of a Windows 7 laptop for many tasks:

* The laptop does not have built-in GPS, so location information cannot be used to assist tasks by searching locally, looking up addresses, providing context-dependent functionality etc.

* The laptop does not have a log of what has been done lately while away from the PC. So it does not know who the user has been talking to, where the user has been driving, which destinations/shops the user has looked up etc.

* The laptop cannot easily exchange information via SMS or MMS, so quickly sending a link to someone you have just been talking to becomes a lot more complicated than if you just grab your Android phone.

* The phone is always turned on and online. This means that some tasks are usually done on the phone, and even when the laptop happens to be online, it is easier to stick with the usual way of doing things.

* Both devices have reflective screens, but it is much easier to avoid reflections on the phone, because the laptop is more restrictive about working positions.

* It is much easier to take a picture of a piece of paper using the phone than a laptop's built-in webcam, besides the fact that the webcam usually does not have an adequate resolution. This means that some information simply starts being created on the phone, not on the PC.

* Windows does not have a marketplace that makes it easy to find productivity-enhancing apps - and installing software on Windows is much more tedious, too.

* Windows does not sandbox applications. If you install the whiteboard photo & filtering & email app on Android, it cannot access your files, but it is not easy to prevent file access for a similar app on Windows.

Sunday 11 April 2010

In defense of the iPhone 4.0 SDK section 3.3.1

There have been many blog posts about the new iPhone 4.0 SDK objecting to section 3.3.1 of the developer license agreement, which says:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

Apple has introduced restrictions on the original programming language, which have been widely considered an attack on Adobe Flash. However, there is more to it. The market for smartphones has been very diverse for a long time. Microsoft attempted to standardize smartphones on Windows Mobile, which failed because of its power consumption and because it was always a bit late to the market. Apple made a huge hit with the iPhone, because they combined new technologies that made a great smartphone experience possible earlier than it otherwise would have been. They were rewarded for innovation.

Now that everybody is catching up on the basic technology, like touchscreens and sensors, other software vendors are seeking ways to standardize how software is written for many platforms. There are huge savings if a software vendor can create one application that can be deployed on iPhone, Android, Maemo, Bada and others, and on many different form factors. What would the result be for Apple? They would be just one of many, and they would probably not have any significant advantage over the competition. This means that their margins would drop, in a market where prices are already going down, and they would earn a lot less money. Apple does not want that, of course.

For the consumer, that would mean a lot of apps which do not exploit each platform well - feature lists would focus on the lowest common denominator of the target platforms, and few platform-specific features would be used to market apps. It would also be hard for phone platform developers to add new capabilities that app developers would love to use.

Apple's strategy is to separate itself from the apps on other platforms, giving the user a unique and different experience, which they believe will be better than the norm. If there are three features, and Apple does the first two perfectly but skips the third, while the competitors do all three in a mediocre way, many consumers will pick the Apple product. A good example is the HP Slate: it supports Flash, but nobody cares; consumers still want the iPad.

The iPhone and the iPad will never become devices that can do everything. Of those who had this expectation, many will be unhappy about Apple's latest move, but anyone who wants innovation in smartphones, great user experiences and real choice, should be happy about the new section 3.3.1.

Saturday 10 April 2010

Google Android reviewed by a Delphi developer

With the major part of my background in Delphi, but also some in C, C++, Assembler, Java, C# and others, I threw myself at the Android platform in order to figure out how it works.

The main language is Java, the obvious IDE is Eclipse, and after installing the Android SDK, things are well integrated with each other. The fundamental structure is very similar to Delphi - but things have different names and formats, of course. Forms are called Activities, DFM files are XML files, there is a manifest XML file like in Windows, a message queue like in Windows, and each application has a main thread that receives messages and is responsible for updating the GUI (don't update the GUI from other threads).

The platform comes with some APIs. For storage, we have a file system (SD flash ram), an app-specific database (SQLite-based) and a preference storage (like .ini files or registry). There are also APIs for network access, location, sharing data between apps, phone calls, user interface integration etc.

Generally, things are straightforward, but Eclipse isn't as smooth as Delphi. Examples are usually missing from the documentation, so Google and sample apps are great friends - and even though the form designer integrates with the IDE so that you can design something graphically, it's not as simple as putting a button on a form and double-clicking it to create a "hello, world" app. One nice thing is that you only have to design one user interface, and it looks OK on many different phones with very different screen resolutions. You can also adapt the user interface to different resolutions, of course, especially when rotating the phone, but many standard user interfaces immediately work on all screens.

You need to attach event handlers programmatically, and generally, the form designer is not as neat as a typical form designer for Windows apps. Since the screen on a phone is usually very small, each user interface usually tries to solve only one problem, like showing a list or editing one record of information. Therefore, there are special user interface classes for these purposes, like ListActivity, which is basically a form that focuses on supplying a list to the user. One of the benefits of this is that the form then has handler functions, like onListItemClick, which are implemented using class inheritance.

The Java language still does not have "procedure of object" constructs, so some event handling becomes more complicated, using inheritance, extra classes etc. Also, Java still feels a bit behind Delphi 2009/2010 in several aspects, but on the other hand, Java has better features in some areas, too. One interesting thing is the Google advice "Avoid creating objects" in the "Designing for performance" section of the Android developer guide. Considering that almost all API functions require you to pass a recently constructed object, you need to create a huge number of objects compared with typical Delphi apps, so I guess the advice tries to say: only create huge amounts of objects, don't create extreme amounts.

Performance is definitely a key issue when creating mobile apps. Performance is not so much about instructions per second, but more about GUI responsiveness and battery usage. Many of the things you want to do on a phone require external systems to react, and they may not react immediately. For instance, a GPS fix takes time, and looking up an address over the internet can take a long time. Such things need to be handled in separate threads on Android, so if you don't master multithreading in Java, that is one of the first things to learn. Fortunately, it is not that difficult, so don't let it scare you off; Java does many of the things you need in easy ways.
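The pattern itself is generic: do the slow work on a background thread and hand the result back to the single GUI thread through its message queue. Here is a language-neutral sketch in Python - not Android API code, and the names and the fake lookup are made up for illustration:

```python
import queue
import threading

# Stands in for the main thread's message queue (Android's Handler/Looper).
ui_queue: "queue.Queue[str]" = queue.Queue()

def slow_lookup(address: str) -> None:
    """Background work, e.g. a network or GPS lookup that may take seconds."""
    result = f"coordinates for {address}"  # pretend this blocked on the network
    # Never touch the GUI from here; post the result to the main thread instead.
    ui_queue.put(result)

threading.Thread(target=slow_lookup, args=("Main Street 1",)).start()

# The main thread stays responsive and picks up results as they arrive.
print(ui_queue.get())  # -> coordinates for Main Street 1
```

On Android the same shape appears as a worker Thread posting back via a Handler on the main thread, but the division of labor is identical.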

Debugging is great - the SDK includes an emulator that can emulate different kinds of devices, and you can even simulate incoming phone calls and see stack traces from the emulator inside the Eclipse IDE. I see no reason why you should need hardware in order to develop Android software, except that I couldn't make GPS simulation work. It's probably just an option somewhere that I didn't find.

Besides a great Delphi-like form designer, one thing I really missed was a non-SQL local database similar to BDE or DBISAM. It seems to be overkill and a waste of CPU cycles to use SQL on app-specific databases that are single-user and cannot be shared by multiple applications - and somewhere I also saw benchmarks indicating that SQL is a bad idea in this context. Data sharing with other apps is handled using Content Provider interfaces, not SQL, but that is outside the scope of this blog post.

Publishing to the Android Market is extremely simple. There is a guide for it in the Android Developer Guide, and if you follow it, your application is available worldwide immediately, and the market reports how many phones downloaded your app and how many still have it installed. Upgrading is also quite simple: just upload a new version of your app, that's it, and all users will be notified the next time their phone checks for upgrades.

If you know the Java programming language and Eclipse, I would say that it should take less than a day from scratch to install Java, Eclipse and the SDK, create a simple app and publish it on the Android Market. There are no expenses for the tools, but there is a one-time fee of $25 to sign up as a publisher on the Android Market. I guess this fee is meant as a kind of spam protection, and it also makes it possible to trace your identity, if needed.

Saturday 3 April 2010

Bullshit Bingo for programming

A Bullshit Bingo card for software developers:

Remember the rules, you need 5 in a row before you win!

Wednesday 24 March 2010

Nexus One: touch screens do not solve all problems

When I first learned about Android and iPhone, they were fascinating to see, but I didn't leave my old Nokia phones before somebody demonstrated to me that it was actually possible to operate these phones with one hand. My Nexus One is still mostly handled with one hand, meaning that I don't use multitouch pinch-to-zoom much, even though it is available; instead I use the on-screen zoom buttons.

However, I still cannot type on my Nexus One with one hand without looking at the on-screen keyboard. Also, physical keyboards allow higher typing speeds than on-screen keyboards, so the current solution isn't optimal. Furthermore, the Nexus One only has 3 physical controls: the volume rocker, the power button and the trackball; everything else requires you to look at the phone. This choice is not optimal: while handling the phone in an unlocked state, you may easily adjust the ringer volume by accident (!), so there are applications that restore the ringer volume in case you do this. Also, it takes much more time to wake and unlock a Nexus One than to unlock a modern Nokia phone. The phone is great, no doubt about it, and Nokia is seriously behind these days, but there is still room for improvement.

In a recent CNET review of the Kindle and the Nook, the reviewer gave the Kindle significantly more points on usability, mainly because it has dedicated buttons for the operations users perform most, whereas the Nook's touch screen is not as easy to operate. Considering that we get more and more devices, it is fair to assume that devices will get more and more specialized, meaning that it will become increasingly easy to identify frequent operations that should be supported by dedicated buttons, and that the best devices will get more physical buttons. Who knows, maybe we will see Bluetooth remote controls for Android phones that enable you to make calls and write SMS messages without taking the phone out of your bag. I would buy one.

Wednesday 17 February 2010

Why almost nothing revolutionary comes out of Europe

Recently, I overheard this one: "why is it that almost nothing revolutionary comes out of Europe?" The question expresses a perception, and that perception can be explained.

Most tech companies in Europe don't target the world market - they target one country first, and once they dominate that, they invade the next countries. This is the reason why has a huge market share in Germany and is present in the USA, but is totally unknown in Denmark, which borders Germany within the EU.

Many companies dominate their own country but have no presence in other countries. Differences in language, law, culture etc. separate the markets. These companies either produce something better than the largest international players, or end up being destroyed in the competition. Many countries had a dominant word-processor company, but Microsoft Word killed most of them. In other words, if the local company doesn't make a greater product, it doesn't survive. The local population therefore often considers local companies to produce better products than those from other countries. A good example: a typical sink faucet from the USA would never stand a chance on the Danish market, because we are used to more advanced systems with thermostats etc. Many Danes find it annoying when the cold-water and warm-water taps are separate, and many find it annoying if the shower isn't thermostat-controlled. Danes also use more expensive electrical wall sockets than Germany or Sweden. This makes a house 0.05% more expensive, but better-looking, and few seem willing to save that money.

There are many revolutionary products from Europe, but they are probably not as highly profiled in English-language media. Many products are kept out of the U.S. market for various reasons, and some get a delayed entry. When I recently went into a T-Mobile shop in New York and explained that I was from Europe, the immediate reaction was: "Oh, that's where all the cool Nokia phones are." If the initial launch isn't done in an English-speaking country, the product may be old news by the time it reaches one. In the meantime, a latecomer may have entered (or started in) an English-language market, so that the original inventor looks like the latecomer. In a similar way, products from the USA that are first movers internationally sometimes look like latecomers in other markets. Which phone came first: the Google Nexus One or the HTC Desire? Where I live, it seems that the HTC Desire came first.

You need to read news in languages other than English in order to read about tech from Europe. Microsoft thought that Great Plains would have better software than a company from Europe, but changed its mind, and is now basing its business on European software whose power it didn't understand until it looked more closely at it.

Saturday 30 January 2010

Will Apple accept Google NaCl on iPhone, iPod, iPad?

Apple did it again - they released a great product called the iPad. The main reason people buy these products is that they are good quality and do things better than the competitors'. However, in order to protect Apple's business model, and in order to ensure a good overall consumer experience, Apple restricts which applications can run on its devices. Google Voice was not initially accepted into the Apple App Store, but Google has a great interest in making sure that its services are universally available. Its solution was to create a Google Voice web application that does the job, and now Google Voice is available on the iPhone.

Google Native Client (NaCl) expands the capabilities of the web, so that web applications can contain codecs, 3D graphics and computation-heavy code, and still run well. HTML5 includes offline capabilities, and in combination the two form a complete framework for writing offline, native-speed apps delivered via the internet. The App Store will no longer be needed by those who provide apps for free, unless they have very specific needs for access to local hardware.
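The "offline capabilities" mentioned here are HTML5's application cache: a page points at a manifest file, and the browser keeps the listed resources available even without a network connection. A minimal sketch of such a manifest (the filenames are hypothetical):

```text
CACHE MANIFEST
# v1 - change this comment to force clients to re-fetch the resources

CACHE:
index.html
app.js
style.css

NETWORK:
*
```

The page opts in by referencing the manifest from its root element, e.g. `<html manifest="app.appcache">`; after the first visit, the app loads and runs with no connection at all.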

Apple has a choice, but if they refuse Google NaCl, their products will do less than the competing Android and Chrome OS products. Apple's products will be inferior. Refusing will only work as long as there are no significant apps for Google NaCl.

Google Chrome OS includes NaCl and is basically a lowest-common-denominator OS: every application written for it can also run on other OSes. By providing Google Chrome for Mac, Windows and Linux, Google provides a platform that is large enough to be interesting for app developers. It's standards-based, it's open source, it's easy and capable, it's free, it's cross-platform and it is huge. Apple can be in or out.

Apple is free to pick the default choices on its devices, and can limit web apps' access to local storage. But competition is catching up quickly, and Apple needs to invent more. I would not be surprised to see Apple become very active in delivering a cohesive web-service offering similar to Google's. What's next - Apple buying Yahoo? Does Microsoft have anything to offer, or have they completely lost the consumer market?

Saturday 23 January 2010

VMware converter review

Recently I upgraded to a newer laptop and considered virtualizing my old one, because it held so many software-development settings that are not easily copied: search paths that depend on the drive configuration, installations of tools (Cygwin etc.), and many other things. In order to be 100% productive at all times, I simply need the old laptop to be available until the new one is 100% OK.

This is possible using VMware or VirtualBox, but I found no full description of prerequisites and consequences, so I just went ahead with the solution that seemed simplest: VMware Converter plus VMware Player. This is a short summary of how it went.

VMware Converter can convert a physical PC into a VMware image, ready to be played inside VMware Player. This gives you access to your old PC from the new PC, in a window, which can be great until you have moved all the settings and files, and installed all the apps, onto the new PC. The first thing you need to do is to identify a PC that is suitable to receive the result of the conversion. It must:

* Have enough free hard disk space to hold the image of the old computer. If your old laptop had a 100 GB hard disk with 70 GB allocated (30 GB free), you should have at least 80 GB available.

* Be able to share a directory as a Windows file share. Not all PCs are set up for this.

* Be able to let the VMware Converter app listen on ports 80 and 443. It is possible to use other ports if you already have a web server installed.

* Use a wired network connection, because then the transfer takes hours instead of days.
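The disk-space requirement above can be summarized as a rough rule of thumb: the image ends up roughly the size of the *allocated* (used) data on the old disk, plus a working margin. The 10 GB margin below is my own reading of the 70 GB → 80 GB example, not an official VMware figure:

```python
def required_space_gb(allocated_gb, margin_gb=10):
    # The conversion image tracks the used space on the source disk,
    # not its total capacity; free space on the old disk is irrelevant.
    # margin_gb is an assumed safety buffer for conversion overhead.
    return allocated_gb + margin_gb

# The post's example: 100 GB disk, 70 GB allocated -> at least 80 GB free needed.
print(required_space_gb(70))
```

So a mostly-empty 500 GB disk can be cheaper to receive than a nearly full 100 GB one.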

Your old computer should also use a wired network connection, of course, and be on mains power (not running on battery) before you start. You need to install VMware Converter on both computers, and then the rest is rather easy: it runs as a service on the "server", and on the old PC you are guided through the process. After a while, maybe 30 minutes, a progress indicator will tell you how long the entire process will take.

My old laptop was fully TrueCrypt-encrypted, and I chose to decrypt it before the conversion, just to make sure that the encryption did not interfere with the conversion process. I am not sure that this is necessary, though.

When the process is finished, you have two files on the server, and you can uninstall the Converter again. You can then move the files to your new PC, install VMware Player, and start using it. My old laptop was running Windows XP, and when I first started the image, it used 640x480 resolution with 256 colors or something like that, and demanded to be re-activated. It took ages to load, but after installing VMware Tools it worked smoothly and nicely, with full screen resolution and full color depth - except that the bridged VMware network connections did not always work. Switching to VMware NAT connections solved that problem.

Windows XP has to be reactivated within 3 days after doing this, because of the significant changes in hardware. I reconfigured the amount of RAM it should use, the CPU settings, hardware settings etc., so that I had the final hardware configuration before reactivating Windows XP over the internet. That worked perfectly, and I could archive the old laptop. After a week, when I was completely sure that everything worked and that I hadn't missed anything important, I retrieved the old laptop and wiped it by installing Ubuntu.

Except for the bridged-networking problem, which was easily solved by switching to NAT, and the lack of up-front documentation, everything went perfectly. I will do this again next time.

ANSI in Delphi is not about the character set ANSI

One of the most frequent misunderstandings I have seen about the Unicode migration of Delphi is that many people take the ansistring type of Delphi 2007 and older, and all the Ansi* functions in the APIs, to be about the ANSI character set.

If you live in the USA or any other country that uses Windows-1252 (aka "ANSI") as the default character set, it all fits together: an ansistring contains text in the ANSI character set. In the rest of the world, however, things are more complicated. The default 8-bit character set in Windows is not Windows-1252 in countries like Greece, Hungary, Russia, Japan and China. These countries use letters that are encoded with byte values >= 128, or sometimes with multiple bytes. This means that:

* Document filenames inside ZIP-files probably use characters that are not shown correctly if the zip file is opened on a U.S. computer

* Uppercase() and similar string operations do not work on normal ansistring texts.

* Simple Windows text files are not compatible with PCs from countries that use other character sets.

* Ansi* functions exist, but don't use the ANSI character set
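The core of the problem is that the same byte means different letters under different Windows code pages. A quick Python sketch (my analogy, not Delphi code) makes this concrete:

```python
raw = b'\xc6'  # one byte, as it would sit in an 8-bit ansistring

# Under Windows-1252 (Western Europe / USA) this byte is the Danish
# letter 'Æ'; under Windows-1253 (Greek) the very same byte is the
# Greek capital letter Zeta.
print(raw.decode('cp1252'))
print(raw.decode('cp1253'))
```

This is why a ZIP file with Danish filenames, or a plain text file, looks garbled when opened on a machine configured for a different code page: the bytes are unchanged, but the interpretation differs.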

For Delphi 1-2007 developers, it has always been important to use Uppercase(ansistring), Lowercase(ansistring) etc. for machine-readable text (identifiers etc.), and AnsiUppercase(ansistring), AnsiLowercase(ansistring) etc. for all human text (text from a TEdit etc.) in order to have an app that localizes well. AnsiUppercase uses the current local character set for its conversion, whether that is Windows-1252 or not, so that AnsiUppercase('æ') becomes 'Æ' etc. Basically, all the functions prefixed with "Ansi" are the locale-sensitive versions, whereas the functions without the prefix are for machine-readable text, where the result needs to be 100% deterministic and locale-independent.
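The split between the two families of functions can be illustrated outside Delphi. In this Python sketch (an analogy only - the real functions are Delphi's Uppercase/AnsiUppercase), ascii_upper stands in for the deterministic, locale-independent routine, and Python's built-in str.upper stands in for the character-set-aware one:

```python
def ascii_upper(s):
    # Like Delphi's plain Uppercase(): only a-z is mapped to A-Z,
    # so the result is identical on every machine, in every locale.
    return ''.join(chr(ord(c) - 32) if 'a' <= c <= 'z' else c for c in s)

text = 'æble'  # Danish for "apple"

print(ascii_upper(text))  # the non-ASCII 'æ' is left untouched
print(text.upper())       # the character-set-aware version maps 'æ' to 'Æ'
```

Use the deterministic version for identifiers, file formats and protocols; use the aware version for anything a user will read, exactly as the Delphi rule above prescribes.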

This also means that in a well-written app, every string variable contains either locale-independent text or locale-dependent text, never both. Making this distinction was important in order to know whether to use Uppercase() or AnsiUppercase() on the variable.

With Delphi 2009, Unicode is often mentioned as a localization thing, so many people struggle to get this right. However, it's still the same problem: if your app is only meant to work in the USA, you can disregard all the localization stuff, and it's VERY easy. If your app was already well internationalized, the conversion to Unicode is also rather easy. It only gets really complicated if your app was not internationalized and you now want it to be. But that's not about Unicode strings - it's about internationalization.

Saturday 16 January 2010

English - the superlanguage

My first programming language (ND80) stored all identifiers by reference in order to save RAM, which was scarce. Using the swap instruction, it was possible to replace any identifier with another, so basically the entire programming language could be translated to Danish, my native language. Sounds ridiculous, right? It was. Later, Microsoft did the same: Excel functions were translated to Danish, VB programming was Danish-ified, COM APIs were localized etc. This caused a huge number of problems - it made support difficult, it made it hard to find help on the internet, and localized APIs meant that some apps did not work with MS programs localized to other languages. Of course there were workarounds and solutions for most of the "problems", but the problems were real and sometimes caused real havoc.

One of the 5 Danish regional administrations just introduced ODF as the standard format for document interchange between MS Office 2003, MS Office 2007 and OpenOffice, because this solves problems like date-format mismatches (ddmmyy in some programs, localized ddmmåå in others). It will not solve them fully: if a spreadsheet contains an expression with 'ddmmåå' in it, it may not work in a non-Danish spreadsheet at all, no matter how you save it.

The easy solution was to do everything in English, using U.S. notation (decimal dot instead of decimal comma) etc. I guess everybody has now realized that this is the way to go for source code, APIs, XML files and so on.

However, in recent years the evolution of the internet has widened this problem. Humans increasingly interface directly with software, specifying parameters themselves, and the most common such interface is the search engine. How do you easily explain to a 6-year-old how the tilt of Earth's rotation axis creates summer and winter? YouTube, of course. But don't use Danish words in your search - it will probably not yield a single good result. So even though my daughter can write on a computer, she still cannot use YouTube. She doesn't know English. I encounter this problem many times per week.

The problem is not limited to searching. Many electronic devices are not localized, a lot of software is not localized, and what language do you use on Facebook if your friends don't all understand Danish? Wikipedia is another good example: by far the biggest Wikipedia is the English one, roughly three times the size of number two, the German one. Wikipedia has become a significant provider of information, and you simply need to know English to use it.

To understand all the implications of an international contract, English is the language of choice. The EU has made a guide for "European English", which defines a terminology that may not always match that of any English-speaking country, and many national terminologies are translated from English-language originals. English has become the new Latin.

Google Translate tries to solve some of this. However, when I read Chinese web pages in Danish using Google Translate, it is obvious that the text was translated to English before it was translated to Danish. There can be many reasons for this, but it certainly helps comprehensibility when I use Chinese->English instead of Chinese->Danish. In any case, Google Translate cannot solve all the problems; it's merely a patch.

Only one or two decades ago, you would have looked at the size of countries, measured by population and economy, in order to decide what language to learn. Today, English is much larger than the sum of the English-speaking countries.

The latest statistics indicate that languages other than English are currently losing popularity in Danish schools. That's a problem: most people in the world don't speak English well. If you want to reach those people, you need to localize. Even when you meet a person who seems to speak and understand English well, you need to realize that this sometimes requires that person's full brainpower. In other words, if you ask this person to solve a complicated task that involves English, like programming an HD recorder, it is much harder for them than if the HD recorder had been localized. Also, just because a person can express himself or herself in English in one context, it doesn't necessarily mean that he or she can do so in another context that would be no problem in his or her native language. In order to localize well, an application specialist should know the target language well enough to inspect the localized result.

So, remember to localize, learn languages, and remember to teach your children English. And in the unlikely case that your native language is English, here is a sign not to laugh at - it's very serious:

If you're in doubt about what it means, use Google Translate.