Looking back on this video with Anders Hejlsberg about the future of C#, from March 2009, or this blog post about the upcoming bloat explosion, feels quite awkward given the current explosion of iOS and especially Android. There are several awkward parts in the video: the focus on objects, the "huge amounts" of memory, and the statement that multithreading is the exception.
As readers of this blog know, Google has a famous quote in their designing for performance article about programming Java for Android: avoid creating objects. Also, on mobile platforms there is not plenty of memory, quite the contrary, and multithreading is a vital necessity in order to achieve reasonable responsiveness. Energy efficiency has become very important, and Apple even introduced a ban on runtimes. Also, Python and PHP are on the rise; it actually makes sense to raise venture capital for PHP-based projects these days.
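To make Google's advice concrete, here is a minimal Java sketch (class and method names are mine, purely illustrative): the same loop once with a fresh object per iteration, and once with a single reused object, so the garbage collector has almost nothing to do.

// Minimal sketch of the "avoid creating objects" advice: in a hot loop,
// reuse one mutable object instead of allocating a fresh one per iteration.
public class AvoidAllocations {

    // Plain mutable value holder; on Android this could be android.graphics.Point.
    static final class Point {
        int x, y;
    }

    // Allocates one Point per iteration: lots of short-lived garbage.
    static long wasteful(int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            Point p = new Point();
            p.x = i;
            p.y = i * 2;
            sum += p.x + p.y;
        }
        return sum;
    }

    // Reuses a single Point for the whole loop: no per-iteration garbage.
    static long frugal(int iterations) {
        long sum = 0;
        Point p = new Point();
        for (int i = 0; i < iterations; i++) {
            p.x = i;
            p.y = i * 2;
            sum += p.x + p.y;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println(wasteful(n) == frugal(n)); // same result, far fewer allocations
    }
}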
It seems that Microsoft and Anders simply planned for the wrong future - assuming that the trend would continue, and that complexity, bloat and computer sizes would continue to rise. Instead, the world has gone low-energy, low-complexity and small.
I wonder how such a video would look if Anders was interviewed today.
18 comments:
Hardware is getting more and more performant, and most mobile phones (at least the ones with Android, and also the iPhone 4) which came to the market during the last 6 months have at least 256 MB of RAM.
Today's smartphones are as performant as PCs from less than 10 years ago.
Next year will certainly see the high-end phones move to 1 GB of RAM, with dual-core CPUs.
So the fact that you can use runtime-based, high-level, object-oriented programming languages comes with less and less penalty.
And it makes developers' lives more pleasant.
Just for a laugh, here is a string trim in Objective-C vs. Java:
Objective-C:
NSString *trimmedString = [dirtyString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
Java:
String trimmedString = dirtyString.trim();
How could Apple make people swallow that?
And by the way, when I compare the launch speed and run speed of apps on an iPhone 4 and my HTC Desire (native code vs. runtime-based Java), I see no difference, and often the Android counterpart even starts faster (maybe because Android is smarter in keeping the app warm and frozen in RAM instead of relaunching it every time).
On mobile phones, the battery is the limit. If the phone cannot hold a charge for one day, people will buy something else. Adding an extra CPU core will not allow more bloat; it just makes things a bit smoother. With regard to start times: your observation is correct, Android's fast start times are usually achieved because the app is already running. Using a class like Locale is enough to slow down the initial app start.
I think you've chosen the wrong programming language if you have to ask developers to avoid creating short-lived objects ;-)
I always thought of .NET at large as a replacement for Java and Visual Basic (or Delphi, for that matter): enterprise, rich or thin client development. It was never made for doing (too) low-level stuff, and there will certainly always be specific situations where it is too slow.
Also, Anders made his statements to promote the upcoming C# 4.0 language version, so his statements are fairly honest and true: C# wants to compete with Python in some places.
In the end, the CLR and any heavyweight JIT are not ready for phones, because compilation takes too much time. In general I think almost no JIT is powerful enough to give both good code quality and good startup time. For this very reason, C++ is an option on Android.
.NET was never envisaged as a platform for mobile devices - it was created to displace Java in server farms (".NET" - BIG clue, right there in the name!). It was also the eventual delivery of COM+ (parts of the .NET runtime are still named for their COM+ origins).
COM+ as finally delivered was a shadow of what was promised; much of that promise only really came to fruition in .NET.
Java, on the other hand, was always intended for use in small devices... it was born out of a project to create a language and runtime for digital TV set-top boxes. Java found its way into server farms and onto desktop PCs by accident - but small, resource-constrained devices were its home.
The biggest mistake this industry makes (and persists in making) is to chase the Holy Grail of one system, one runtime and one platform for all purposes.
On this I disagree with Heinlein - specialisation isn't JUST for insects, it's also for computers.
@Jolyon: actually, .NET Compact Framework version 1 was created to directly compete with the Java Mobile Edition at that time.
In fact, they trimmed down the .NET CF so much that it is smaller than the Java ME framework (and crippled it so much that it gave rise to http://www.opennetcf.com/).
--jeroen
There are two basic problems with the Java runtime on Android:
1) Many standard Java algorithms produce so much garbage that the garbage collector runs continuously under high load, unless you stop creating objects.
2) Everything is serialized and the SD card is slow, so accessing data by indexing isn't easy.
That's why Delphi (or FPC) could still be in the running. Its memory model consumes much less than Java/.NET, and if FastMM4 has problems with multi-core scaling, we can change it - see e.g. http://blog.synopse.info/tag/SynScaleMM or others. You can even use stack-allocated structures, using record/object instead of class. I've seen FPC applications use very little memory, even on ARM CPUs. See for instance http://shootout.alioth.debian.org/u64/pascal.php
The issue with Java on Android is, IME, that its classes are too fragmented. For instance, on the SD card serialization issue, you can get decent performance, but you need to remember to wrap your streams in buffered streams (see the sketch after this comment).
There are many Java classes that are not "ready-to-use" on their own, and that need to be coupled together to be effective, and there are also many specialized class variants for each task.
They will all "work" for a given task, but only a choice combination will work efficiently.
Java really lacks IME a set of swiss-army-knife classes, that would not offer the very top performance all the time, but that would guarantee good performance all the time, instead of behaving as performance booby traps most of the time.
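To make the buffered-stream point concrete, here is a minimal Java sketch (class name and file path are purely illustrative): the raw FileInputStream is wrapped in a BufferedInputStream, so small reads hit an in-memory buffer instead of the slow storage.

// Minimal sketch: wrapping a raw FileInputStream in a BufferedInputStream so
// that byte-by-byte reads are served from an in-memory buffer instead of
// issuing one I/O request per byte. The file path is only an illustration.
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedReadExample {
    static long countBytes(String path) throws IOException {
        // Unbuffered, new FileInputStream(path) alone would go to the OS for every read().
        try (InputStream in = new BufferedInputStream(new FileInputStream(path), 8 * 1024)) {
            long count = 0;
            while (in.read() != -1) {
                count++;
            }
            return count;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countBytes("/sdcard/data.bin")); // path is illustrative
    }
}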
@Arnaud
Application performance is not necessarily about how well the memory manager scales; some constructs simply scale worse than others. For example: use StringBuilder (or StringBuffer) instead of plain strings, and in C++ almost never use the + operator on std::string.
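A minimal Java sketch of the StringBuilder point (class and method names are illustrative): repeated concatenation with + creates a new String on every iteration, while a single StringBuilder is appended to in place.

// Minimal sketch: string concatenation with + vs. StringBuilder.
public class ConcatExample {
    static String withPlus(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i + ",";   // allocates a fresh String each time: O(n^2) copying overall
        }
        return s;
    }

    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');   // appends in place: amortised O(n) total work
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 10_000;
        System.out.println(withPlus(n).equals(withBuilder(n))); // same output, very different cost
    }
}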
.NET in its Mono implementation can support the performance profile of phones (both Android and iOS), because most of the performance gain comes from pre-compilation, not from how well (or how badly) the GC scales. Also, .NET supports stack allocation in a clean, designed way (using the struct keyword), so there is no need to state things in absolutes. If your point is that most developers will still use reference-based classes instead of structs, I can reply that a lot of Delphi developers also use TObject-based classes everywhere, which may lead to bad performance profiles. Anyway, a person who uses a profiler will find the bottlenecks and switch to a stack-based implementation, regardless of whether the code runs in a VM (like Mono) or in Obj-C.
@jeroen - the .NET Compact Framework was created *long* after .NET itself was originally created - as I said: .NET == **COM+**.
The fact that after they had created this thing for servers they then decided to try to shoe-horn it into mobile devices is a separate issue.
And the fact that they essentially created something entirely new that just happened to have the same name and some things in common, reveals that it was not designed for what they subsequently tried to do with it.
It would be like releasing a product called "Delphi for PHP" which did not use the Delphi language OR the Delphi IDE, or the VCL, but which just happened to have an editor that looked a bit like Delphi and a runtime library that smelled a bit VCL-like.
@Jolyon, before posting such strong opinions stated as facts, please get your history right.
The .NET CompactFramework version 1.0 was released in 2002, the same year that the regular .NET Framework version 1.0 was released.
A bit more background on why I think that - like Java - .NET is important in a wide range of environments:
The .NET CF worked out a lot better for me than you tried to describe, especially when compared to the traditional ways of building mobile apps back then (C or C++ with cross compilers and limited debugging capabilities).
Nowadays, you can even get tinier using the .NET MicroFramework. Of course that is a trade-off (less space means less resource usage, but also less framework to carry around).
The .NET MicroFramework is the only .NET platform that can run on an RTOS obtaining real-time behaviour. That is something unique when compared to the regular .NET world.
I've been outside of the Java world for almost a decade now, but I can't imagine Java having a MicroFramework counterpart (I remember they were working on a smartcard version of Java).
I think that back in 1984, when "The Network is The Computer" was coined, Sun was way too early with that. They were right, but way too early for the general public to understand the message.
Microsoft had much better timing with .NET and C#, spot on in both naming and timing.
A lot of people I met in the industry started to lose their faith in both Java (becoming too academic, perceived slowness of the JIT and GC) and C/C++ (too much shooting yourself in the foot).
Microsoft looked very carefully at the mistakes made in both of these worlds (and of course in the Delphi world too) and came up with something very new for their clientele, in the meantime drawing a lot of attention from the C, C++, Delphi and Java worlds as well.
Given the range of things you can do with .NET technology, I do think they planned the .NET path well from the beginning.
Of course, when utilizing this in your projects, you need to be very aware of where you run. In a set-top box you cannot do the same stuff as in a server farm. But with .NET technology you can go both ways if you want to, and reuse a lot of your knowledge.
In that respect, both Java and .NET address similar worlds, and I'm pretty sure that is what the .NET team at Microsoft was after when they started this journey back in 1996 while developing Visual J++.
Basically, the current realm of mobile devices is for developers that write bloat-free software.
But with the increased performance of low-voltage multi-core processors, and the slow increase of battery capacity, future devices can do more stuff. So you will see bloat there too.
With respect to COM and COM+: I never felt comfortable with the unmanaged origins of those. In fact the whole dynamic thing in C# 4 feels as uncomfortable as variants in Delphi to me.
Luckily, you don't have to go that way when doing .NET.
--jeroen
The main energy consumer for most advanced Android users is the screen. So you may argue that there is still room for using more energy, but many people still want smaller phones with long battery life and a physical keyboard, similar to the Nokia E52, just with Android. In such phones most of the energy is used by the phone itself, so energy efficiency will need to improve, not degrade. Not everybody accepts having to charge daily and to walk around with a device that has a large screen but cannot be operated without looking at it.
Therefore, we will see more asynchronous electronics, more demand-based software execution, more centralization of event handling (like Android's push technology), etc. The traditional threading and garbage collection systems in .NET and Java are not designed for this, so we will see a lot of compromises, until one day somebody invents yet another programming language that does it really well.
Mark's comment was deleted because it contained an attack on a person and did not discuss the topic.
@Jeroen - sure .NET CF was released at/around the same time as .NET.
But the *original* .NET started life as COM+... along the way they chose to try to shoe-horn it into mobile devices, something for which COM+ - the progenitor of .NET - *was not /originally/ designed for*.
And characterising .NET as having trodden a well planned path is hilarious in the light of the repeated about-faces and changes in direction that have characterised the evolution of the platform.
@Jolyon: we are drifting off topic here, but I give it one more try:
COM+ (previously MTS) is unmanaged, COM based and has mtxex.dll at its kernel.
.NET is managed, does not require mtxex.dll, and (if you want it to) can wrap around COM+, but can completely exist without COM+.
Sure: with .NET you can do similar things as COM+.
But .NET started out as a much broader platform aimed at solving problems that other platforms and big PC-languages at the time (Java, C/C++, Delphi, VB6, classic ASP, etc) were having.
So it is way too far-fetched that .NET originates in COM+.
In fact, the origins of .NET were laid when Anders Hejlsberg was working on Visual J++ and WFC (Windows Foundation Classes). Microsoft specifically hired him, Peter Sollich and a few other specialists because of their deep knowledge of compilers and run-times.
The centre of .NET is the CLR, IL and the core framework: they allow the .NET languages to run and interoperate far better than the original designs of MTS ever could have done.
Go ask the people involved back then; the last time I spoke to them, COM+ was not the centre around which .NET development started.
--jeroen
All technologies borrow from each other, have a starting point, and leave their starting point. Delphi started out on a Nascom computer, without an editor, as a fast compiler without objects, and it has had many different libraries. However, if you want to look at the potential of a technology, you need to look at its dependencies. Object Pascal is similar to C++ in that it can be made to run on other CPU architectures, can be adapted to any configuration of RAM sharing and CPU count, and can accommodate any kind of memory allocation logic.
In telecommunications, it is a well-known fact that small pieces of information should be bundled into larger pieces in order to increase utilization, i.e. efficiency. The same applies to most PC hardware: it simply kills performance to allocate memory in small chunks. Memory chunks of 1 kB or 1 MB are much more efficient than a tiny object of only 100 bytes. Having a pointer to an Integer object is the ultimate insanity; it is like sending 10-byte chunks over a 1 Gbps Ethernet connection, and we know how that destroys performance. A scalable, high-performance programming language does not use lots of small objects on fast computers. It's that simple.
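To make the "pointer to an Integer object" point concrete, here is a minimal Java sketch (class name and sizes are illustrative): an int[] stores its values in one contiguous block, while an Integer[] stores a pointer per element, each referring to its own small heap object.

// Minimal sketch: summing a million values stored as primitives vs. as boxed
// Integer objects. The int[] is one contiguous chunk; the Integer[] is an
// array of references, each pointing to a separately allocated small object.
public class BoxingExample {
    public static void main(String[] args) {
        final int n = 1_000_000;

        int[] primitives = new int[n];          // roughly 4 MB in one chunk
        Integer[] boxed = new Integer[n];       // n references plus n small heap objects
        for (int i = 0; i < n; i++) {
            primitives[i] = i;
            boxed[i] = i;                       // autoboxing allocates (outside the small Integer cache)
        }

        long sumPrimitive = 0, sumBoxed = 0;
        for (int i = 0; i < n; i++) {
            sumPrimitive += primitives[i];      // sequential, cache-friendly reads
            sumBoxed += boxed[i];               // extra pointer chase and unboxing per element
        }
        System.out.println(sumPrimitive == sumBoxed); // same result, very different memory behaviour
    }
}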