Carmack on Java, and interesting instructions

John Carmack has updated his blog again. Lots of cool stuff for geeks in there, and once you decipher what he's doing he writes quite readably. This time he takes a long hard look at Java phones. His main beef is that Java is not fast enough for games, and most of that performance loss comes from the Java sandbox. He seems to have had fun taking the whole thing to bits in a way that few professional programmers do these days - normally you have to find a PhD student's paper to get an inside look at what's going on.

So Carmack's beef is the loss of control over the hardware: not being able to stretch it to its limits, coupled with the fact that stretching it destroys any useful portability. As a teenager I loved writing games - or more specifically the starts of games, I was never too good at finishing them... - so I feel a natural bond with Carmack's problems.

In the real world, security in phones is pretty vital, and now that I use a phone for calling people, a PDA for my diary, a camera for pictures and a Gameboy for games, life is much simpler. Each attempt at combining these items seems to end up with one of them being shoddy - or, even worse, banned. Mobile stuff seems far from ready for prime time.

However, Java is something I care a little about. Sun have done a good job in designing a robust language. Imitation is the sincerest form of flattery, and it's obvious MS feel the same way. Good performance from such an easy-to-use language is a must. I was actually quite impressed by the raw Java performance of my 3GHz PC, but I guess it should be quick ;).

Carmack highlights array bounds checking as a constant performance drain in Java. I guess Java virtual machines have to do something like this for every access:

load the index
compare it against the lower bound
jump to the error handler if below
compare it against the upper bound
jump to the error handler if above
read/write the value

I was trying to picture whether an optimizing compiler could take anything out of that. I guess a really good dynamic translator could spot accesses that are guaranteed to be in range and hoist the checks out of the loop, but it all sounds quite hairy.
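If I sketch it in C (the helper names are made up here, and this is just the gist rather than real JIT output), the per-access check and the kind of hoisting a clever compiler might manage look something like this:

#include <stddef.h>
#include <stdlib.h>

/* Roughly what the VM does for every a[i]; abort() stands in for
   throwing ArrayIndexOutOfBoundsException. */
static int checked_get(const int *a, size_t len, size_t i)
{
    if (i >= len)      /* one unsigned compare covers both "negative"
                          and "past the end" */
        abort();
    return a[i];
}

/* A really good compiler can sometimes prove a whole loop is in range
   and check once, up front, instead of on every access. */
static long sum_first_n(const int *a, size_t len, size_t n)
{
    if (n > len)
        abort();                /* single up-front check */
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];            /* no per-access check inside the loop */
    return sum;
}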

It seems to me that this checking is a fundamental part of a modern language, so it cannot simply be removed for speed. So what could be done?

One option I see is an extension of the NX bit concept. Microprocessors have had segmentation faults for ages, so why not add a simple instruction to the chip that allows an array to be dynamically declared as such? I realize PC processors already have mechanisms for doing this, but they were, from what I can see, designed mainly for use by the OS to stop people's programs splatting each other, not for such fine granularity.

My imaginary instruction is called define array, and it could take three parameters: the array start, the array end, and an index. It is not the sort of instruction an assembly programmer would use, but for almost all high-level languages it could mean an enormous speed-up. Imagine a function with non-linear array access - there are obvious optimizations for linear access - like a sort routine:

Old school:

load array start
begin loop
bounds check (4 instructions)
access value
end loop

New style:

declare array
begin loop
access value
end loop
free array

That's a lot of instructions saved by one relatively simple instruction.
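To make the idea concrete, here is one way of reading that declare/free model in C, using made-up intrinsics (__define_array and __free_array are pure invention - no real CPU or compiler offers them - and a plain software check stands in for what the hardware would do on every access):

#include <stddef.h>
#include <stdlib.h>

/* Pretend bounds registers, loaded by the imaginary instruction. */
static const int *bound_lo, *bound_hi;

static void __define_array(const int *start, const int *end)
{
    bound_lo = start;           /* one instruction at loop entry */
    bound_hi = end;
}

static void __free_array(void)
{
    bound_lo = bound_hi = NULL; /* one instruction at loop exit */
}

static int checked_access(const int *p)
{
    /* the hardware would do this comparison for free on every load/store */
    if (p < bound_lo || p >= bound_hi)
        abort();
    return *p;
}

static long sum(const int *a, size_t n)
{
    __define_array(a, a + n);
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += checked_access(a + i);
    __free_array();
    return s;
}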

Now combine the instruction with strings and buffers in C. All of a sudden even the lower-level languages are benefiting from full boundary checking. Suddenly IE, IIS and the like might not look as vulnerable as they were before. The NX bit starts to look like a bit of a missed opportunity...
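For a concrete picture of the sort of bug this would catch, here is the classic unchecked-copy pattern (purely illustrative, not taken from any real product):

#include <string.h>

/* Copying attacker-controlled data into a fixed buffer with no check -
   the pattern behind a great many buffer-overflow holes. A hardware
   bounds check covering buf..buf+64 would trap the overflowing write. */
void copy_input(const char *input)
{
    char buf[64];
    strcpy(buf, input);   /* writes past buf whenever input is longer
                             than 63 characters */
}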

I must say that I haven't looked at assembler for years now, so please correct me if such an instruction is already commonplace. Adding huge numbers of SIMD instructions is sexier, but just maybe there's still an instruction out there which would benefit the average user, and make the poor C developer's life a little more straightforward...
