Archive for September, 2004

Movitz – a Lisp OS platform

September 27, 2004

Wow. You hear a lot of rumours about Lisp-based operating systems, but you rarely come across a project that appears active. Movitz looks like a promising *active* LispOS project! Its goal is to provide a development platform for running CL-based kernels on x86 PCs “on the metal”. I guess these restrictions make it doable. I’d prefer to see CL or Scheme combined with the Linux or FreeBSD kernel, so that you could still have a viable, usable system and also experiment with OS ideas using Lisp. In this respect, Schemix looks interesting. It’s a Scheme interpreter (based on TinyScheme) patched into the Linux kernel. I haven’t looked at it closely, but I imagine its interpreted nature means it’s probably only useful for early prototyping of ideas, and certainly for exploring and learning about the kernel.

WCL and shared libraries

September 27, 2004

Just read about WCL, a Common Lisp (CL) implementation I wasn’t aware of. The paper describes how CL is compiled to C and linked into a shared library, which makes for a memory-efficient delivery environment: CL applications share code via shared libraries, including the core system and libraries. I missed whether the CL compiler is available at runtime, which would be a drawback if it isn’t. Many problems were solved, but a few remained: relocating data with embedded pointers in the shared library (causing slower startup times), no generational GC, a compiler that could be more sophisticated, and no thread support. The project appears to be stalled.

I wonder whether other CL implementations such as GCL and ECL using the CL->C method are able to provide sharing through shared libraries?

Java programs have problems similar to the ones WCL attempts to solve for CL. When the same Java program is loaded by separate JVMs (in different processes), they don’t share any code, i.e. the classes will be JIT-compiled multiple times and stored in memory multiple times. I believe this problem occurs even within the *same* JVM when the class files are loaded via different class loaders! I’ve yet to confirm that, though. This is one of the reasons why Java applications take up a lot of memory. It seems to be a problem with 1.4.2, anyway; perhaps 1.5 fixes it. Microsoft’s CLR deals with this situation by making the assembly the unit of deployment and having each assembly contain a version number. I imagine that with the CLR (and other CLI implementations), each assembly is only compiled once within the same process. This doesn’t solve the “multiple processes running CLRs” problem, though I believe there is an ahead-of-time option which stores compiled assemblies in an on-disk cache – this would likely solve the problem (as long as the compiled assemblies are loaded into shared memory like a shared library).
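Here’s a quick sketch of the per-class-loader duplication I suspect above (the class and names are mine, invented for illustration): compile a trivial class at runtime, then load the same .class file through two independent class loaders. The JVM treats the results as two distinct runtime classes, so at least the class metadata exists twice; whether the JIT-compiled code is also duplicated is an implementation detail I haven’t verified.

```java
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class LoaderDemo {
    // Compile a trivial class, then load the same .class file through two
    // independent class loaders; returns the two resulting Class objects.
    static Class<?>[] loadTwice() throws Exception {
        Path dir = Files.createTempDirectory("loaderdemo");
        Files.write(dir.resolve("Hello.java"), "public class Hello {}".getBytes());

        // Compile with the in-process compiler (requires a JDK, not just a JRE).
        ToolProvider.getSystemJavaCompiler()
                .run(null, null, null, dir.resolve("Hello.java").toString());

        URL[] urls = { dir.toUri().toURL() };
        // null parent = bootstrap loader only, so neither loader delegates to
        // the other: each one *defines* its own Hello from the same bytes.
        Class<?> c1 = new URLClassLoader(urls, null).loadClass("Hello");
        Class<?> c2 = new URLClassLoader(urls, null).loadClass("Hello");
        return new Class<?>[] { c1, c2 };
    }

    public static void main(String[] args) throws Exception {
        Class<?>[] cs = loadTwice();
        System.out.println(cs[0] == cs[1]); // false: two distinct runtime classes
        System.out.println(cs[0].getName().equals(cs[1].getName())); // true: same name
    }
}
```

Class identity in the JVM is (loader, name), not just name – which is why the two Class objects above are unequal even though the bytes on disk are identical.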

Categories: Programming

CPU Scheduling

September 15, 2004

I’m slowly working my way through the cs162 lectures from Berkeley. Just watched the CPU Scheduling lecture. I had the realisation that CPU scheduling is “just” resource sharing (der), like shoppers sharing the checkout operators on their way out of the store. Alan talked about optimal scheduling algorithms for “perfect” situations. I think FIFO was provably optimal when considering time to completion, and SRTF (shortest remaining time first) was provably optimal when considering average response time. FIFO works when all jobs are the same length; SRTF is more general, but you have to guess how long a job will run. I had the thought that if there are optimal schedulers for certain situations, then wouldn’t it be great to be able to specify which scheduling algorithm to use for certain processes? You could even allow user-designed schedulers to which you assign your jobs. Then you could have a hierarchy of schedulers and a super-scheduler that moves jobs/tasks between the different schedulers… or you could use the lottery algorithm that Alan talked about towards the end of the lecture. It has nice properties like avoiding starvation (in particular of long-running tasks) and easy-to-understand fairness. Anyways, I still think it would be great to have a Lisp-based operating system, perhaps as a layer above Linux (cause you don’t want to write all those device drivers, do you), that would allow experimentation with custom scheduling algorithms and hierarchies of schedulers.
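The lottery idea is simple enough to sketch in a few lines of Java (the task names and ticket counts are made up, and this ignores preemption, ticket transfers and all the other refinements from the lecture): each task holds some tickets, and every scheduling decision draws a ticket uniformly at random.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class LotteryScheduler {
    private final Map<String, Integer> tickets = new LinkedHashMap<String, Integer>();
    private final Random rng;

    public LotteryScheduler(long seed) { this.rng = new Random(seed); }

    public void add(String task, int numTickets) { tickets.put(task, numTickets); }

    // Pick the next task to run: draw a ticket uniformly at random, so a
    // task's chance of running is proportional to its ticket count. Any task
    // holding tickets has a nonzero chance on every draw, so none can starve.
    public String next() {
        int total = 0;
        for (int t : tickets.values()) total += t;
        int draw = rng.nextInt(total);
        for (Map.Entry<String, Integer> e : tickets.entrySet()) {
            draw -= e.getValue();
            if (draw < 0) return e.getKey();
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        LotteryScheduler s = new LotteryScheduler(42);
        s.add("editor", 75);  // interactive job: lots of tickets
        s.add("backup", 25);  // long-running batch job: few tickets
        Map<String, Integer> runs = new LinkedHashMap<String, Integer>();
        for (int i = 0; i < 10000; i++) {
            String task = s.next();
            runs.put(task, runs.containsKey(task) ? runs.get(task) + 1 : 1);
        }
        System.out.println(runs); // roughly 3:1 in favour of the editor
    }
}
```

The fairness property falls straight out of the representation: give a task twice the tickets and it gets (in expectation) twice the CPU.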

The other potentially interesting thing about a LispOS is that you could do away with reserving stack space for each thread. One of the problems with having lots of processes (and threads) is that they consume a lot of memory: even if each thread only gets a few kilobytes, it adds up quickly. With Lisp you needn’t use the stack to hold procedure activations; you could put them in the garbage-collected heap (actually you might want to optimise that a bit and put them in a special “stack” heap, and only move/link them into the gc-heap if necessary). This way threads only take as much “stack” as they need, and “stack space” grows as required (just as the heap grows as required). Potential problems would be the efficiency of the gc-heap – particularly memory allocation speed – and slightly larger activation frames due to embedded pointers.
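To make the heap-allocated activation idea concrete, here’s a toy sketch in Java rather than Lisp (the frame layout and names are invented): activation records become ordinary heap objects driven by an explicit loop, so the “stack” depth is bounded by the heap, not by a fixed per-thread reservation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class HeapFrames {
    // An activation record for a recursive sum, allocated on the GC heap
    // instead of the native thread stack.
    static final class Frame {
        final long n;
        Frame(long n) { this.n = n; }
    }

    // sum(n) = n + sum(n-1), driven by an explicit heap-backed "stack" of
    // Frame objects. Depth is limited by heap size, which grows on demand,
    // rather than by a fixed stack reservation made at thread creation.
    static long sum(long n) {
        Deque<Frame> frames = new ArrayDeque<Frame>();
        for (long i = n; i > 0; i--) frames.push(new Frame(i)); // build activations
        long acc = 0;
        while (!frames.isEmpty()) acc += frames.pop().n;        // unwind them
        return acc;
    }

    public static void main(String[] args) {
        // A million "activations" – naive recursion this deep would blow a
        // default Java thread stack, but heap frames handle it fine.
        System.out.println(sum(1_000_000L)); // 500000500000
    }
}
```

The downside shows up here too: each Frame carries an object header and is allocated individually, which is exactly the “slightly larger frames, allocation speed matters” trade-off mentioned above.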

Categories: Programming

Misunderstandings about closures

September 15, 2004

Even the “big names you know” in the Java software development community can make mistakes when it comes to closures.

Gavin King thinks that closures wouldn’t work in Java because of checked exceptions. However, since the use of checked exceptions is optional, you can use “closures” (anonymous inner classes) pretty well if you either don’t use checked exceptions or wrap them in unchecked exceptions (like the JDBC template in Spring). I was wondering if it would be possible to have your cake and eat it too on this point with Java 1.5 generics – another poster says it is possible to parameterise on the checked exceptions. I hope that’s true because that’s the best of both worlds – “closures” and checked exceptions – or at least living life without wrapping every last damn checked exception in an unchecked one ;-).
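Here’s what parameterising on the checked exception might look like in 1.5 – a sketch of my own, not code from the linked thread, and the `Block` interface and `invoke` helper are invented names:

```java
import java.io.IOException;

public class ThrowingClosures {
    // A "closure" interface parameterised over its checked exception type,
    // so callers neither lose checked exceptions nor have to wrap them.
    interface Block<T, E extends Exception> {
        T run() throws E;
    }

    // A generic caller that re-throws exactly the block's declared exception
    // type: no wrapping in RuntimeException required.
    static <T, E extends Exception> T invoke(Block<T, E> block) throws E {
        return block.run();
    }

    public static void main(String[] args) {
        // An anonymous inner class standing in for a closure.
        Block<Integer, IOException> reader = new Block<Integer, IOException>() {
            public Integer run() throws IOException {
                return 42; // a real implementation might read from a stream
            }
        };
        try {
            System.out.println(invoke(reader)); // 42
        } catch (IOException e) { // the checked type survives through invoke()
            e.printStackTrace();
        }
    }
}
```

The compiler infers `E` at each call site, so a block that throws nothing checked can declare `RuntimeException` and callers of `invoke` need no try/catch at all. The main limitation is that you get one exception type parameter, not an arbitrary throws list.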

James Strachan commented that closures have made it into C# 2.0. Another poster correctly pointed out that this is just *not* the case. I can understand why you’d think that – you really have to read between the lines in those Microsoft articles ;-). With all the good .NET stuff coming out of Microsoft lately, I’m surprised they didn’t get this right in C# 2.0. I much prefer their implementation of generics – it’s not based on type erasure like Java’s.

Martin Fowler (if you follow the link through to his article) is right on the money. Closures are good. Lisp is good. Ruby is good. Smalltalk is good. You gotta love Martin.