The car is running a stripped-down Linux based on Gentoo and Busybox, with a minimal and heavily customized collection of system services. Most of the custom software is now written in Perl, which has been a huge boost to my productivity. The graphics are OpenGL on Xorg, running without a window manager. It all runs comfortably on a FitPC-3 with a 1 GHz, 6.4-watt processor.


When I first started the project I was writing the graphics in C++, which seemed (and probably is) the logical choice. However, brainstorming ideas in C++ is somewhat tedious, and it was using up a lot of my somewhat-scarce free time. I had binary protocols to pass the microcontroller data to the front-end, and every change required a recompile before I could see the result. Also, resource tracking and caching can be a pain in C++, and I wanted a quicker way to add and remove graphics and fonts as needed.

At work, I use Perl to write web-apps. It is a great language to quickly toss around irregular data structures, and it has a wealth of modules to draw from, many of which are optimized underneath with C code. So, to bring the development speed of my personal projects up to what I enjoy regularly at work, I decided to scrap everything and rewrite it all in Perl.

The decision worked out quite well. I now pass all my data over sockets as JSON, I have a "resource manager" which can cache things like fonts and images, and my API is so flexible and powerful that I can add new animated widgets to the display in under an hour. That's still not quite as fast as authoring it in Flash, but it's a lot better than when I had to define a bunch of C++ objects and pre-register textures/fonts in the startup code.
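The resource manager boils down to a memoizing cache in front of a set of loaders. This is just a minimal sketch of the idea; the package and method names here are hypothetical, not my actual API:

```perl
package ResourceManager;
use strict;
use warnings;

sub new {
    my ($class, %loaders) = @_;
    # %loaders maps a resource type ("font", "image", ...) to a coderef
    # that knows how to load that kind of resource from disk.
    return bless { loaders => \%loaders, cache => {} }, $class;
}

sub get {
    my ($self, $type, $name) = @_;
    # Load on first request; every later request returns the cached object.
    $self->{cache}{$type}{$name} //= $self->{loaders}{$type}->($name);
    return $self->{cache}{$type}{$name};
}

1;
```

A widget can then just call `$rm->get(font => 'DejaVu')` in its render path without worrying about whether the font has been loaded yet.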

I have a main loop which runs through the list of widgets, repainting each into the back buffer. I make a blocking call to glXSwapBuffers, causing the main loop to run in sync with the refresh of the monitor (60Hz) as long as I can keep each iteration under 15ms. Each widget is defined as a Perl module, with an update() method which advances its animations and a render() method which makes OpenGL calls. To reduce the number of OpenGL calls I use OpenGL display lists, so in many cases the render() method is just setting a color and calling a list. This makes the Perl version nearly as fast as the C++ version was.
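The shape of that loop is roughly the following (a sketch, with the OpenGL and GLX calls reduced to comments, since the real methods issue GL commands):

```perl
use strict;
use warnings;

# Run one frame: advance every widget's animations, then repaint each
# one into the back buffer.
sub run_frame {
    my ($widgets, $dt) = @_;
    for my $w (@$widgets) {
        $w->update($dt);   # advance animation state by $dt seconds
        $w->render();      # issue GL calls, often just glCallList(...)
    }
    # The real loop ends each frame with a blocking glXSwapBuffers,
    # which throttles it to the monitor's 60Hz refresh.
}
```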

For my data feeds from the microcontroller and GPS I simply set the sockets to non-blocking mode and then read all available messages on each iteration of the main loop. It's not as fancy as event-driven programming, but it has the advantage that if multiple messages are waiting, I only perform my processing on the most recent one.
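The "keep only the newest message" logic is just buffer draining. A sketch (names are illustrative, not my actual code): each loop iteration sysread()s whatever bytes are available into a buffer, then this strips off every complete line and returns the last one.

```perl
use strict;
use warnings;

# Remove all complete newline-terminated messages from the buffer and
# return only the most recent one; any partial message stays buffered
# for the next main-loop iteration.
sub latest_message {
    my ($buffer_ref) = @_;
    my $latest;
    while ($$buffer_ref =~ s/^([^\n]*)\n//) {
        $latest = $1;    # each complete line replaces the previous one
    }
    return $latest;      # undef if no complete message has arrived yet
}
```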

My C++ display program was using about 15% CPU, and my Perl display uses 30%. So, Perl is indeed slower, but not by enough to matter, since I get 60 FPS either way. If I need more performance later I can also move more of the Perl rendering code into C modules linked into the Perl program, while still retaining the overall flexibility of having it in a Perl script. Perl actually makes this really easy with the Inline::C module. I'm already using it to integrate the FTGL library (C++), which is one of the best ways to get font support in OpenGL.

Application State

(coming soon)


Abusing Gentoo

While "make it run Linux" is an obvious choice, picking a distro is harder. One of the key concerns for the car computer is that it could power off at any moment, and I need really high reliability without direct console access. (Yes, the instrument cluster can act as a console, but it bounces off a mirror and doesn't show the full height of the screen, so it's really annoying to attempt.) I also wanted the fastest possible boot time, and low RAM usage by system services.

I have quite a bit of experience with embedded systems at work, so I decided to roll my own. I started with Gentoo. The one great feature of Gentoo is that you can configure your packages to the full degree that the author of each package allows. For instance, I got rid of PAM, DBus, GTK, and a dozen other features that most distros would give you by default. I built it inside a chroot so that I could test out the environment I was building, but for the finished product I run a script that collects only the packages that actually need to be on the car computer. My script also generates the required symlinks and special directories I want, and writes a completely fresh set of config files. In other words, I'm only using Gentoo for the purpose of building a set of binaries, and then I make my own system using the Gentoo chroot as a sort of package repository.


While a widely-compatible kernel is a good idea for hardware independence, the way to get the fastest boot time is to eliminate all the modules you don't need. So, I went through the tedium of listing every module used by my FitPC and then building a kernel containing only those. I also made a list of all the firmware blobs I was using so that I could compile them into the kernel and get faster graphics device initialization.

My end result is a kernel that starts running my init script at 2 seconds of uptime, which is about as good as I'm going to get there.


As an embedded system, I want everything to be read-only so that a sudden power loss doesn't corrupt any of the important system files. I set up my drive with a "rescue" partition (busybox, and a few other tools which could help recover the system), a "system" partition that is ext4 but always mounted read-only, and a "data" partition formatted with nilfs2 and written as seldom as possible.

I was originally going to have the system partition be formatted as squashfs, but for development it is much faster to rsync the new image onto the existing filesystem than to completely overwrite the partition. Also, it lets me update the system with a single reboot, where overwriting the system partition would require a reboot into rescue mode first. (I actually have a boot option to overwrite the system while running from the rescue partition, but now I just use rsync for the convenience.)

In the future, I would like to set it up so that there are two system partitions, each with a squashfs, and have the bootup script use whichever one has the newest timestamp. For updates I would just overwrite the one that wasn't mounted.

For the data partition, nilfs2 seemed like a great idea. It only ever writes new blocks when making changes to files or directories, and then reclaims unused blocks asynchronously. However, nilfs2 is not *fully* production-ready, though it has been close enough for this use case so far. The other problem is that reclaiming unused space requires a userspace daemon, which can sometimes be rather disk-intensive as it runs. Newer versions of nilfs2's garbage collector can be configured to make much more conservative use of the disk while reclaiming space, but I don't have those versions of the kernel module and tools yet.

Startup Scripts

I wanted the fastest possible startup for the car (since it cold boots every time I get in), so I ruled out any sort of pre-packaged init scripts. Instead I just made a single script that gets the filesystems mounted as fast as possible, configures a few other basic things in the tmpfs (/run), and then execs into the service supervisor (daemonproxy).

Here's my init script, for anyone wanting to create a minimal startup sequence for an embedded system: + rc.startup

My /sbin/init is a symlink to this script.

Service Monitoring

Classic PID files with init scripts are a really horrible way to manage background jobs in a Unix system. A much better approach, more in tune with the design of Unix, is to have one parent process which starts the daemons as child processes, and then it gets notified if they exit for any reason. It also gets to capture the reason the daemon exits (exit code, or signal number) which would be lost using the typical detached daemon design. Also, it can cleanly connect a daemon's stdout/stderr to a logger that is guaranteed to catch everything the daemon writes. If you have a backgrounded daemon writing to syslog you'll never see the dump printed by libc when the malloc heap is corrupted.
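The core of that design fits in a few lines. This is a toy sketch in Perl (not daemonproxy itself) showing what the detached-daemon style throws away: the parent forks the daemon and waitpid() hands back the exact exit code or killing signal.

```perl
use strict;
use warnings;

# Run one command as a supervised child and report how it died.
# A real supervisor would loop, restarting the child and logging this.
sub supervise_once {
    my @cmd = @_;
    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {
        exec @cmd or exit 127;   # child becomes the daemon
    }
    waitpid($pid, 0);            # parent is notified the moment it exits
    return { exit_code => $? >> 8, signal => $? & 127 };
}
```

The supervisor can also hold the write end of a pipe wired to the child's stdout/stderr, which is how a logger is guaranteed to catch everything the daemon prints.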

For a little background on service supervisors, see Why another supervision suite?. While there are lots of nice minimal supervisors to choose from, I wrote my own, called daemonproxy, and so naturally I'm using it. As soon as my init script passes control to daemonproxy, it immediately launches all the daemons, with no dependency checking. The individual daemon runscripts get to decide whether they want to wait for some resource to become available.

Here's my daemonproxy config file: + init.conf

and a few example run scripts: + rc.xorg , + rc.xorg-log , + rc.dash

Ironically I'm not using most of daemonproxy's special features here, but I am using the feature where I can direct multiple related programs into the same logger, which is something I don't believe any other supervision tool allows.


As you can see in my daemonproxy config above, I'm taking a very simple approach to logging. I just have each daemon write to stdout/stderr, and then I pipe them to a self-rotating logger program which I have writing files to tmpfs. So basically just logging to RAM. For many of the scripts which generate little or no logging output, I just direct them into the kernel's ring buffer (/dev/kmsg). Why run a logger to record messages to a file in RAM when the kernel already does this?

So far I haven't run into a program that requires syslog, though of course my list of services is pretty small. So, I don't run a syslog daemon at all.

Eventually I plan to write a log scraper, which looks for any unexpected messages and writes them to disk (at long intervals, to prevent writing the disk too often) so I can review them later. But I haven't needed it much so far...


Linux hotplug is a topic of its own... when I was new to it, it seemed pretty arcane and bizarre, but now it seems pretty simple. I should probably write a full article on this, but for the scope of this project I'll just say:

  • Hotplug is where the kernel spawns a utility program to handle some runtime event related to hardware.
  • Various Linux distros have made a complicated web of scripts to dispatch these events, but all you really need is "mdev" from busybox.
  • If you symlink /sbin/hotplug to mdev, you're done. Or, you can customize the path in /proc/sys/kernel/hotplug...
  • The hotplug system handles hardware events, module events, and firmware events. If you customize it with a script, don't forget to handle each of these.
  • Some module events tell you about hardware IDs which you need to pass to modprobe to load the appropriate driver.
  • Some module events tell you a driver has just loaded and you can create new device nodes in /dev to talk to that driver.
  • The firmware loading messages tell you that a module needs a firmware blob in order to continue loading. The handler must go find the appropriate file and copy it to a filehandle the kernel gives you.

After investigating mdev (whose documentation is weak, consisting mainly of one big example), I figured out that I could get it to take care of my minimal needs with just a few small entries in the config file:

$MODALIAS=.* 0:0 660 @modprobe "$MODALIAS"
$SUBSYSTEM=net	0:0	660	*/lib/hotplug/net
card[0-9]*	0:0	660	>dri/
ttyACM.*	0:0	660	@/lib/hotplug/uctl
ttyUSB.*	0:gps	660	*/lib/hotplug/gps

The first line handles the detection of new hardware by passing it to modprobe. The second line calls my custom script + hotplug-net any time anything in the 'net' subsystem happens. The third line moves /dev/card* into /dev/dri/card* where Xorg expects to see them. The fourth and fifth lines are lame detection rules for the serial devices created by the microcontroller and GPS. mdev doesn't provide enough metadata to recognize the devices directly, so I just use the device name and pass it off to my own scripts, where I can do deeper detection logic if needed: + hotplug-uctl + hotplug-gps


The microcontroller I'm using is the Atmel AVR AT90USB646. This is the chip on the original "Teensy++ 1.0" board from PJRC. There are some better options out there now (especially the much more capable ARM chips on the Teensy 3), but this is the one I invested the time to learn, so I've been continuing with it for now.

The program is written in low-level C, compiled on Linux with "avr-gcc", and pushed to the board with the Teensy desktop app (provided by PJRC) over the same USB port. The USB port serves dual purposes, both programming the chip and connecting the chip as a USB peripheral, which is nice because you just connect a single wire and can then quickly test iterations of the software without needing to plug/unplug anything.

I used the example code on the PJRC site to implement a USB Serial device (originally USB HID) and gave it some custom USB IDs so that I can detect it with udev rules. It sends a stream of ASCII messages describing the current values of its variables. I originally sent binary records, but the time it took to write tools to work with those records was eating too much of my limited development time. The new text message protocol has been a breeze to work with.
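On the host side, a text protocol like this needs almost no tooling. Suppose (hypothetically; the real message format may differ) each message is a line like "rpm=2450". Then the Perl client can merge any message straight into a state hash:

```perl
use strict;
use warnings;

# Apply one "name=value" message to the client's state hash.
# Because deltas simply overwrite prior values, the same code handles
# both the initial full dump and the later change messages.
sub apply_message {
    my ($state, $line) = @_;
    chomp $line;
    my ($name, $value) = split /=/, $line, 2;
    return unless defined $name && defined $value;
    $state->{$name} = $value;
}
```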

The device is powered externally, so it can sit and watch for a host on the USB bus. Since it is configured as a USB Serial device, it also knows when a program on the host has opened the serial device node and when the program disconnects. This lets me dump all variables when the client first opens the device, and then send only the changes after that. If the client disconnects and reconnects, I re-dump the variables.

The program is written using the clock and the scheduler described on my AVR software page. The main loop sets up the periodic events and then just runs the scheduler loop. Several other events get scheduled by interrupt handlers. Having the interrupt handler queue a job and wake the main loop lets the code run in non-interrupt context. If an interrupt fires again (or multiple times) while the job is still running, the job simply stays queued to start again as soon as it finishes. This turns out much better than missing a run of the job because it ran a little too long, or playing games with the interrupt-enable bits to let interrupts preempt each other.

I run a recurring task every 5ms which checks each variable against its previous value and generates USB Serial packets to tell the client about any that changed. Running this task independently of the actual data collection ensures that the client gets regular updates regardless of the latency of the analog-to-digital converter or the interrupts from the tach or wheel sensor, and avoids saturating the USB connection.
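The firmware is C, but the logic of that 5ms scan is simple enough to sketch here in Perl (matching the rest of this page's examples; names are illustrative):

```perl
use strict;
use warnings;

# Compare each current variable to its last-sent value and return
# "name=value" messages for only the ones that changed, updating the
# last-sent record as we go.
sub changed_vars {
    my ($current, $last_sent) = @_;
    my @msgs;
    for my $name (sort keys %$current) {
        my $val = $current->{$name};
        next if defined $last_sent->{$name} && $last_sent->{$name} eq $val;
        $last_sent->{$name} = $val;
        push @msgs, "$name=$val";
    }
    return @msgs;
}
```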

Next, every time a tach pulse comes in (indicating a cylinder just fired) I record the timestamp, and then re-calculate the average interval between tach pulses, giving me the RPM. Likewise with the interrupts from the wheel sensor, to calculate speed.
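That calculation reduces to a one-liner once you know the pulses-per-revolution. A sketch (the pulses-per-revolution value depends on the engine and ignition, so it's a parameter here, and microsecond timestamps are an assumption):

```perl
use strict;
use warnings;

# Average the intervals between recent tach pulse timestamps (in
# microseconds) and convert to revolutions per minute.
sub rpm_from_pulses {
    my ($timestamps_us, $pulses_per_rev) = @_;
    return 0 if @$timestamps_us < 2;
    my $span_us  = $timestamps_us->[-1] - $timestamps_us->[0];
    my $interval = $span_us / (@$timestamps_us - 1);  # avg us per pulse
    # one revolution takes $pulses_per_rev * $interval microseconds
    return 60_000_000 / ($interval * $pulses_per_rev);
}
```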

The chip can only measure one analog channel at a time, so I also have a task that records the result and switches analog channels each time a measurement completes. (The ADC raises an interrupt each time it finishes a measurement.) This is the task that updates the voltage, oil pressure, fuel level, and temperature variables.

Plans for the near future are the ability for the host to ask the microcontroller to shut off power to things (including the host itself), and for the microcontroller to re-power the host when it sees someone open the door.


I didn't need to do much here. The gpsd project does an amazing job of auto-detecting hardware, and its protocol is a simple stream of JSON. I mounted a USB unit on the rear louver of the car, ran the wire to the computer, and have daemonproxy run gpsd; the dash rendering program then connects to the gpsd socket. However, I did decide to link it to the hotplug system rather than hard-code the device name, in case the USB initialization comes up in the wrong order. You can see the line above in the hotplug scripts that tells gpsd each time a GPS is plugged or unplugged.
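Consuming that stream is trivial: after connecting to gpsd's socket (port 2947) and sending ?WATCH={"enable":true,"json":true}, gpsd emits one JSON object per line, and the "TPV" reports carry position and speed. A minimal sketch of the per-line handling (field names are from gpsd's protocol; the surrounding socket code is omitted):

```perl
use strict;
use warnings;
use JSON::PP 'decode_json';

# Decode one line of gpsd output; return a fix only for TPV reports.
sub handle_gpsd_line {
    my ($line) = @_;
    my $msg = decode_json($line);
    return unless ($msg->{class} // '') eq 'TPV';
    # lat/lon/speed may be absent until the receiver has a fix
    return { lat => $msg->{lat}, lon => $msg->{lon}, speed => $msg->{speed} };
}
```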


(coming soon)

Voice Synthesis

(coming soon)

Voice Recognition

(coming soon)