Arkanis Development


Projects

My plans for world domination.

This page lists some of my more interesting projects. I've been programming for several decades now and a lot of stuff has accumulated along the way. I usually try to write about the major lessons or insights of a project. I'm not proud of every project listed here, even if a lot of time went into it. I still think it's worth sharing the experience, but you'll have to forgive the occasional sarcasm or rant.

Take a look around. If you have questions about a project or just want to write me a few lines, write a comment on the project's blog post or send me a mail (the address is on the profile page).

sdt_dead_reckoning.h library to create signed distance fields

Status: maintained Tags: c, graphics, sdf

Another small single header file library. This time to create a signed distance field out of a black and white image.

It's an implementation of the excellent paper "The dead reckoning signed distance transform" by George J. Grevera. See the blog post for more details.
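A minimal usage sketch of the idea: feed in an 8-bit black and white mask and get a float distance per pixel back. The function name, parameters and the implementation define below are assumptions for illustration, check the header itself for the real API.

    /* Hypothetical usage sketch, names and signature are assumed, not copied from the header. */
    #include <stdio.h>
    #include <stdlib.h>

    #define SDT_DEAD_RECKONING_IMPLEMENTATION   /* assumed single-header convention */
    #include "sdt_dead_reckoning.h"

    int main() {
        unsigned int width = 256, height = 256;
        unsigned char* mask = malloc(width * height);                     // black and white input image
        float* distance_field = malloc(width * height * sizeof(float));

        // Fill the mask with a simple filled rectangle, just for demonstration.
        for (unsigned int y = 0; y < height; y++)
            for (unsigned int x = 0; x < width; x++)
                mask[y * width + x] = (x > 64 && x < 192 && y > 64 && y < 192) ? 255 : 0;

        // Assumed API: everything >= threshold counts as "inside", the result is the
        // signed distance to the nearest contour for every pixel.
        sdt_dead_reckoning(width, height, 128, mask, distance_field);

        printf("distance at the center: %f\n", distance_field[128 * width + 128]);
        free(mask);
        free(distance_field);
        return 0;
    }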

iir_gauss_blur.h library and cli tool to blur images

Status: maintained Tags: c, graphics, effects

A small single header file library and command line utility to apply a Gaussian blur to images.

I've implemented the algorithm described in the paper "Recursive implementation of the Gaussian filter" by Ian T. Young and Lucas J. van Vliet a few years ago. In this project I just took the old code, cleaned it up and published it as a single header file library. The CLI tool was more or less a by-product of the investigation into the sigma parameter of the blur (see blog post).
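For orientation, a hypothetical usage sketch: blur an interleaved 8-bit image in place with a given sigma. The function name, parameter order and the implementation define are assumptions here, the real header may differ.

    /* Hypothetical usage sketch, not copied from the real header. */
    #include <stdlib.h>

    #define IIR_GAUSS_BLUR_IMPLEMENTATION   /* assumed single-header convention */
    #include "iir_gauss_blur.h"

    int main() {
        unsigned int width = 640, height = 480;
        unsigned char components = 3;                              // interleaved RGB
        unsigned char* image = calloc(width * height * components, 1);

        float sigma = 5.0f;                                        // blur strength
        // Assumed API: one call runs the recursive filter over all rows and columns.
        iir_gauss_blur(width, height, components, image, sigma);

        free(image);
        return 0;
    }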

js2plot tool and library to plot math code written in JavaScript

Status: maintained Tags: javascript, math

A small tool with the purpose of plotting mathematical functions. But in contrast to most other tools the mathematical functions are written directly in JavaScript. I wrote it for my own needs.

Drawing the function plots wasn't difficult. But drawing the grid and axes numbers as well as implementing the zooming consumed more time than I expected. Straight zooming is easy, but zooming into the position of the mouse cursor can be confusing to implement. I've done it a few times before but it somehow confuses me every time. So much so that I've finally written it down so I can look the math up in the future. Maybe I'll write a blog post about that since I haven't found a good article about it yet.
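The core of it is short (my own summary of the math, not js2plot's actual code): the world coordinate under the cursor has to project onto the same screen position before and after the scale changes.

    /* Minimal sketch of zoom-at-cursor, using screen = world * scale + offset. */
    #include <stdio.h>

    typedef struct { float scale, offset_x, offset_y; } view_t;

    void zoom_at(view_t* view, float cursor_x, float cursor_y, float zoom_factor) {
        // World coordinate currently under the cursor: world = (screen - offset) / scale
        float world_x = (cursor_x - view->offset_x) / view->scale;
        float world_y = (cursor_y - view->offset_y) / view->scale;

        view->scale *= zoom_factor;

        // Choose the new offset so the same world coordinate lands on the cursor again.
        view->offset_x = cursor_x - world_x * view->scale;
        view->offset_y = cursor_y - world_y * view->scale;
    }

    int main() {
        view_t view = { 1, 0, 0 };
        zoom_at(&view, 400, 300, 2);   // zoom in by 2x at screen position (400, 300)
        printf("scale %f, offset %f %f\n", view.scale, view.offset_x, view.offset_y);
        return 0;
    }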

An unexpected complication came from the CSS side of things. The design is very basic but the textbox with the JavaScript code should define the width of the sidebar. When the user resizes it the width of the sidebar should change accordingly. Sounds easy, no? Trouble is that all the other block level elements in the sidebar also try to extend the width of the sidebar to fit their content in there. So how to tell them to use the available space but not to extend it? I hoped that the stuff around max-content and min-content would provide a nice way to do this but I didn't find any. In the end the (very) old display: table-caption; trick did the job since table captions take the available space but don't extend it. Unfortunately this approach required some ugly HTML wrapper elements. :(

Initially another target was to support panning and zooming on mobile devices (especially tablets). But there still doesn’t seem to be a browser event to handle native pinch-zoom gestures in any way. Instead you have to reimplement the gesture by yourself. Right now I don’t need it so no mobile gestures for now. Maybe in the future.

Arkanis Development v4 personal homepage

Status: finished Tags: arkanis-development, webdesign, simplicity, php, html, css, javascript

A long overdue (kinda) overhaul of my website. I've had a design prototype lying around since 2012 (Scribbles) but I never had the time to implement it. So this time I didn't want to do anything “new” but finally implement stuff that had accumulated over the years. I rewrote large parts of the PHP-based backend, wrote new versions of two designs (each with a mobile variant) and a style switcher to switch between them. CSS evolved a bit since 2012 and the previous design had a few quirks I wanted to resolve (primarily display: table; stuff).

This was a good chance for a deeper dive into Flexbox. Unfortunately I was somewhat disappointed. At first it looked quite nice but there are a few rough edges that make many scenarios unfeasible. Absolute positioning within a Flexbox was surprisingly painful and you can only force “line breaks” when the container has a fixed size on its main axis. Only Firefox seems to implement manual line breaks within Flexbox. That was a major bummer since I wanted to use columns and reordering to handle the differences between mobile and desktop layouts. All in all I got the impression that some details of Flexbox are not really fleshed out yet… despite its age.

This prompted me to look into CSS grids and I was positively surprised. It seems like a well thought out layout model and the combination with position: absolute; is nice and simple. Unfortunately many of my old mobile devices didn’t support it so I had to revert back to Flexbox (and accepted some tradeoffs).

Surprisingly the most interesting technical aspect of the project turned out to be the CSS inline model. For the Scribbles design I wanted to align text lines with the lined paper shown in the background. Each line had to be 32px high or the text and background image wouldn’t align properly. After running into some trouble I started to figure out how CSS calculates the height of each line. Vincent De Oliveira wrote an excellent article about that and thanks to him I could cover most cases. Especially all the fine details of the Scribbles design were quite an interesting challenge. If you ever find yourself wondering why a smaller line-height can lead to taller lines, read that article. ;)

The entire “technical” stuff took me about two weeks. About one week per design. Another large part of the project however was to actually write content (a history of my projects). It took me two more weeks to dig out a few of my more interesting projects and write a bit about them. I can really recommend that experience: When you try to get code running that’s almost 20 years old you start to appreciate certain qualities in code. Self-contained stuff for example. Or a good emulator like DOSBox. :)


Operating systems lecture and exercises at the Stuttgart Media University

Status: finished Tags: teaching

Not my average run-of-the-mill project. I had already supported this lecture for several years while studying at the university (supervising exercises, helping out in the lectures, etc.). During that time the lecture was handed over to another professor and we sat down together and rebuilt it from scratch (but still based on classical operating system literature). Later on, while I was working at the university, I had the chance to give the lecture myself. I took it and gave the lecture for 1½ years until the research project I was paid for was finished.

I rebuilt the lecture from scratch. Benjamin Binder, a good friend of mine, helped a lot with that and wrote most of the early exercises to cover the C basics for the lecture. We also changed the focus of the lecture from building kernels to understanding the hardware basics and using operating system APIs. This was due to a drastic change in audience over the last couple of years. When I started studying, the average number of students in a lecture was about 20 to 30. Half of them were proficient programmers with an apprenticeship and some work experience. Most of the other half were new to programming but understood the basics well and were catching up quickly. By the time I gave the lecture the average number of students was 70 to 90. Political decisions pushed a lot more people into higher education, including many who would be much better suited for an apprenticeship. Over the years the university’s role gradually changed from building upon an apprenticeship towards replacing an apprenticeship. Combined with other unfortunate circumstances within the university this led to an audience with only a handful of proficient programmers (if that many), one half with a basic understanding and the other half with almost no understanding of nor motivation for programming. You can imagine that this leads to some radically different group dynamics. The likelihood of having a proficient student within a student or learning group is quite low, so you can’t rely on mutual support between students as much as before. This assessment is biased by my experience and perspective so take it with many grains of salt.

Traditionally operating system lectures cover topics to build operating systems (mainly kernels). This is interesting for proficient programmers, but for beginners this quickly turns into “Why do I need to know that? Windows or Linux already do all that for me.” That’s if you’re lucky and a brave student actually voices that thought. Many still believe that the university and lectures have a grand plan that makes sense (good but sad joke). So instead of teaching topics that are only interesting for very few students we decided to focus on topics that are relevant for all of them: The mechanisms and APIs of operating systems that make your life as a programmer easier.

Over the years I observed, advised and supervised many student projects. Of course you occasionally get advanced projects, but those students don’t need help with what they’re doing. On the contrary, they usually know much more about their stuff than the university’s staff. I wanted to take the experience from the normal projects and create a lecture that helped students avoid the problems those projects were struggling with. Most problems fell into roughly 3 categories:

The aim of the lecture was to reduce the probability of those problems. We came up with a lecture and exercises covering those topics. Students would actually build a small network chat with TCP (Socket API), a small event loop (poll()), a GUI for the network chat (integrating file descriptors into the GUI event loop) and finally add background threads to a small image gallery to make it responsive and use multiple CPU cores.
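For a taste of the event loop part, a minimal sketch (not the actual exercise code), reduced to a single descriptor. The exercises combined this with the TCP sockets of the network chat in the same pollfd array.

    /* Minimal poll() based event loop: wait for stdin to become readable and echo it. */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main() {
        struct pollfd fds[] = {
            { .fd = STDIN_FILENO, .events = POLLIN }
        };

        for (;;) {
            // Block until at least one descriptor is ready (no timeout).
            int ready = poll(fds, sizeof(fds) / sizeof(fds[0]), -1);
            if (ready == -1)
                break;

            if (fds[0].revents & POLLIN) {
                char buffer[512];
                ssize_t bytes = read(fds[0].fd, buffer, sizeof(buffer));
                if (bytes <= 0)
                    break;   // EOF or error, leave the loop
                write(STDOUT_FILENO, buffer, bytes);
            }
        }
        return 0;
    }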

At first it went pretty well. Students seemed to have fun and liked it. But with successive semesters I had to divert more and more time to the research project I was actually paid for. And that lack of time showed: Without time for proper preparation I couldn’t explain things quite as well and misjudged reactions more often (e.g. explained something once more while everyone was already bored by it). The mental stretch between teaching and writing robust production code for the research project was also an unexpected strain. Especially once I had to focus almost exclusively on the research project it became increasingly difficult to do all those small changes in perspective to explain something from different angles for different learning types.

Still it was a rewarding experience. I’m curious if the plan actually worked out and those errors occurred less often. Unfortunately since I started to work for the university I was given almost no time to actually do something with the students. So I’ll probably never find out, at least not in a way where I can attribute it to the lecture with any certainty.


Slim Hash single-file C library

Status: maintained Tags: c, hashtable

Another single-file library for C99. This time a set of macros to generate hash tables. That’s probably the single feature I miss most in the standard library. Initially I wanted to avoid the code duplication that comes with generating code. But a small footnote in the C spec put an end to that and after several prototypes I reverted to code generation macros.

Anyway, as with Math 3D I was fed up with complex libraries and tried to create a simple one. The performance is ok but not great. I don’t use it for highly performance critical stuff so I haven’t spent time optimizing it. Right now the library uses the murmur3 hash function but also contains the fnv1a hash function in case you want to use that (it’s simpler but was slower in my tests).
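To illustrate the general idea of code generation macros (a simplified sketch of the technique, not Slim Hash's actual API): one macro expands into a typed hash table with its own put and get functions. For brevity this toy version uses linear probing and never grows or shrinks.

    #include <stdlib.h>
    #include <stdio.h>
    #include <stddef.h>

    #define GEN_HASH(name, key_t, value_t, hash_fn)                               \
        typedef struct { key_t key; value_t value; int used; } name##_slot_t;     \
        typedef struct { name##_slot_t* slots; size_t capacity; } name##_t;       \
        static void name##_new(name##_t* h, size_t capacity) {                    \
            h->capacity = capacity;                                               \
            h->slots = calloc(capacity, sizeof(name##_slot_t));                   \
        }                                                                         \
        static void name##_put(name##_t* h, key_t key, value_t value) {           \
            size_t i = hash_fn(key) % h->capacity;                                \
            while (h->slots[i].used && h->slots[i].key != key)                    \
                i = (i + 1) % h->capacity;            /* linear probing */        \
            h->slots[i] = (name##_slot_t){ key, value, 1 };                       \
        }                                                                         \
        static value_t* name##_get(name##_t* h, key_t key) {                      \
            size_t i = hash_fn(key) % h->capacity;                                \
            while (h->slots[i].used) {                                            \
                if (h->slots[i].key == key) return &h->slots[i].value;            \
                i = (i + 1) % h->capacity;                                        \
            }                                                                     \
            return NULL;                                                          \
        }

    static size_t hash_int(int x) { return (size_t)x * 2654435761u; }

    // One macro invocation generates a complete int -> float hash table type.
    GEN_HASH(int_map, int, float, hash_int)

    int main() {
        int_map_t map;
        int_map_new(&map, 64);
        int_map_put(&map, 42, 3.14f);
        float* value = int_map_get(&map, 42);
        printf("%f\n", value ? *value : 0.0f);
        return 0;
    }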

Key to Excellence web-based serious game

Status: finished Tags: webdesign, javascript, html, css, php, json

A two year research project together with the Swiss bank UBS. We were a team of 3, later 4: a game designer and project manager, a story writer who joined us later on, an artist, and me to take care of the implementation. Our artist also coordinated several students that helped out with the character art. All in all a fun if sometimes stressful project. I especially enjoyed the very professional cooperation with our industry partner. Honestly, I rarely experienced such a constructive and positive atmosphere.

Technology-wise the challenges were equally interesting. We wanted to remain compatible with Internet Explorer 10 and the game had to work on terminal servers (to cover that eventuality, not as the main platform). This pretty much established the APIs and technologies that were available to me, and hardware accelerated graphics wasn’t part of those (no usable WebGL without a GPU). This made some aspects easier, some more difficult. I could use all the UI functionality and resource management of browsers. Conversely, creating a large isometric game world with canvas elements and 2D sprites required some serious trickery. Getting it fast enough made the trickery even more complicated (especially on mobile).

We established pretty early that we didn’t have the resources to build an entire engine and editor. Instead we wanted to focus on building the game. Yet it took us quite some time to figure out what kind of game actually made sense to achieve the goals of the project. So for a while I spent my time writing rather abstract code and an editor. Simply because we didn’t know yet what exactly was needed. You usually call that kind of code an engine and that was where the stressful part came from. ;)

I made some rather unconventional choices regarding the source code. Mobile devices simply didn’t have the performance to unravel layers upon layers of engine, framework and library code so I had to minimize that. Some measurements were downright shocking: Fire one event handler and it takes 1ms until your code actually runs. Drawing performance was roughly proportional to the number of pixels drawn (no surprise with CPU graphics) with the positive exception of compositing, which was pretty fast on all platforms. I ended up with performance expectations and techniques mostly similar to a 40 MHz 486 running Windows 95. Buffer your bitmaps (canvases) and blit them together with fast compositing. That would be almost funny if it weren’t so sad.

All this also led to a rather unusual source code structure. I often had small chunks of HTML, CSS and JavaScript code that implemented a specific component of the game. Yet I scattered the code all over the place. HTML and CSS code went into their respective files and the JavaScript code was structured into various functions and classes. You can imagine that this made the code hard to read and I frequently lost my train of thought while navigating the code base. I experimented a bit to improve the situation, including porting a small part of the code base to TypeScript. But this resulted in me patching the Visual Studio Code HTML mode simply to tell it to pick up the proper JavaScript files and settings for type checking the code. I gave up on that avenue because it caused way more problems than it solved for me.

I took a few steps back and thought about what would be useful to me and what I had to work with. Putting the HTML, CSS and JavaScript code of the same component on the same screen would make it easier to see how they relied on one another. That’s what I wanted. I also realized that almost every component existed only once (except e.g. game objects). So no need for classes or instance management. I only needed one character selection dialog, one map into the game world, one main menu, etc. So I started to group the code by components: I simply wrote the HTML code followed by a <style> tag with the relevant CSS rules and a <script> tag with the relevant event handlers. Surprisingly many of the components did fit on one screen again and I could see everything I needed when working on the component. Sometimes I spent hours or even days on the same piece of source code. This also allowed me to use Firefox’s developer tools to edit the rules in the <style> tags with the built-in text editor while having a nice robust realtime preview. This came in pretty handy for more complex dialogs or UI styles.

There was a lot more to the project than that. Quite some interesting two years. But this source code structure was my biggest personal surprise.


Math 3D single-file C library

Status: maintained Tags: c, math

A small single-file library for basic vector and matrix math. I wrote it together with a friend and we wanted to make sure the math is correct and matches the conventions used by OpenGL. For that reason we spent quite some time at the whiteboard researching the algorithms and doing them by hand, rather than copying them from somewhere and fiddling around until it works. Mathematical literature is prone to using different conventions all over the place, especially where indices are concerned. And having 4 eyes reading the algorithm makes catching those details much more likely. We also used pair programming when writing the code and tests. Without that it’s just too easy to lose your head in all those meaningless indices. We also spent some time making the library easy to use and the code easy to read.

Despite having used math like that for many years this was the first time I really understood the more complex parts of it. Especially the matrix inversion and various projection matrices.

minidyndns one file DNS server

Status: maintained Tags: dns, dyndns, ruby

One of the projects where you just needed some excuse. In this case a friend of mine wanted to access one of his computers from the internet but didn’t want to use 3rd party services like DynDNS. Long before that I had wanted to write a small DNS server. In essence it’s just a hash-table lookup with some packet parsing and I wanted to know the details. But there was no good chance to do it during my studies.

So I spent some time here and there reading and implementing RFCs. I wrote it in Ruby and the code is rather basic but gets the job done. I took care to avoid anything that wasn’t part of Ruby’s standard library, so the resulting Ruby script has no dependencies at all. My friend and I have been using it ever since and a few more people picked it up over time.
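To give an impression of what "some packet parsing" means here, a small sketch (in C rather than the project's Ruby, and not the original code): a DNS query is a fixed 12 byte header followed by the queried name as length-prefixed labels.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    // Parses the first query name out of a raw DNS packet into "name".
    // Returns 0 on success, -1 if the packet is too short or malformed.
    // Pointer compression (0xC0 labels) is ignored for brevity.
    int parse_dns_query(const uint8_t* packet, size_t packet_size, char* name, size_t name_size) {
        if (packet_size < 12)
            return -1;

        uint16_t id      = (packet[0] << 8) | packet[1];   // request ID, echoed in the answer
        uint16_t qdcount = (packet[4] << 8) | packet[5];   // number of questions
        if (qdcount == 0)
            return -1;

        // The question section starts right after the header: length-prefixed labels,
        // e.g. "3www7example3org0" for www.example.org, terminated by a zero length.
        size_t pos = 12, out = 0;
        while (pos < packet_size && packet[pos] != 0) {
            uint8_t label_len = packet[pos++];
            if (pos + label_len > packet_size || out + label_len + 2 > name_size)
                return -1;
            if (out > 0)
                name[out++] = '.';
            memcpy(name + out, packet + pos, label_len);
            out += label_len;
            pos += label_len;
        }
        name[out] = '\0';

        printf("query %u for %s\n", id, name);
        return 0;
    }

    int main() {
        // A minimal question packet for "hi.example" (header + QNAME + QTYPE/QCLASS), just for the demo.
        const uint8_t packet[] = {
            0x12, 0x34, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
            2, 'h', 'i', 7, 'e', 'x', 'a', 'm', 'p', 'l', 'e', 0,
            0x00, 0x01, 0x00, 0x01
        };
        char name[256];
        return parse_dns_query(packet, sizeof(packet), name, sizeof(name));
    }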

A look at event loops and multithreading paper

Status: finished Tags: paper, eventloop, c

Event loops are one of the most successful design patterns for interactive and server applications. Yet they don’t get much if any attention. Programmers are seldom aware they’re working with one, and blocking the event loop is probably one of the most common mistakes programmers make. Browsers made that harder by not exposing functions that can block the event loop, but programmers still try anyway. Sometimes by sheer force of raw CPU time.

In this project I took some time to think about event loops and threading. The result was a short paper with about 6 pages exploring multithreading of a single event loop. I wrote it for one of the lectures of Prof. Kriha. If you’re interested in that topic feel free to take a look.

2nd events.mi website to watch live-streams and archived videos

Status: maintained Tags: webdesign, html, css, javascript, php

This website was a complete rewrite of a student project. As a subproject of events.mi a group of 3 students created a website to watch live-streams and archived videos. The result was difficult to handle and the HTML and CSS code was rather brittle. Most decisions of the students were sound, it was mostly just a lack of experience. And how should they get it if not with projects like these? The design was responsive and basically ok, too, just not implemented well.

However we needed something robust and maintenance-free so I rewrote the thing within 2 weeks. No experiments this time, just a basic get-it-done effort. As with the first events.mi website all data was stored in a directory hierarchy and managed via SSH. Data handling was centralized in two simple functions, one to fetch events, the other one to fetch talks. Both functions had a few options so different parts of the website could easily query what they needed. Feature-wise I added a fast search function, made sure the website could be navigated with the keyboard (tabs) and that special users could announce and edit events directly on the website. The search simply fetches the relevant data of all talks as one large JSON and then searches through that in the browser. We only have hundreds of talks so the transfer and search times were negligible.

Unfortunately I haven’t had time for the final beauty pass of the design to add subtle drop shadows for contrast, tune the layout and margins, etc. So the design is still somewhat unfinished. Quite a while later a student added some pieces here and there, like a notification sound when viewers were in fullscreen mode and someone wrote a chat message. At some point we somehow got the attention of some people in Scotland and a friend requested an English version of the website. All that experience with SimpleLocalization came in quite handy and I got it done within a day. Just German and English and without a language picker though. The language is automatically selected based on the user’s browser preferences.


smeb Matroska HTTP streaming server

Status: maintained Tags: live-streaming, c, http, matroska, poll, network

A small (comparatively speaking) subproject of events.mi. The MP4 container was my choice for previous projects because of its widespread use and support. But for live-streaming that container format has some serious flaws: You need to know the size of all the payload before writing the header (later remedied by introducing fragments). Timestamp handling is also rather more complex and rigid than it needs to be. The “solution” most came up with was to slice the live-stream into small files (with known length) and piece them together again in the browser.

When Google introduced WebM I took a closer look at Matroska (the container format used by WebM). I was quite impressed. In contrast to MP4’s scattered, hard-to-read and paywalled specs the Matroska spec was a single rather short website (for a spec). No unnecessary prose, to-the-point explanations and more importantly they just solved the problem at hand (in a rather elegant way). But one special paragraph caught my eye:

There is only one reserved word for Element Size encoding, which is an Element Size encoded to all 1's. Such a coding indicates that the size of the Element is unknown, which is a special case that we believe will be useful for live streaming purposes. However, avoid using this reserved word unnecessarily, because it makes parsing slower and more difficult to implement.

You see, in MP4 and Matroska the video is stored as a sequence of nested data blocks. Each block follows this simple layout: type, size and content. For a normal video this looks somewhat like this: Header( VideoTrackInfo AudioTrackInfo ) Data( VideoData AudioData VideoData AudioData … ). When you transmit an MP4 video via HTTP you have to know the size of every block. Plan to stream a 4 GiByte video? You have to know that when you send the start of the Data block to the client. That’s the reason why everyone sliced MP4s down into smaller parts. So you know the size of those Data blocks when sending the part to the client. There are workarounds for this but they’re neither simple nor robust. A lot of things can break.

It took me a while to realize what the paragraph in the Matroska spec really meant: You can set the size of those Data blocks to “unknown”. Then clients simply pick up video and audio data for as long as the connection stays open. After sending the header and start of the Data block you can just stream the video data through the HTTP channel (basically just TCP at that point). This simplified matters greatly. Way less moving parts, simple data streams and much less to maintain.
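A small sketch of the trick, written from memory (double check the element IDs against the spec): write the Matroska Segment element with the reserved “unknown” size, then keep appending data for as long as the connection is open. A real stream of course starts with the EBML header and also needs SegmentInfo and Tracks before the first Cluster.

    #include <stdio.h>
    #include <stdint.h>

    // EBML element IDs are written to the stream exactly as specified, big-endian.
    static void write_id(FILE* f, uint32_t id, int bytes) {
        for (int i = bytes - 1; i >= 0; i--)
            fputc((id >> (i * 8)) & 0xFF, f);
    }

    int main(void) {
        FILE* f = fopen("stream.mkv", "wb");
        if (!f)
            return 1;

        // Segment element (ID 0x18538067) with the reserved "unknown size" value:
        // an 8 byte EBML size field with all value bits set to 1.
        write_id(f, 0x18538067, 4);
        const uint8_t unknown_size[8] = { 0x01, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
        fwrite(unknown_size, 1, sizeof(unknown_size), f);

        // From here on a muxer would append Cluster elements with the actual video and
        // audio data, and a client simply reads them until the connection closes.

        fclose(f);
        return 0;
    }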

I did some preliminary testing and was equally surprised. I expected that no video player actually supported this but quite the contrary was true. VLC had no trouble, ffmpeg worked just fine and browsers with WebM support also worked (they pretty much use ffmpeg). All this didn’t require any complex logic on the client side: There it’s just a simple video that is progressively downloaded. Only by “pure coincidence” the data arrives just when it should be displayed. This gives users a nice fallback plan when a browser can’t display the video: They can just use a media player like VLC to open the video URL. That’s especially handy on mobile devices where browser support is buggy and unreliable.

With all that I had the last missing technology I needed for a simple maintenance-free live-streaming infrastructure. One where you could just grab the data from several cameras and mics, composite and mix it, encode it and send it to the clients via HTTP. All in one continuous pipeline. smeb’s purpose was the last part: taking Matroska data streams and broadcasting them via HTTP to any connected clients. It did a bit more later on and patched the timestamps in the Matroska stream when necessary. If a producer died, another producer could continue to send data to the connected clients. And they wouldn’t disconnect because the timestamps continued just as expected (while the new producer restarted them at 0).

Apart from the primary purpose I also used the project for a little experiment. It was just a prototype so I gave goto a test drive. In a previous project I already used it for small state machines with about 50 lines of code. In this project I wanted to implement the entire client handling as one large goto based state machine. I wanted to know if it made the code simpler or more complex and how readable it would be. Performance wasn’t a concern.

The result surprised me. The code itself wasn’t all that bad but buffer and variable management was. Also goto was the wrong tool for the job since I had to store the state identifier (instruction pointer) for each client. I was forced to use a computed goto, and a big switch statement would have been better for that (just like in a bytecode interpreter). Even after several weeks it was surprisingly easy to read the goto code. But as with every state machine you have to properly document the purpose of all states. The code itself only tells half the story.

Judging by that case goto receives a lot more hate than it deserves. Especially when you still use normal for and while loops instead of goto. But given the same situation I wouldn’t use it again. First because it was the wrong tool. Second because it’s hard to see what state uses which variable. Next time I would probably try a large switch statement with an explicit struct per state (and maybe a union of all the state structs).
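To make the alternative concrete, a minimal sketch (my illustration, not smeb's code): every client carries its current state plus a struct with the variables that state needs, and the handler is simply re-entered whenever poll() reports activity on the client's descriptor.

    #include <stddef.h>

    typedef enum { READING_REQUEST, SENDING_HEADER, STREAMING } client_state_t;

    typedef struct {
        int fd;
        client_state_t state;
        union {                                         // one struct per state
            struct { size_t bytes_received; }  reading_request;
            struct { size_t bytes_sent; }      sending_header;
            struct { unsigned long clusters; } streaming;
        } as;
    } client_t;

    // Called whenever the client's fd becomes readable or writable.
    void handle_client(client_t* client) {
        switch (client->state) {
            case READING_REQUEST:
                // ... read until the HTTP request is complete, then:
                client->state = SENDING_HEADER;
                client->as.sending_header.bytes_sent = 0;
                break;
            case SENDING_HEADER:
                // ... write the response header, then:
                client->state = STREAMING;
                client->as.streaming.clusters = 0;
                break;
            case STREAMING:
                // ... forward buffered Matroska clusters to this client.
                break;
        }
    }

    int main() {
        client_t client = { .fd = -1, .state = READING_REQUEST };
        handle_client(&client);   // -> SENDING_HEADER
        handle_client(&client);   // -> STREAMING
        return 0;
    }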

HDswitch realtime video compositing and audio mixing program

Status: abandoned Tags: live-streaming, matroska, c, webcam, opengl

HDswitch was a subproject of events.mi with the goal to replace DVswitch. DVswitch takes multiple DV streams (SD video) and combines them into one single stream. Either by switching between inputs or by something like picture-in-picture. But it can only operate on SD video, and HDswitch was supposed to do the same for 720p and optionally for 1080p streams.

Additionally HDswitch had to embed markers directly into the output Matroska stream. We just needed basic markers like start of talk, end of talk and title and speaker of the talk. That information would be embedded into the Matroska stream as a simple text track. And since those packets also get timestamps we know exactly when a marker was set. Tools like ffmpeg treat the extra track as a binary track and simply pass the data along. So the markers stay in sync with the video and audio packets in the data stream.

Properly marking the start and end of talks outside of the composing program is quite difficult. You often have an encoder somewhere in the pipeline that might delay the data for several seconds. When you try to do it via the website you also have to deal with the somewhat unpredictable latency of the browser’s video player. And those can get up to 20 seconds. And we didn’t have a reliable way to measure that.

But when your markers are simply part of the data stream all those problems simply vanish. Everything stays in sync with no effort. Additionally the server can watch the data stream for those markers and for example start to dump the data stream into a new file when a talk starts (you would have to take care of key frames but thanks to Matroska it’s not that difficult).

Anyway, that was the plan. I built the 720p and 1080p composing part directly in OpenGL. The colorspace conversion was done in the shaders. The parts eating the most performance were actually the audio mixing (thanks to PulseAudio) and simply writing the uncompressed video stream into a UNIX domain socket. ffmpeg would pick it up from there, compress it and send it to the server via HTTP. This worked pretty well for 720p, but with 1080p the event loop spent too much time copying data and started to miss frames (audio was always handled first, had a high priority so to speak). For 1080p it would have made more sense to use libx264 directly in the compositing program and then send the encoded data directly via HTTP.

Only a simple GUI was still missing. The most complex part would have been the text input for the talk and speaker name. But I scrapped the project in favor of an OBS based solution. It needed new hardware for 1080p to work and it couldn’t embed markers into the data stream. In retrospect one of my most stupid decisions. In the end I spent a lot of time basically rebuilding almost the entire system around the shortcomings of OBS. I even had to patch the RTMP implementation of ffmpeg to accept the OBS output. The resulting system was rather rigid and couldn’t be improved easily, if at all.

iwatch small command line utility

Status: maintained Tags: c

A small utility created in just a few minutes. It’s a small C program that executes a command when a file changes. It uses the inotify Linux API to do so, hence the name “iwatch”. I use it pretty often, depending on the project sometimes daily. Sometimes to update a browser tab when I save an HTML file, or to generate a PDF when saving a Markdown file, stuff like that. Just like netcat it opened up a whole new area of what I can do with a command (reacting to file changes). For many years now it has been one of my trusted tools.
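The idea fits into a few lines. A minimal sketch of the approach (not the original iwatch source): watch one file with inotify and run a command every time it changes.

    #include <sys/inotify.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        if (argc < 3) {
            fprintf(stderr, "usage: %s file command\n", argv[0]);
            return 1;
        }

        int inotify_fd = inotify_init();
        inotify_add_watch(inotify_fd, argv[1], IN_CLOSE_WRITE | IN_MODIFY);

        char buffer[4096];
        while (read(inotify_fd, buffer, sizeof(buffer)) > 0) {
            // Something changed the file, run the command (e.g. regenerate a PDF).
            system(argv[2]);
        }

        close(inotify_fd);
        return 0;
    }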

On centralized and decentralized distributed systems paper

Status: finished Tags: paper, scalability, network

A short, biased and thought-provoking 6 page journey through the history of several globally distributed systems. I always wondered how older globally distributed systems like DNS, mail and newsgroups could appear to be so much simpler than modern ones like Facebook. Thanks to Professor Kriha’s “Ultra Large Scale Systems” lecture I was able to spend a few days on the topic.

Recent distributed systems needed a great amount of research to scale. Older distributed systems achieved global scale seemingly without this much effort. This paper takes a look at the history of several globally distributed systems (mail, DNS, HTTP, Google, Facebook and Twitter) and the differences in their initial designs and how these relate to scalability.

It is found that distribution of operations and authority provides a horizontal scalability layer. But probably at the cost of difficult coordination and slower development.

If that sounds interesting to you feel free to read on.

2nd events.mi live-streaming system and website for automatic publishing

Status: finished Tags: organizing, live-streaming, ffmpeg, vpx, matroska, shellscripting, http, html, css, javascript

By mid-2013 students at our department pretty much expected that events would be live-streamed via events.mi. Whenever there was an event it was simply expected that there would be a live-stream, a chat you could ask questions in and recordings the day after it.

While properly encoded SD quality was enough, we came across several notebooks that no longer had a VGA output. Additionally the HTML5 video “kindergarten” (sorry, I meant “debate”) was going nowhere and Ogg was somewhat abandoned. Instead Google introduced WebM with some promising aspects. Additionally the frequencies used by our sound system were no longer reserved for that usage and we got increasing interference from mobile devices. So we set out to build a new live-streaming infrastructure that better matched the new technological circumstances.

This time it wasn’t a one-man project but a larger effort: We needed new hardware, wanted a new website and had to rebuild the entire live-streaming for new formats. This resulted in several different projects:

I was the architect, coordinated the projects and did 4 of them myself. For the most critical of the above points we had two projects each. One done by a student, one written by me. So when either failed we could hopefully use the other. 8 projects in total.

We asked a company to help us with selecting the hardware because we could easily buy the hardware from them (not easy at a university). Big mistake. The company didn’t understand what we needed and permanently tried to sell us hardware that didn’t make sense for our system. No, we didn’t want to build a new TV studio, thank you. But in the next meeting they had already forgotten that. In the end we did get something we could repurpose for our system but it was a far cry from what we needed. On the hardware side HDMI was another problem. It kind of works for connecting monitors but once you have longer cables or try to capture the stream… more often than not it just breaks (I could write an entire article about what doesn’t work with HDMI).

I wrote a video composing program and one of my friends tried to get something similar done with OBS. This revolved around two constraints:

I also wrote a small HTTP streaming server for Matroska videos in plain C. Another student did the same in node.js. In a previous project I had spent quite a lot of time working with the MP4 container (almost implemented my own muxer) and I knew you couldn’t do simple live-streaming with it (no fragments back then). But Matroska allowed for a continuous stream of data without knowing the length of it in the header. And with that you could just pipe video data to any browser that supported WebM. No plugin or even JavaScript needed. Later in the project I even added some code to patch timestamps of the transmitted clusters so a producer could die and later resume without losing viewers. Otherwise browsers automatically disconnect when the video timestamps begin again at 0. That simple just-for-fun feature unintentionally saved the entire project later on…

The website was a rather interesting experience. 3 students built it but the result was… frankly not usable. They learned a lot but I had to rebuild it from scratch in 2 weeks. The website didn’t fit the default pattern: We had no database and I didn’t allow them to use huge frameworks for the server code or the design. All the data was stored in a simple directory structure on the server and meta-data was kept in text files next to the video files. This worked pretty well before. Handling files 50 GiByte or larger in a database is no joke and with a database we would also need an administration interface. That interface alone would increase the complexity of the website many times over. Before we could just use SSH for everything. So we stuck to that, even if they didn’t like it. I think for some of them it was the first time they were directly confronted with HTML, CSS and PHP instead of using them through a framework.

The rest of the projects were mostly shell scripts and one small C program (so everything properly died when one component died). A major difficulty was to tune the VP8 encoder for our purposes and resolutions given the limited time.

All in all I spent about half my time coordinating the projects or sitting in meetings. The other half I spent implementing 4 projects. Plus my “normal” master studies workload. I painfully learned that communication takes a lot of time and is very incomplete when your partner isn’t experienced in the technology. I learned quite a lot about leading projects, conflict management and communication. But given the same resource and time constraints I wouldn’t do it that way again. All the time I spent coordinating I could have spent writing code. And honestly, in the end I had written almost all components myself. The one component I abandoned in favor of another implementation was halfway done with all high-risk aspects finished (which the other implementation couldn’t deliver back then and ultimately wouldn’t in the end).

If that sounds somewhat strange: Be aware that I had to work with students (or teams of students) without knowledge of the concepts or technologies involved and no experience with live-events and the robustness that they require. In contrast by then I had used libx264, libvpx, ffmpeg and OpenGL on several C projects and knew the MP4 and Matroska specs more intimately than I care to admit. The students wanted to learn something. Nothing is wrong with that, that’s what students do at universities. But when you need something that works you have to plan accordingly.

I started that avalanche because I saw a combination of technologies that could lead to a simple, maintenance-free and low-manpower solution. The entire plan was to build a prototype of the system in the first stage. After that, use that knowledge to rewrite or optimize it for robustness, make it maintenance-free and minimize needed manpower. The first part is pretty much all you can hope for in a student project environment because every student has a lot of other stuff to do besides your project. And you can imagine what no prior knowledge and massive multitasking mean for the resulting quality.

To my great indignation the second round of development never happened. One of the reasons was a bad decision on my part. I scrapped my video composing software in favor of the OBS based solution. While OBS is great for let’s-play streaming it simply didn’t work in our case. I was simply worn thin and accepted that we needed 1080p quality at any price (which didn’t work). Or put in a different way: I was too nice. In consequence we had to buy a new notebook (because OBS did stupid things and needed more performance) and I had to rearrange almost all other projects to make that work. Because that took time I scrapped my own composing program. In the end we had a rather rigid system we couldn’t really improve on. Short of doing a major rewrite again.

Another big dilemma was that everyone who wasn’t involved with the live-streaming was rather happy with it. That is, the ones who didn’t have to worry about the manpower needed for each event or the robustness of the system. In the end we didn’t get the time to continue development… and were stuck with the result.

Personally this was quite a stressful time for me. A lot of stuff worked out as planned, and a lot didn’t. One major misstep was enough to send everything off course. We did rather well given all the circumstances but the project taught me not to compromise the technical direction of such a project ever again. Once you start to change your overall design because one component won’t do its job complexity explodes. This causes bugs, robustness suffers and you’ll need more manpower to keep everything running. So don’t do that, especially if you have the alternative to use something else that works. Even if it makes one of your friends unhappy.

3rd HelionWeb server VMware based hypervisor

Status: finished Tags: administration, vmware

The 3rd incarnation of the HelionWeb server. We replaced the hardware with a new 1HE server. By that time my friend took care of pretty much all the customers on it. He was much more experienced with Windows based software so we switched from KVM based virtualization (Linux) to VMware to better fit his experience.

The migration again was an interesting experience and went through without much trouble. We phased out some minor services so we had some cleanup work to do. Funny side note: I was shocked to find out that “professional” server hardware could actually take several minutes to boot…

Linux learn map web-based learning platform

Status: finished Tags: teaching, webdesign, php, ssh, shellscripting

A good friend of mine gave a lecture about Linux basics and server administration and I helped out with that. I supervised exercises and also held the lecture in parallel in a different room when we couldn’t fit all students into one.

To cope with the drastically increased number of students and to give more direct feedback I created a prototype of a learning platform. Each student had his or her own virtual machine they would set up during the lecture (configure SSH, install Apache2, configure it, etc.). The website was a list of exercises students had to complete. At the end of each exercise was a “check” button. When pressed, a shell script was executed on the student’s VM to check the progress. If the check passed the student could proceed with the next exercise.

The project was just a simple collection of PHP scripts. All data about student progress was stored in CSV files so you could easily create statistics and calculate grades later on. Adding new exercises was easy, too. Each exercise was just a text file and a shell script to check if it was done. The webserver had SSH access to all student VMs so it could easily execute the shell scripts there.

After we were happy with the prototype I cleaned up the code. Created some nice classes, encapsulated the code for reusability, the usual stuff. But it was pretty clear that I would only build the initial version and my friend would extend and change the code as needed for the lecture. So I threw all the “nice” code away and reverted back to the somewhat “hackish” code. I cleaned that up by choosing proper variable and function names and by using whitespace to arrange the code into semantic groups. I also added comments that explained why the code did certain things (not “what” but “why”). This left several rather long functions but I decided against splitting them up to keep code parts that were relevant to each other together.

This was one of the very few projects where I could directly observe how someone else read and comprehended my source code. He wasn’t an experienced PHP programmer but understood the basics. He had no trouble understanding and modifying the code. At all. I was somewhat surprised since culturally “easy-to-read code” was not at all what I had produced. Rather it was straightforward code that didn’t try to hide complexity. If something was complicated, so was the code.

I asked him about it later on and he said that he could just read the code line by line. If he didn’t understand a function he would look at the PHP manual. If it became complicated he looked at the comments. Long functions didn’t bother him at all. Almost the contrary: He liked that he rarely had to look at different places to understand the code (in contrast to e.g. MediaWiki).

This really made me rethink how I write code in small projects with few developers. In that context the project was definitely a nail in the coffin for object orientation and stylish APIs.

A capability inspired low level security model based on modern Linux kernels paper

Status: finished Tags: paper, capabilities, linux, c

A short paper exploring how capabilities could be implemented with file descriptors and sandboxing on a Linux kernel. It’s about 3 pages long. Capabilities as in passing them around and not having to manage ACLs, not the POSIX capabilities. It also takes a quick look at the more difficult topics like revoking a capability later on and how that could be achieved. Nothing too deep though.
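The basic building block behind that idea is ordinary fd passing. A small sketch (my illustration, not code from the paper): hand an open file descriptor to another process over a UNIX domain socket with SCM_RIGHTS, so the receiver gets its own descriptor for the same resource without any path or ACL lookup.

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int socket_fd, int fd_to_pass) {
        char dummy = 'x';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        char control[CMSG_SPACE(sizeof(int))];
        memset(control, 0, sizeof(control));

        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = control, .msg_controllen = sizeof(control)
        };

        struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;            // this ancillary message carries fds
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(socket_fd, &msg, 0) == -1 ? -1 : 0;
    }

    int main() {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        // Pass our stdin to the "other side" (here the same process, for brevity).
        return send_fd(sv[0], 0);
    }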

Note that by now a lot of new stuff has been added to the Linux kernel, memfd and dma-buf for example. While the basics of the paper still apply there are probably better workarounds by now.

Touch table blob detection CPU based touch point detection

Status: finished Tags: realtime, video, touch, c

A fun project of mine. My university had a rather old touch table in one laboratory. It was built by a group of students and honestly the hardware was rather bad. Detecting fingers touching the table was quite unreliable (it was based on an infrared camera). The touch point (blob) detection software they used was pretty obviously built for a different quality of input.

As you can imagine: Challenge accepted. I always wanted to build a specific game for touch tables but that required pretty fast gestures. So I set out to build a blob detection that was fast enough (60 fps), would work with the hardware and would only use one CPU thread (to leave the rest for the game).

Throughout several holidays I spent some weeks on the project and got the blob detection done. It was quite an interesting challenge to figure out what kind of information I could use from the video frames. Even getting the camera to work reliably was a challenge. I wrote a blog post about the image processing part.
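A much simplified sketch of the labeling step of such a blob detection (my illustration, not the original code): threshold the grayscale camera frame and label connected bright regions with a small flood fill. The hard part in practice was the noisy input, not the labeling itself.

    #include <stdint.h>
    #include <stdlib.h>

    // Writes a blob label (1, 2, 3, ...) per pixel into "labels", 0 means background.
    void label_blobs(const uint8_t* frame, int width, int height, uint8_t threshold, uint16_t* labels) {
        for (int i = 0; i < width * height; i++)
            labels[i] = 0;

        int* stack = malloc(width * height * sizeof(int));
        uint16_t next_label = 1;

        for (int start = 0; start < width * height; start++) {
            if (frame[start] < threshold || labels[start] != 0)
                continue;

            // New blob found, flood fill all 4-connected bright pixels.
            int top = 0;
            stack[top++] = start;
            labels[start] = next_label;

            while (top > 0) {
                int p = stack[--top];
                int x = p % width, y = p / width;
                int neighbours[4] = { p - 1, p + 1, p - width, p + width };
                int valid[4] = { x > 0, x < width - 1, y > 0, y < height - 1 };
                for (int n = 0; n < 4; n++) {
                    if (valid[n] && frame[neighbours[n]] >= threshold && labels[neighbours[n]] == 0) {
                        labels[neighbours[n]] = next_label;
                        stack[top++] = neighbours[n];
                    }
                }
            }
            next_label++;
        }
        free(stack);
    }

    int main() {
        enum { W = 8, H = 8 };
        uint8_t frame[W * H] = {0};
        frame[3 * W + 3] = frame[3 * W + 4] = 255;   // one tiny two pixel "finger"
        uint16_t labels[W * H];
        label_blobs(frame, W, H, 128, labels);
        return labels[3 * W + 3];   // 1
    }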

Unfortunately the touch table was scrapped before I could really start on the game (just had some basic touch-to-spawn-particles thing). The university needed to free space in the laboratory and I didn’t have space to spare. So that was the unfortunate end of that project.

From a 2018 perspective it would be interesting to use machine-learning approaches to do the blob detection. At least to find a good feature combination and later implement them with proper optimizations. But the real challenge would be to find someone to tag several sequences of 60fps video…

HdM Sammelkrake portal website, information aggregator

Status: finished Tags: webdesign, php, javascript, imap, nntp, xmpp, mediawiki, newsfeed, ldap

While I was studying I had to check several information channels each morning. Was a lecture moved or canceled? Are there interesting events in the next few days? Did a professor or employee announce something important? Over time that got rather annoying and pretty much everyone had the same problem. I decided to create a small website which aggregated the information so you could check everything at one glance. To not make the chaos of information sources worse, users couldn’t store information there. It could only display information from elsewhere.

Of course I also used the project to do some experiments. Back with the first events.mi website I was unhappy with the IMAP and NNTP support of PHP. It wasn’t very reliable, was slow and sometimes added error messages directly to the output (probably from some C code). This time I wanted to look at IMAP and NNTP and directly fetch the information I needed (a simple reader that could mark messages as read). Both are text based protocols and this turned out to be rather easy (ok, the nested data structures of IMAP required some work). In the end I had something very fast which did exactly what I needed. Because that worked so well I also did the same with XMPP to fetch a list of people that were online in the university’s chat service.

Compared with previous projects it was simpler to directly work with the protocols than with the libraries that implement them. You can focus on the parts you need and mostly ignore the rest. Libraries often have a different focus.

The website also grabbed information from various websites and newsfeeds. Newsfeeds were a pretty simple thing: Grab them once a day, extract the information with SimpleXML and prepare it in the format the client needs. That way visitors didn’t generate requests to every newsfeed (which were often quite slow). Grabbing information from websites was equally simple. Thanks to PHP’s very robust DOM parser and XPath you could extract everything you needed in just a few lines.

The most annoying part actually was a MediaWiki plugin that extracted information when certain pages were modified. It then moved the data to the aggregator. This didn’t work all that well. It actually did break once because the MediaWiki APIs changed. The only act of maintenance I had to do on the project over several years. In retrospect I should’ve just grabbed information via the DOM parser and cleaned it up. That would’ve been more robust since the names of HTML tags are less likely to change than the MediaWiki API.

lisp.c Lisp interpreter written in C

Status: finished Tags: lisp, interpreter, C, bytecode

My second Lisp interpreter. Again I wrote it while attending Claus Gittinger’s lecture. It’s an AST and bytecode interpreter with support for closures. Almost all of it is covered by automated tests and it uses the Boehm-Demers-Weiser conservative garbage collector library (haven’t had time to create my own). Writing the bytecode compiler and VM was especially interesting. This time I wrote it in C so I could explore the memory management and all the funny little tricks of interpreter construction. See the GitHub page for its features (like dynamic library and Shebang support).

Using C instead of Ruby simplified some parts of the code (e.g. the scanner). In some parts (e.g. for testing) the interpreter is a bit overengineered. Mainly because I didn’t know the less frequented bits of the C runtime back then (like open_memstream).

Funnily enough handling the different types of Lisp objects was simpler in C than in my first interpreter written in Ruby. In C a simple union did the job and you had to write all the special cases at the location where they mattered. In Ruby I used a class hierarchy and this scattered the semantics throughout the code (way harder to read).
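A small sketch of the union approach (my illustration, not lisp.c's actual definitions): one tag enum plus a union covers all object types, and every special case is handled right where it matters, usually in a switch.

    #include <stdio.h>

    typedef struct atom_s atom_t;

    typedef enum { T_NIL, T_NUM, T_SYM, T_PAIR } atom_type_t;

    struct atom_s {
        atom_type_t type;
        union {
            double num;
            const char* sym;
            struct { atom_t *first, *rest; } pair;
        };
    };

    static void print_atom(const atom_t* atom) {
        switch (atom->type) {
            case T_NIL:  printf("nil");            break;
            case T_NUM:  printf("%g", atom->num);  break;
            case T_SYM:  printf("%s", atom->sym);  break;
            case T_PAIR:
                printf("(");
                print_atom(atom->pair.first);
                printf(" . ");
                print_atom(atom->pair.rest);
                printf(")");
                break;
        }
    }

    int main() {
        atom_t nil  = { .type = T_NIL };
        atom_t one  = { .type = T_NUM, .num = 1 };
        atom_t pair = { .type = T_PAIR, .pair = { &one, &nil } };
        print_atom(&pair);   // prints "(1 . nil)"
        printf("\n");
        return 0;
    }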

Scribbles design prototype

Status: finished Tags: prototype, webdesign, html, css, webfonts, transforms

This prototype design was a test drive for webfonts and CSS transforms. They were fairly new back then and I wanted to see how mature browser support really was. The design side was inspired by old-fashioned pencil and paper notebooks and small pieces of paper with notes on them.

Overall browser support was already pretty good. Although you really saw the differences in font rendering on different operating systems. As with every web project I encountered some interesting bugs. Different interpretations of line-heights, 1px offsets between browsers and the like.

The most interesting one appeared when combining CSS transforms with negative margins (or maybe it’s a spec inaccuracy). When WebKit based browsers render the page, the width of transformed elements with negative margins is minimized as much as possible (including adding line breaks at every opportunity). But when you click on a link and then use the back button the page is rendered correctly. Funnily enough the bug still persists 6 years later.

The plains project web-based note organization tool

Status: prototype, maintained Tags: plains, organizing, webdesign, php, html, css, javascript, jquery, rest, usability

A small pet project to clean up my own mess of notes and ideas. Basically you can put everything on a 2D space and group it into "plains". A "plain" is nothing more than a rectangle in that 2D space that can hold notes, ideas and other plains. If a lot of data is arranged like that, the spatial sense of direction helps to find data very quickly (zooming is important for that). This structure also maps perfectly to files and directories and therefore the stuff is stored in that way.

lisp.rb Lisp interpreter written in Ruby

Status: finished Tags: lisp, interpreter, ruby, D

The first Lisp interpreter I wrote. At first I wanted to write it in D but got caught in a bit of over-engineering and switched to Ruby. The project actually contains several interpreters that came to be over the course of the associated lecture:

Those were created as part of the lectures of Claus Gittinger about interpreters for dynamic languages. This was the first time I met someone who really understood a language. Not just how to write compact, testable and maintainable software but also how it all worked. From each character and language construct down to each executed assembler instruction. Not in theory but he actually did all of that himself to build Smalltalk/X. It was a kind of revelation for me to finally have someone you could ask pretty much anything about interpreters and dynamic languages. And he didn’t only know the answer but could also explain it well.

My personal focus was on building a continuation based interpreter. It took some brain rewiring to get it done but such a radical change in perspective was quite an experience. Want to do endless recursion instead of loops? Now you can and it’s just as efficient. Exception handling with call/cc? Can do. Scribbling an entire page full of diagrams and arrows to debug 15 lines of code… yeah, that too. I didn’t put much effort into the interpreter infrastructure needed for debugging and that got back at me in the end. All interpreters use the garbage collector of Ruby. It didn’t make much sense to implement one when Ruby already does all the work anyway.

This was easily one of those projects where your understanding grows by leaps and bounds. Interpreters were no longer a blackbox to me. I started to understand many of the strange aspects of some languages like JavaScript because I had to make the same choices: The easy but awkward way that doesn’t make much sense, or back to the drawing board to spend some days figuring out a consistent solution that makes sense.

Garbage collection was another one of those blackboxes busted open. Within one hour we went from “magic” to “I really want to implement a Baker GC now”. Not simple, but not magic either. I built one on a later project and you have to make some tough architecture choices (especially on how to integrate native code) but you also have a lot of tricks available to help you.

I could continue the list with stuff like JITing (just in time compilation). But instead I’ll just end with that: If you want to understand interpreters visit the lecture and build one yourself. It’s easier than you think.

Spacecraft 3D space shooter

Status: finished Tags: game, programming, D, network, gameplay

I don’t really remember how this project got started. We were lamenting the lack of interesting multiplayer space shooters and somehow ended up creating one. Not that I lacked projects but you don’t get the chance to make a space game together with good friends all that often. We were a team of 3: Michael Zügel (artist), Benjamin Thaut (programmer) and me. Andreas Stiegler served as the university’s project supervisor (and also created the fighter model).

Benjamin and I were both fans of the D language at that time so we wrote the game from scratch in D 2.0. We used libraries like SDL and Assimp but no engine or the like. Benjamin wrote the renderer (OpenGL), 3D audio (OpenAL), resource management, text rendering, a particle system and pretty much all of the math code (collisions, etc.). I took care of networking (TCP client and server), HUD, gameplay programming, gamedesign and leveldesign. Michael created pretty much all assets, from space ships and stations to asteroids and skyboxes. He also wrote some tools to process assets.

We got the thing done in about 3 months… but I didn’t get much sleep towards the end. The teamwork was pretty interesting and I learned that there should be only one architect in the team. At least if you want consistent code. It took a while for me to let go but after that it worked pretty well. But I have to admit that the code got pretty ugly towards the end (the deadline was looming).

In the last weeks we also became aware of a rather unsettling bug: After 10 to 15 minutes the game would crash on Windows. Everything was fine on Linux. Turned out that there was a race-condition in the 32-bit Windows subsystem that caused the D garbage collector to crash (it suspended a thread but Windows screwed up). Usually this didn’t seem to be a problem since the garbage collector would only run occasionally. But in our game we had to call it in every frame (that cost us 10ms…), otherwise the game would stop for a second every few minutes. The D compiler could only generate 32-bit binaries back then so we were stuck. It wasn’t much of a problem during development or at demonstrations but it was a bummer while playing with friends. A nice reminder that a bug in the foundations you build upon can undo your whole project.

In retrospect I’m still pretty impressed by what we got done. I learned a lot about teamwork, realtime networking, gameplay coding, Lua and many other things. Each of the things we did seems rather trivial in isolation (properly importing models, handling accurate positions, reducing latency, avoiding gimbal-lock in the controls, etc.). But the massive number of those details that made up the game with all its assets was staggering.

NNTP-Forum web-based newsgroup frontend

Status: maintained Tags: nntp, php, programming, javascript, jquery, html, simplicity, atom, university

The NNTP-Forum is a small and lightweight frontend to an NNTP newsgroup server. If you already have a newsgroup server running you can use it as a web interface.

The development started with a small 2-day read-only prototype. After some positive feedback I wrote a new frontend with all the usual stuff (reading and posting of messages, attachments, …). As with most of my web projects I used a very simplified and condensed version of the model-view-controller pattern to structure the code. Thanks to many well written RFCs it was easy to use the NNTP protocol directly. Same goes for the parser of the mails, although there I built a state machine based parser: it only works on individual lines and never needs to load the entire mail into memory, which is important to efficiently handle messages with large attachments.
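
As an illustration of that line based approach (the forum itself is PHP; this is only a JavaScript sketch and ignores details like folded header lines): the parser is fed one line at a time and keeps almost no state, so a message with a huge attachment never has to be held in memory as one big string.

    function createMessageParser(onHeader, onBodyLine) {
      let state = "headers";
      return function feedLine(line) {
        if (state === "headers") {
          if (line === "") { state = "body"; return; }   // blank line ends the headers
          const colon = line.indexOf(":");
          onHeader(line.slice(0, colon).toLowerCase(), line.slice(colon + 1).trim());
        } else {
          onBodyLine(line);   // e.g. feed a base64 decoder or append to a temp file
        }
      };
    }

    // usage: pipe the lines coming from the NNTP socket into the parser
    const feed = createMessageParser(
      (name, value) => console.log("header:", name, "=", value),
      line => console.log("body:", line));
    ["Subject: Hello", "From: someone@example.com", "", "First line of the body"].forEach(feed);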

Shinpuru testing library

Status: maintained Tags: php, testing

Born out of laziness and the fun of playing with PHP's anonymous functions, Shinpuru developed into a full testing framework within just 3 weeks. It was an interesting challenge to see what can be accomplished with just PHP's own built-in functions because I didn't want any external dependencies. Despite that limitation Shinpuru contains pretty much everything testing frameworks from the Ruby world (e.g. Shoulda) usually do.
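
Just to illustrate the principle (Shinpuru itself is PHP; this is only a JavaScript sketch): the whole trick is that test cases are nothing but anonymous functions collected into a list and run later, so a Shoulda-like DSL needs surprisingly little machinery.

    const suite = [];
    function should(description, testFn) { suite.push({ description, testFn }); }
    function assertEqual(actual, expected) {
      if (actual !== expected) throw new Error(`expected ${expected} but got ${actual}`);
    }

    should("add two numbers", () => assertEqual(1 + 2, 3));
    should("concatenate strings", () => assertEqual("a" + "b", "ab"));

    // the "runner" is just a loop over the collected closures
    for (const { description, testFn } of suite) {
      try { testFn(); console.log("ok   -", description); }
      catch (error) { console.log("FAIL -", description, "-", error.message); }
    }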

The Shinpuru project was also the testing ground for a new approach to documentation: all documentation data is contained within the source file and extracted when needed by the website. All examples are actually part of the test suite that covers Shinpuru itself. This gave birth to a sister project to extract patterns out of PHP code. Because of this and the unusually large documentation it took about 7 weeks just to write the documentation and examples as well as building the documentation system.

HTML obfuscator web-based utility

Status: finished Tags: html, unicode, spam, javascript, jquery

A little JavaScript tool done in just a few hours (styling takes time…), primarily because I wanted to escape my own mail address but didn't find a nice tool for it. Libraries like Markdown give you the exact same (if not better) output if you write down your mail address, but I don't want to fire up a Markdown processor for just one little mail address. Plus, you can encode arbitrary text with this tool, e.g. your Jabber URI or Skype name.
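
The whole trick is tiny; roughly like this sketch (not the tool's actual code): every character is written out as a numeric HTML character reference, which browsers decode transparently but naive address harvesters usually don't.

    function obfuscate(text) {
      return Array.from(text)
        .map(ch => "&#" + ch.codePointAt(0) + ";")
        .join("");
    }

    console.log(obfuscate("mail@example.com"));
    // &#109;&#97;&#105;&#108;&#64;&#101;... – usable as link text or inside a mailto: href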

Simple Chat maintenance-free chat without a database

Status: finished Tags: php, javascript, jquery, simplicity

The very simplest form of a chat one can get. This is the refined version of the chat used in the GamesDay projects, but SQLite got kicked out in favor of a simple JSON text file. This chat needs no maintenance, hardly any resources and it's all in about 50 lines of code.

A simple design kept simple, flexible and extensible.
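
A sketch of the idea in Node (the real chat is PHP and the file name here is made up): the whole “database” is one JSON file, new messages get appended, only the last 100 are kept and clients simply poll the file over HTTP.

    const fs = require("fs");
    const FILE = "chat.json";

    function postMessage(name, text) {
      const messages = fs.existsSync(FILE)
        ? JSON.parse(fs.readFileSync(FILE, "utf8"))
        : [];
      messages.push({ time: Date.now(), name, text });
      // keeping only the last 100 messages means the file never grows and
      // there is nothing to clean up or maintain
      fs.writeFileSync(FILE, JSON.stringify(messages.slice(-100)));
    }

    postMessage("alice", "hello");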

Arkanis Development v3 personal homepage

Status: finished Tags: arkanis-development, webdesign, simplicity, php, html, html5, css, css3

The third incarnation of my website. The main feature of this project is its simplicity: a few hundred lines of very simple PHP code, no external dependencies, no database, no fancy JavaScript… every bit of added complexity in this project carries more than its own weight. This takes maintenance down to almost zero. In a time where everyone seems to add complexity without thinking properly about it I wanted to know what can be done with only basic means.

This project also builds upon the new HTML5 semantic tags and uses CSS3 styles for almost everything in its design. Box shadows, rounded corners, transparency, HSL colors, table positioning, etc. This page shows to a good degree what's possible if you ditch the old browsers and use the new stuff. It also makes webdesign simple again. No program like Photoshop or Inkscape was used to craft the design, it was created directly as HTML and CSS code. Only GIMP was used to resize some images. :)

But this isn't where simplicity stops: there is no more user management and no admin or authoring area. The website is a simple frontend to the data stored in some equally simple text files (like this project description). New posts can be written directly with my favorite text editor on the server using SSH.
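
Roughly what that amounts to (the site itself is PHP; the file layout here is invented for the sketch): the list of posts is just a sorted directory listing and each post is one text file.

    const fs = require("fs");

    // e.g. posts/2011-05-17-some-title.txt, newest first
    function loadPosts(dir) {
      return fs.readdirSync(dir).sort().reverse().map(name => ({
        slug: name.replace(/\.txt$/, ""),
        content: fs.readFileSync(dir + "/" + name, "utf8"),
      }));
    }

    for (const post of loadPosts("posts"))
      console.log(post.slug);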

1st events.mi live-streaming system and website for automatic publishing and cutting

Status: finished Tags: live-streaming, ffmpeg, shellscripting, network, netcat, webdesign, html, css, javascript, newsfeed, nntp, atom

This turned out to be the start of another one of my long-time meta-projects. Students and employees of the university organized various events like the GamesDay, LinuxDay and WebDay. Those were very small one-afternoon conferences with 3 to 5 talks and we live-streamed the whole thing. At first a few friends and I organized a GamesDay and after that I started to regularly help out with the live-streaming of the talks.

The streaming system was WMV (Windows Media Video) based and had a few problems: The quality was rather bad, captured slides were often unreadable and cutting almost never worked properly. Also most viewers in companies were behind restrictive firewalls and couldn’t watch. We usually didn’t have a live-chat so it was difficult to get feedback from users during an event.

After digging around for a while it became apparent that some of the problems were caused by bad configuration (e.g. bad resizing or wrong hardware settings) but others were pure software problems. Even the WMV SDK back then was buggy. The timestamps created by the Windows Media Streaming server were pretty messed up and caused bad audio-video drift in later processing. Well, challenge accepted, and I set out to take the hardware and rebuild the software from scratch based on what was used at the DebianConf.

Quite a lot of time went into reading manuals and configuring the hardware properly. On the software side I ended up using DVswitch to combine multiple sources (camera and grabbed notebook) into one stream. From there it went to a server that encoded the DV stream as Ogg Theora and Vorbis. Finally that stream went to an IceCast 2 streaming server that distributed the stream to any connected viewers. The server also recorded a dump of the DV and Ogg streams for automatic publishing later on.

Coupled with that live-streaming system was a website via which users could view the live-stream (you could also use many video players to watch it). Older recordings were also published via the same website. It also gathered news from various newsfeeds, local text files and a newsgroup. The idea was to give users an overview of upcoming events that way.

With that project I also took a somewhat unusual approach to cutting the raw stream. The server automatically published the entire uncut recording and everyone with a university account could mark timespans with a title, speakers, description and attachments. Those were then shown as extra videos. This allowed students to mark their own talks and was meant to relieve us of cutting all the talks. In the beginning this worked quite well but later on only a few students actually took the time to do so. Anyway, it was still way more efficient than what we had before (cutting WMV videos with broken tools).

The live-streaming part of the website was kind of simple. Thanks to Ogg Theora and IceCast we could just use an HTML5 <video> element to display the live-stream. As soon as the next keyframe came around the corner the viewer would see the live-stream. A small SQLite based live-chat was also part of that page so we could chat directly with our viewers. Latency varied (usually around 1 to 10 seconds), especially since some browser support was still buggy. But back then viewers started to ask questions via the live-chat and we could simply put them to the speaker in their stead. I later replaced that live-chat with a simple text and JSON based one to avoid the SQLite bottleneck.

As you might have guessed by now the project was rather huge and I dedicated almost 4 months to it (not counting maintenance, regular debugging and later improvements). Properly configuring the hardware, building the live-streaming server, the website, not to mention a lot of glue code to make it all work nicely together. We barely had manpower for the regular live-streams so either it was fully automatic or it simply wouldn’t happen in the long run.

Those were the beginning days of HTML5 video and back then there was a fierce discussion about MP4 + H.264 + AAC vs. Ogg + Theora + Vorbis. The former showed better overall quality (at least in the H.264 main profile) while the latter could be included in every browser without paying license fees. By then I had done a lot of testing and the website also encoded an MP4 file Internet Explorer, Safari and even Windows Media Player and QuickTime were happy with (of course everyone had their own ideas about which information an MP4 should contain). But in the end the decision was made easy for us: the Ogg container was built for live-streaming, MP4 was not. Streaming with Ogg worked out of the box and was simple while MP4 required an insane amount of complexity to make it work (including Flash). Also with Ogg viewers behind restrictive corporate firewalls could watch without trouble. The only drawback of Ogg was that the Theora encoder was clearly inferior to x264 and tuning it wasn’t quite as simple.

Another big aspect of the project was error detection and recovery. Working under live conditions can be quite stressful and complex debugging can be almost impossible. You can’t just tell 50 people that you need 15 minutes to figure out why FireWire just broke or why a server decided to throw a tantrum. The show must go on, especially when you’re on a schedule. With time and some pretty painful experience I learned to avoid complexity simply because it leads to complex errors. Better to have a system that breaks down with a clear error the moment something goes wrong so you can react instantly. A few seconds of lost video are unfortunate but no big deal. Realizing that something went wrong and you just lost the recording of the last few hours is.

Sounds simple but wasn’t. I mostly used command line tools and shell scripting to build the live-streaming pipeline. And most of those tools had different approaches to error handling (or none at all). The source code of the libraries contained all the necessary error codes but many command line tools simply threw them away. Just to get the system to properly die once one component in the pipeline failed required some very careful engineering. I replaced a complex data transfer between the streaming notebook and the server with plain TCP (netcat) simply because it was really good at dying when the network went down.
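
The principle is easy to sketch, even if the real thing was a pile of shell scripts around DVswitch and the encoder rather than anything like the following Node code: connect the stages, and the moment any one of them fails, tear the whole pipeline down loudly instead of limping along.

    const { spawn } = require("child_process");

    function runPipeline(stages) {
      const procs = stages.map(args => spawn(args[0], args.slice(1),
                                             { stdio: ["pipe", "pipe", "inherit"] }));
      for (let i = 0; i + 1 < procs.length; i++)
        procs[i].stdout.pipe(procs[i + 1].stdin);
      for (const proc of procs) {
        proc.on("exit", code => {
          if (code === 0) return;                  // a clean end of stream is fine
          console.error(proc.spawnfile, "exited with", code, "- killing the whole pipeline");
          procs.forEach(other => other.kill());    // better a loud, immediate stop ...
          process.exitCode = 1;                    // ... than hours of silently lost video
        });
      }
    }

    // placeholder stages, the real commands were different
    runPipeline([["cat", "recording.dv"], ["gzip", "-c"]]);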

Anyway, the system was in constant use for many years until we developed a newer one. During that time I constantly improved and tuned it. By the standards of the time we were pretty good, given that most large conferences had way more trouble with their systems. They also needed way more manpower than one student with a 15 minute introduction on how to handle the camera and when to replace the batteries of the microphones.

Pictures

Asteroids small cross-platform engine and game written in D 1

Status: finished Tags: game, programming, D

Another small university project. The result wasn’t particularly mind-blowing (it’s Asteroids, what do you expect?) but it introduced me to the real-time aspect of programming. You just have 16 ms to do your thing each frame, so you have to think about the algorithms and data structures you’re using.

Of course it was hopelessly over-engineered: the engine was based around a tree of game objects. You could disable parts of the tree and manipulate it in the background (e.g. to load a new level on the fly). It was somewhat inspired by HTML and I used it to manage various menus and levels and to switch between them. Collision detection and resolution was a pain (it always seems to be) and I took a shortcut to avoid implementing full text rendering. Building a particle system however was quite fun and it made a huge (and unexpected) impact on immersion. I only had about two weeks to build the game so I had to restrict myself to what I really needed.
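
The tree itself was nothing fancy. A sketch of the idea (the game was written in D; this is just illustrative JavaScript): every node can be switched off, so menus and levels are simply subtrees that get enabled, disabled or rebuilt in the background.

    class GameObject {
      constructor(name) { this.name = name; this.enabled = true; this.children = []; }
      add(child) { this.children.push(child); return child; }
      update(dt) {
        if (!this.enabled) return;          // a disabled subtree is skipped entirely
        this.onUpdate(dt);
        for (const child of this.children) child.update(dt);
      }
      onUpdate(dt) {}                       // overridden by ships, menus, particles, ...
    }

    const root  = new GameObject("root");
    const menu  = root.add(new GameObject("main-menu"));
    const level = root.add(new GameObject("level-1"));
    level.enabled = false;                  // prepare it in the background, then flip the flags
    root.update(1 / 60);                    // called once per 16 ms frame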

As always I used the project to conduct a few experiments. First I wanted to build a rather abstract game to see how much players would actually use their imagination to fill the game world with details. Animating an epic space battle takes an insane amount of time and effort, so why not use the players’ imagination to make it epic? That’s why I chose Asteroids and it worked quite well. People knew what to expect and set themselves up for the mood. The game had no sound (no time for it) but some players soon started to make their own sounds.

The second experiment was about menus: They were represented through in-game objects. In the main menu you could fly to various landing pads, e.g. to start a new game or exit it. This forced most players to learn the basic controls before they could start a game. But showing the help about controls in the lower right corner was a bad idea since most players didn’t see it there (they saw the ship first). For whatever reason Alt + F4 didn’t work on Windows and this annoyed some players. Especially since that combination is usually used when they're already annoyed by the game. All in all it slightly mitigated the break of immersion that menus usually entail and helped teach players the basic controls. But I’m not sure it’s worth the effort or just something “different”.

The third “experiment” (if you can call it that) was to see how much fun Newtonian physics is in a top-down game. The player ship was simulated with a certain mass and had thrusters of varying strength to translate and rotate itself. Deciding which combination of thrusters to fire based on mouse and keyboard input became rather complex and I’m not sure it was worth the effort. The controls were pretty intuitive and players didn’t notice how they moved. So a lot of time went into something players didn’t even perceive. Another problem were projectile velocities: slow projectiles create dynamics (e.g. curved arcs) that are unexpected for most players, yet faster projectiles make the game quite boring. In the end I think that combination didn’t work out for a straight top-down shooter, at least not as the one-and-only core mechanic. Adding terrain, obstacles or some evading AI would probably solve that. Then you could make the projectiles quite fast without making it uninteresting since the focus no longer just lies on lining up the shot.
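
The simulation itself is only a few lines per frame; the real effort went into deciding which thrusters to fire for a given input. A sketch of the integration step (JavaScript rather than the original D, with made-up field names):

    function stepShip(ship, thrust, torque, dt) {
      // linear part: F = m * a, integrate acceleration -> velocity -> position
      ship.vx += (thrust.x / ship.mass) * dt;
      ship.vy += (thrust.y / ship.mass) * dt;
      ship.x  += ship.vx * dt;
      ship.y  += ship.vy * dt;
      // angular part: same idea with torque and the moment of inertia
      ship.angularVelocity += (torque / ship.inertia) * dt;
      ship.rotation        += ship.angularVelocity * dt;
      // without any drag the ship keeps drifting, which is exactly what
      // surprised players used to non-Newtonian controls
    }

    const ship = { x: 0, y: 0, vx: 0, vy: 0, rotation: 0, angularVelocity: 0, mass: 10, inertia: 5 };
    stepShip(ship, { x: 20, y: 0 }, 0.5, 1 / 60);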

The game itself had pretty much no replay value and was rather shallow. But it wasn’t designed to be a great game, it was an experiment. And I learned a lot about the topics I was interested in. Later on I added a demo mode to record and play back user input to show the game at one of the university’s GamesDays. There the liberal use of random numbers in the game code came back to bite me. Some ugly hacks were needed to make it “mostly deterministic”. But in the end it was good enough.

7th GamesDay website and event organization

Status: finished Tags: organizing, games, university, webdesign, html, css, javascript, php, sqlite, atom, news-aggregator

Like the 6th GamesDay this project was also about organizing speeches and talks. This time however we had two months to pull the thing off. That time frame really made the job easier but also had some negative effects on the motivation: everything took so long… e.g. we had good chances to get Crytek for a talk but after one month of back and forth we finally dropped that.

Apart from all the organizational stuff I also created a website, again based on the poster designed by Darius Morawiec. I reused most of the previous GamesDay website but added a small Atom newsfeed reader that fetched the news for the website from a Redmine project we used to coordinate the GamesDay.

6th GamesDay website and event organization

Status: finished Tags: organizing, games, university, webdesign, html, css, javascript, php, sqlite

The GamesDay is an event at the HdM Stuttgart where some people talk about different aspects of the games industry. It took Darius Morawiec, Martin Schmidt, Matthias Feurer and me two weeks to organize the 6th of these events from scratch. Two weeks with absolutely no spare time but it was an interesting experience.

Besides organizing the event (I have over 200 mails in the archive…) Darius designed the very beautiful poster and flyer I later adapted for the design of the website of the 6th GamesDay. It was just a simple website, little more than some static pages wrapped into a common layout using PHP. However later on a simple PHP and SQLite based live-chat (about 20 lines of PHP and 50 lines of JS) as well as an automatic picture gallery were added to the page. The event was recorded and live-streamed and that’s what the live-chat was made for: it enabled the online viewers to ask questions which were forwarded to the speaker.

This "one page, no maintenance" live-chat proved very robust and usable and I reused this component in some projects afterwards.

ImmoHessen real estate website

Status: finished Tags: rails, sql, webdesign, html, css, javascript

The parents of a friend run a real estate business. The website was ready for a make-over and so we got together to build a new version of it. The design was created by a designer and I got to work to implement it (it had to work with IE6). The website itself was based on Ruby on Rails. In retrospect not the wisest decision because it didn’t quite fit and it made continuously importing data and maintenance somewhat annoying. Most of the website was not much more than a pretty user interface for search queries. All changes were imported daily from another backend. After trying several approaches I created a restricted REST interface for the import but you can imagine that this was neither fast nor resource efficient.

The project itself was quite interesting and the teamwork was fun and productive. IE6 made the design an interesting challenge. And we explored parts of Rails’ database code to make it work and get the performance good enough. My friend has maintained the website and server since then and sometimes I help out to handle a few incidents.

Like some other projects before, this one again made me think about frameworks. Are they really worth the trouble? If you look at it as a whole the website was actually a rather simple thing but we made it quite complicated by thinking the Rails way. In the end this was my last major project with Ruby on Rails. By then maintenance of several Rails applications took a noticeable chunk of my scarce time. Over the next few years I replaced them with way simpler and mostly maintenance-free code (mostly PHP).

Pictures

2nd HelionWeb Server web hosting service and Linux KVM based hypervisor

Status: finished Tags: kvm, virtualization, shellscripting, apache, sql, php, administration

The second incarnation of the HelionWeb server. We decided to replace the server hardware roughly every 3 years and so we built a new server. But on the software side there was a greater change: virtual machines became something that actually worked and was fast enough. So the server became a KVM hypervisor and the web hosting service just became a virtual machine. We created virtual machines for each of us (and later for another customer). That way we didn’t have to worry about breaking the web hosting platform when someone changed the Apache2 configuration for a new website.

Again, it took us a few weeks to finish the new server. We no longer had just one server to configure and test but three instead. Also backups became way more complicated because we had to move the data over a private network within the hypervisor. KVM, the Linux kernel support for virtual machines, was pretty new back then and had its fair share of quirks. For example one of the VMs died with a kernel panic every night. So for a while I got up every night to observe the server and VMs to figure out what caused it. Turned out that CPU frequency scaling (also kind of new back then) was the culprit and pinning the frequency solved it. But what a way to spend your nights… monitoring logs in the hope to see a hint that didn’t make it to disk because the VM died before flushing its buffers.

What surprised me the most in retrospect was just how much administration overhead and how many highly complicated bugs VMs can cause. Sure, you get a lot of flexibility from them but it made me wonder if it really is worth the effort or if there isn’t a simpler way to achieve the same.

SIM2 issue tracking system

Status: finished Tags: rails, webdesign, html, css, javascript

Another Ruby on Rails based issue tracking software for a friend of mine. This time for his own newly founded company. At its core a basic issue tracker but with multi-client capability built in (“mandantenfähig”). Different customers had their own little portal into the system from where they could open issues and track their status. We also added an interface and tool to scan a company’s network structure and record which software licenses were used where. That way customers could see when they had to buy more licenses, e.g. for MS Office. We even built a little SMTP interface so you could open issues just by writing a mail. But I’m not sure if that part was ever deployed (mail server integration can get tricky in some companies).

We developed it together and it was quite a productive cooperation. I kind of missed that at my old workplace where it was more like every man for himself.

Table Navigation jQuery plugin

Status: finished Tags: jquery, javascript, html, css

This plugin allowed you to navigate through tables using the arrow keys on the keyboard. This let users work with long tables efficiently: use the arrow keys for up and down, enter to open an entry, backspace to go back to the table. I wrote it for the issue tracking software of a friend but it somehow gained popularity.
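
The original was a jQuery plugin; the core idea boils down to something like this sketch in plain JavaScript: remember the index of the selected row and move it with the arrow keys.

    function makeTableNavigable(table, onOpen) {
      const rows = Array.from(table.querySelectorAll("tbody tr"));
      let selected = 0;
      const highlight = () =>
        rows.forEach((row, i) => row.classList.toggle("selected", i === selected));

      document.addEventListener("keydown", event => {
        if (event.key === "ArrowDown")    selected = Math.min(selected + 1, rows.length - 1);
        else if (event.key === "ArrowUp") selected = Math.max(selected - 1, 0);
        else if (event.key === "Enter")   return onOpen(rows[selected]);
        else return;                      // ignore everything else
        event.preventDefault();           // keep the arrow keys from scrolling the page
        highlight();
      });
      highlight();
    }

    // makeTableNavigable(document.querySelector("#issues"), row => console.log("open", row));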

Thanks to a lot of feedback the plugin saw active development for about a year. It was quite interesting to see what workarounds were needed to make the browser remember the last selected row. In the end I had to use cookies to make it work across browsers. Maintenance continued for a few years but eventually jQuery changed, the plugin broke and I lacked the time to rewrite it.

OnWork issue tracking system

Status: finished Tags: rails, webdesign, html, css, javascript

The first issue tracking software I wrote together with a friend. We wrote it in Ruby on Rails and it was just a basic one without many bells and whistles. Unfortunately I don’t know how well it was received. I just helped to create it and contributed my Rails expertise. My friend took care of deployment, feedback and maintenance.

Arkanis Development v2 personal homepage

Status: finished Tags: arkanis-development, webdesign, rails, html, css, javascript, xfn, ie

This project was everything Arkanis Development v1 should have been. The "basic" blog was extended in almost every way: articles and static pages were added, a newsfeed came and Textile as well as Markdown found their way into posts and comments. To be more honest: it was totally rewritten anyway. In addition the entire application was covered by test cases.

The most visible change however was the design. As with the Rails application the experimental never-finished design was replaced with a proper one. Well, not just one but three in fact. All inspired by some photos found on Flickr. These designs were serious ones: accessibility, clean markup (good for search engines), browser support (Opera, Firefox, IE 7, 6, 5.5 and 5.0) as well as speed were taken into account. The designs changed depending on the time of day and could be switched manually by those who like to play.

Arkanis Development v2 was more of a content management system than a blog. The administration interface featured some nice details like XHTML Friends Network integration and auto-growing text boxes. However I rarely used all of the possibilities I built into this website.

SimpleLocalization and Rails i18n Ruby on Rails plugin, workgroup activity

Status: finished Tags: rails, ruby, webdesign, html, css, organization, administration

SimpleLocalization started out as a collection of utilities I used to build German Ruby on Rails applications. Rails does a lot of stuff automatically and some pieces of it (like error messages) are only generated in English. SimpleLocalization was a plugin that modified those parts to support other languages. It put all the language specific data into language files so you could easily create and maintain them. Since I also created websites that had to offer a German and an English version I added features to switch between languages and to use the language files to store your own translated strings.
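
In spirit the language files boiled down to something like this (a JavaScript sketch; the plugin itself was Ruby and hooked deep into Rails internals): a nested dictionary per language plus a small lookup helper with interpolation.

    const languages = {
      de: { errors: { blank: "%{field} darf nicht leer sein" } },
      en: { errors: { blank: "%{field} can't be blank" } },
    };

    function translate(lang, key, values = {}) {
      const entry = key.split(".").reduce((node, part) => node && node[part], languages[lang]);
      if (!entry) return key;   // fall back to the key instead of crashing
      return entry.replace(/%\{(\w+)\}/g, (match, name) => values[name] ?? match);
    }

    console.log(translate("de", "errors.blank", { field: "Name" }));   // Name darf nicht leer sein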

Rails was pretty new back then and the community was quite small. People seemed to like the plugin and it became somewhat popular. Some contributed language files (even one for “Pirate Swedish”) and others joined development, contributed entire features or improved the existing code base. A small community even grew around the plugin and I hosted a Beast forum and a Collaboa issue tracker and wiki to coordinate our efforts. I spent a lot of time writing documentation, learned a lot about the inner workings of Rails and was quite impressed by what you can do with Ruby. Also this was the first project I wrote pretty much complete test cases for. It was a private project so I didn’t have the same time constraints as at work.

Later on Sven Fuchs contacted me to join the Ruby on Rails internationalization effort. He gathered the authors of almost all Rails i18n plugins and we set out to create a common i18n API we could contribute to Ruby on Rails. All i18n plugins could then use that API to hook into Rails. This way we could collect all the patchwork into one place and better yet could maintain it with Rails itself. We were about 6 people (but I’m not sure) and had some very interesting and productive voice and IRC meetings. Back then I had no real idea of how different languages could be… or how complex (take a look at Ukrainian pluralization rules).
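
To give an idea of what “complex” means here: the cardinal plural rule for Ukrainian picks one of three forms depending on the number, roughly like this (simplified from the CLDR rules, so treat it as a sketch):

    function ukrainianPluralForm(n) {
      const mod10 = n % 10, mod100 = n % 100;
      if (mod10 === 1 && mod100 !== 11) return "one";                              // 1, 21, 31, ...
      if (mod10 >= 2 && mod10 <= 4 && (mod100 < 12 || mod100 > 14)) return "few";  // 2-4, 22-24, ...
      return "many";                                                               // 0, 5-20, 25-30, ...
    }

    [1, 2, 5, 11, 21, 22, 25].forEach(n => console.log(n, ukrainianPluralForm(n)));
    // 1 one, 2 few, 5 many, 11 many, 21 one, 22 few, 25 many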

I invested about 2 years into the project and it gave me something meaningful to do during my civil service (apart from running errands, fixing light bulbs and having to see old people wither and eventually die). I even had the chance to visit some Rails conferences and get to know some interesting people. But after my civil service time grew scarce and I had to end the project. By then a lot of the functionality was already covered by the official i18n plugin created by our working group.

But even so it still was a difficult decision. In essence it was the choice between founding a company for professional Rails development or going to a university to study for a few years. I took the latter because by then I knew that there were aspects of programming I could never explore in a company. There you can only experiment in the narrow scope of a project and at the end of the day you need something you can ship. You don’t have the freedom to really question what you’re doing and to look at it as a whole. I enjoyed the project immensely and learned a lot but ultimately the drive to explore deeper aspects of programming won out.

1st HelionWeb server web hosting service and website

Status: finished Tags: apache, sql, php, shellscripting, administration, organization, webdesign, html, css

When Ruby on Rails became popular it was difficult to find a web hosting service to run Rails applications. However a good friend of mine was an avid system administrator and my brother knew how to tune systems to run 24/7. So one day my friend came up with the idea to start our own web hosting service. A lot of software was already available to get the job done (in this case SysCP) and previously I had often had trouble with the limited software stacks provided by hosting services. So we set out to build our own server.

The hardware was a burn-in tested 800 MHz system with a few hundred megabytes of RAM. My brother expertly modified the case with a drill to avoid any heat build-up and optimize the air flow. We even invested into a special server network card. It was the most expensive component of the entire thing. Funnily enough that was the only component that ever failed and had to be replaced. With a blank Debian Linux the speed of the system was actually quite impressive compared to what we were used to from hosting services.

On the software side the system was basically a more or less complex Apache2, MySQL and PHP setup combined with a SubVersion and a mail server. Most of the configuration was generated by the SysCP control panel. But most importantly for me we had full control over the Apache2 webserver. I could finally run my Ruby on Rails applications the way I wanted, without the need to buy an expensive extra server for them (no cheap virtual hosting back then either).

All in all it took us several weeks to put it all together, make it robust and test it thoroughly (including automatic backups and recovery). Then we put the server into a data center in Frankfurt and switched it on. Quite an unspectacular climax of the project. The server ran almost flawlessly for about 3 years until we replaced it with the next server. The only hardware incident being the network card breaking down. From then on I could pretty much host anything I wanted and this allowed me to do many things with Ruby on Rails that would have been very difficult otherwise.

After we put the server online I created a simple website for it. Mostly to give our customers easy access to webmail, phpMyAdmin and the control panel. It also contained general information, a totally unnecessary news system and (for whatever reason) a SubVersion tutorial with comments. I used that website to test out transparent PNGs in IE6. What worked, what broke and what bugs they caused (e.g. unclickable links).

But what I remember most from the project is the incident handling. When the server stops working you really have to resolve the issue and get it back up. Others rely on your system. Even if that means dropping whatever you were doing and driving for several hours to the data center to replace some hardware (fortunately it never came to that). Lucky for me my friend took care of the mail server side and SPAM related stuff. But we had our fair share of funny scripts being uploaded or software that was pretty much insecure by design (take your pick…).

I already had deep respect for good system administrators (unfortunately not common among programmers) but during that time I experienced these challenges first hand. Admins really are the unsung heroes of modern times, making things work that (as a programmer) simply make you bang your head against a brick wall.

Pictures

Arkanis Development v1 personal homepage

Status: finished Tags: arkanis-development, webdesign, rails, html, css

The first incarnation of my website and a basic blog with posts and comments. While work on the design started as early as September 2005 and then lay dormant for many months, the "real" website was created in a few days. It wasn't much more than a repurposed Ruby on Rails application created while learning Rails.

12FAW class forum phpBB forum admin

Status: finished Tags: php, administration, organization

In the last year of my apprenticeship as a programmer my entire school class consisted of programmers. In a German apprenticeship you do regular work for a company but also spend 1 week per month at a school to supplement your practical experience with theoretical knowledge. Anyway, as you can imagine with 30 programmers in one room it wasn’t a question of whether we needed a website but who could host it. I ended up being the one and took care of maintenance and moderation for over a year until the end of my apprenticeship.

DHLP page prototype design, phpBB template and wiki

Status: abandoned Tags: webdesign, html, css, php, regexp

This was one of those projects where someone asks you for some feedback, you see a solution in front of your inner eye and start to build the thing. A friend asked for feedback about a redesign of DHLP (a German Half-Life community). I liked the idea of a Half-Life 2 themed forum and created a quick prototype design.

This was the ideal pretext to test out a “flexible design”: a design where all sizes are relative to the user’s font size. When the user changes the font size so does the size of the entire site. Back then pretty much no browser except Opera had a proper page zoom, instead they scaled the font size. Hence a flexible design would allow most users to zoom the page in a useful way. That worked pretty well and the design was targeted at IE 5.0 and other newer browsers. And creating rounded corners wasn’t a simple thing back then, especially since you had to avoid some nasty IE bugs.

After that I kind of got carried away and started to create a phpBB theme based on the design. The funny part here was to first build a complete XHTML template that used appropriate HTML tags instead of tables or a div soup. For some reason I also started to build a wiki to finally test some more complex regular expressions to parse BBCode. All in all a pretty nice project. I don’t remember why but with time the project lost traction and somehow simply stopped. I regret not finishing this one because it was almost done (down to themed forum icons).

Pictures

ZGR website and local gaming community

Status: maintained Tags: webdesign, html, css, php, sql, organizing

One of the projects, or rather meta-projects, I spent the most time on. It started out as a phpBB forum for our local gaming community, grew into a quest to build yet another (my first) web framework and ultimately into more separate sub-projects than I’m bothering to count. Later on I also did pretty taxing management stuff to organize our quarterly LAN parties and keep them fun for most of us (albeit often not for me).

Technology wise it started out as a phpBB theme but quickly grew into an event organization system based on phpBB. This also gave rise to my very first web framework. In retrospect I spent a year taking the core ideas of phpBB and rewriting them according to my own flavor and with added sugar coating. After that I tried to build a website with it… and found out that it was an almost unusable mess of “architecture”. That’s what I got for building systems that should be useful but didn’t solve a specific problem.

When Ruby on Rails came into vogue I spent another year rewriting the thing in Rails. Another year making the problem fit the framework… needless to say that it didn't pan out. I learned a lot during these projects (PHP, SQL, Ruby, Rails, complex HTML and CSS) but most importantly that I have to look at a problem properly. If I don’t do that frameworks will likely become a problem, not a solution. I still explore that frontier 15 years later…

Anyway, the most interesting part of ZGR was the social aspect. We were young people from many different parts of society and with different ways to play games. The LAN parties of the first year felt more like education sessions where some people taught others how to properly install and maintain their system and hardware. At first I even had a stock of network cards because people regularly turned up without one (they weren’t on-board back then). It was a great time with a spirit of comradeship, especially when the German mass media did another witch hunt and portrayed gamers as rampaging maniacs training to kill you (which we were not, in case you’re too old or too young to know).

Somehow I slipped into being the manager and organizer of the community. I’m not really sure how that happened but I took care of finances, inventory of shared hardware (switches, cables, etc.) and conducting meetings (yep…). Don’t get the wrong idea, I didn’t do all the stuff. Every event was a group effort but I somehow ended up coordinating the whirlwind of activity and fun a LAN party is. It was quite an experience even if (sadly) often a stressful one. At first I tried to stick to plans but soon realized that planning doesn’t work for humans that want to have fun. So it soon turned into walking a tightrope between planning and improvisation. For example we had to ban excessive drinking but still tolerated it when it didn’t hurt (and figuring out which is which is a difficult call). Also people wanted to play big matches and tournaments but everyone had different preferences and wanted to play different games. Getting everyone together, talking to them to get them into the mood (or at least to participate for others’ sake) wasn’t easy. Even getting everyone to install a game in the correct version and working state before a major match or tournament was no small social and technological challenge.

I think we did pretty well, especially since those were rocky years for some of us. Many companies I’ve seen or worked at since then were way more chaotic and unfocused. As people moved out of the region, either to work or to study, the activity ebbed away. Today it’s just a forum with an occasional post here and there. But still a time I wouldn’t want to miss.

Callsystem web-based helpdesk and issue tracking software

Status: finished Tags: webdesign, html, css, coldfusion

One of my first commercial projects that wasn’t a simple website. The company I was doing my apprenticeship at needed something to coordinate its technicians and I ended up writing it. And learning ColdFusion on the way.

Over the 2½ years of maintenance and occasional development it grew into a fully-fledged issue tracker (comments, attachments, etc.), got automatic issue pooling and assignment, statistics and some project management stuff. I really enjoyed working with the technicians because you could easily understand their problems and actually solve them. From today’s perspective it was all pretty rudimentary but it delivered what was needed.

DM-Race and DM-SpaceDome UT2003 and UT2004 level design

Status: abandoned Tags: leveldesign, gameplay

Besides programming, technical drawing and CAD stuff was my second big hobby back then. So no surprise that I spent quite a lot of time fiddling around with Worldcraft and UnrealED. After all, paper or a CAD program can’t give you the satisfaction of walking around in the world you created.

I did some small and large maps to test out both of them. The most ambitious was reconstructing several buildings from plans. A friend and I got hold of some plans, he built them in UnrealED 2 (UT99) and I did in Worldcraft (Counter-Strike). After several weeks I rotated an entire building and the Half-Life engine finally broke down completely (visibility calculations of 8 hours or longer, among other things). Needless to say this stopped that particular project for me and from that day on I only worked in UnrealED 2.

When UT2003 was released we noticed a strange asset: a car with a rocket launcher. You could place it and drive around with it. So just for fun I created a racing map for one of our many LAN parties during that time. People seemed to like it so I started to create a more complex map. This time it was to be an asteroid base orbiting a ringed planet, complete with a garden dome, hangar and a docked space ship.

The engine must have hated me for it. I think I touched most features UT2003 and later UT2004 provided and played around with them: more detailed terrain, creative physics collisions, static meshes (they were new back then), particle systems, fluid surfaces, materials and so on. I learned quite a lot. But while it was a 1½ year journey of technical discovery, I was lacking in the gameplay department. The more I read about it later in the project the more I realized how badly designed the map was. I also started to learn Maya to build static meshes but that took extreme amounts of time.

Realizing that seeing the project through would need way more time than I had, I decided to stop working on it. Turned out that this was the right decision for me. With later games the level of detail and effort needed to create maps exploded and I’m glad I spent that time programming. Nevertheless I had a fun time and learned a lot.

Pictures

Books school library management program

Status: finished Tags: qbasic, dos

A happy little DOS program to manage book lending at the school library. A friend maintaining the school library asked me to write it but it had to be finished within one week because of a deadline. The library only had a 10 MHz i386 computer with 1 MiByte of RAM running DOS. So that was the target platform.

My brother provided me with a similar system so I could program under realistic conditions. We yanked out all but one RAM module and flicked off the turbo switch. Even back then he had a few PCs to spare…

So I took QBASIC and went off to write the thing. Creating the GUI and database from scratch was an interesting experience. The database was an HTML-like text file, because you code what you know, right? The GUI was mostly drawn with ASCII symbols, but buttons, text boxes and keyboard navigation were way simpler to implement than expected. Each form was just a simple array of structures with an index to denote the focused element.
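
Something like the following sketch (in JavaScript rather than QBASIC, and with invented field names): the form is an array of widget records plus the index of the focused one, and key presses are routed to whatever that index points at.

    const form = {
      focus: 0,
      widgets: [
        { type: "textbox", label: "Title",    value: "" },
        { type: "textbox", label: "Borrower", value: "" },
        { type: "button",  label: "Save" },
      ],
    };

    function handleKey(form, key) {
      if (key === "Tab") {                                    // move the focus to the next widget
        form.focus = (form.focus + 1) % form.widgets.length;
        return;
      }
      const widget = form.widgets[form.focus];
      if (widget.type === "textbox") widget.value += key;     // the real thing also handled backspace, arrows, ...
      else if (widget.type === "button" && key === "Enter") console.log("pressed", widget.label);
    }

    ["H", "i", "Tab", "M", "e"].forEach(key => handleKey(form, key));
    console.log(form.widgets[0].value, "/", form.widgets[1].value);   // Hi / Me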

Memory was always a concern since you only had about 700 KiByte available in DOS. The QBASIC interpreter needed about 350 KiByte of that so you didn’t have much to work with. But it turns out you can still do a lot with it and it was more than enough.

Not that I really understood it back then. I didn't even know malloc() or the details of memory management. Boy, what a revelation malloc() was a few years later. All I had was the documentation in the help of QBASIC (no Internet yet, remember?). But I learned that just the combination of arrays and structs can be incredibly powerful. You can implement a lot of stuff without much code that way. After a few days I somehow discovered QuickBASIC and it had a much deeper documentation to learn from (QBASIC is a stripped down QuickBASIC). It could also compile the program into a binary.

All in all the program wasn't terribly efficient. It could only handle a few hundred books because it had to keep everything in memory. But I finished it on time. It was fast and easy to use and did the job. They used it for several years and only reluctantly replaced it when the PC broke down and they were given a Windows PC. Unfortunately the program wasn't usable in the Windows DOS prompt (which was a pretty bad DOS emulator back then) so they had to use a program that would run on Windows. It now runs excellently in DOSBox though. :)

Pictures

DevTipps personal homepage

Status: finished Tags: webdesign, html, css, javascript

My first personal homepage to go online. Back in the days when design meant using colors that didn't burn your eyes out. I used it mostly to publish some of the programs I’d written and to collect quotes.

At first I also published some information about Windows HTT files and how you could edit them and use special directory names to circumvent access restrictions. Later on I removed them because I thought “Who would be interested in that?”. Who would be interested in security? What a naive thought back then.

It’s also interesting that the website still works 18 years later. Turns out HTML, simple CSS and framesets aged pretty well. On older projects (e.g. school website) I had to create Netscape and IE specific variants and the Netscape stuff doesn’t work at all now. Those were the days of the first browser wars…

Pictures

Installation and maintenance of school PC rooms

Status: finished Tags: administration, windows

While I was at school the PC exercise rooms were in a pretty sorry state. The hardware and software didn't work properly and that caused problems during exercises. Especially since back then most teachers weren't accustomed to computers.

Thanks to a few very nice teachers, two friends and I were allowed to properly set up several PC exercise rooms (about 2×15 PCs). We installed Windows 95 on all of them and maintained them for about 2 years. Since some of the rooms used BNC cables for networking this sometimes meant chasing down the BNC terminators…

All in all a quite interesting experience. Especially as a young pupil, where you're not used to seeing the other side of the table: the people that try to teach and keep the school running.

The End. Or the beginning. Depends on how you look at it. Anyway, if you arrived here be proud of your reading stamina (or scrolling skill) and have a cookie. Thanks for reading. :)