
Effective OpenAL with LWJGL 3


Jesus Bloody Christ it’s been a while.

So, a lot of you are likely interested in developing on Java with LWJGL3 instead of LWJGL 2.9.*; as you should be. LWJGL3 supports a lot of modern industry trends that older versions did not, such as multi-monitor support without backflipping through flaming hoops, or basically anything involving GLFW. It’s still in beta, I know, but it’s a solid piece of work, and the team behind it is dedicated enough to make it a reliable, standing dependency for modern projects.

Except for every now and then, when it happens to be missing some minor things. Or, more importantly, when there’s a dearth of documentation or tutorials on a new trick you’re pulling.

I can contribute, at least in part, to both of those.

OpenAL is the audio world’s equivalent to OpenGL; it’s a sophisticated and sleek interface to sound hardware. Many common effects and utilities, such as 3D sound, are built into it directly; and it interfaces sublimely with code already designed for OpenGL. Additionally, it’s also a very tight interface that does not take long at all to learn.

In the past, I suggested using JavaSound for Java game audio; it’s also a tight API, but it lacks these features. Most major audio filters have to be built into it rather directly, often by your own hand, and there’s no official guarantee of hardware optimization. However, what LWJGL3’s OpenAL interface now lacks can easily be supplemented by readily available JavaSound features, such as the audio system’s file loader.

This entry covers, step by step, how to do such a thing.

Let’s start with a basic framework. I’ve tried to keep a balance between minimal dependencies and staying on-topic, so I’ll suggest that you have both LWJGL3 (most recent version, preferably), and Apache Commons IO, as dependency libraries.

class Lesson {
    public Lesson() throws Exception {
        //Start by acquiring the default device
        long device = ALC10.alcOpenDevice((ByteBuffer)null);

        //Create a handle for the device capabilities, as well.
        ALCCapabilities deviceCaps = ALC.createCapabilities(device);
        // Create context (often already present, but here, necessary)
        IntBuffer contextAttribList = BufferUtils.createIntBuffer(16);

        // Note the manner in which parameters are provided to OpenAL:
        // an identifier, then its value, with a 0 terminating the list
        contextAttribList.put(ALC10.ALC_REFRESH);
        contextAttribList.put(60);

        contextAttribList.put(ALC10.ALC_SYNC);
        contextAttribList.put(ALC10.ALC_FALSE);

        // Don't worry about this for now; deals with effects count
        contextAttribList.put(EXTEfx.ALC_MAX_AUXILIARY_SENDS);
        contextAttribList.put(2);

        contextAttribList.put(0);
        contextAttribList.flip();

        //create the context with the provided attributes
        long newContext = ALC10.alcCreateContext(device, contextAttribList);

        if(!ALC10.alcMakeContextCurrent(newContext)) {
            throw new Exception("Failed to make context current");
        }

        //register the device capabilities for this thread
        AL.createCapabilities(deviceCaps);

        //define listener
        AL10.alListener3f(AL10.AL_VELOCITY, 0f, 0f, 0f);
        //orientation takes six floats: an "at" vector and an "up" vector
        FloatBuffer orientation = BufferUtils.createFloatBuffer(6)
                .put(new float[]{0f, 0f, -1f,  0f, 1f, 0f});
        orientation.flip();
        AL10.alListenerfv(AL10.AL_ORIENTATION, orientation);

        IntBuffer buffer = BufferUtils.createIntBuffer(1);
        AL10.alGenBuffers(buffer);

        //We'll get to this next!
        long time = createBufferData(buffer.get(0));

        //Define a source
        int source = AL10.alGenSources();
        AL10.alSourcei(source, AL10.AL_BUFFER, buffer.get(0));
        AL10.alSource3f(source, AL10.AL_POSITION, 0f, 0f, 0f);
        AL10.alSource3f(source, AL10.AL_VELOCITY, 0f, 0f, 0f);

        //fun stuff
        AL10.alSourcef(source, AL10.AL_PITCH, 1f);
        AL10.alSourcef(source, AL10.AL_GAIN, 1f);
        AL10.alSourcei(source, AL10.AL_LOOPING, AL10.AL_FALSE);

        //Trigger the source to play its sound
        AL10.alSourcePlay(source);

        try {
            Thread.sleep(time); //Wait for the sound to finish
        } catch(InterruptedException ex) {}

        AL10.alSourceStop(source); //Demand that the sound stop

        //and finally, clean up
        AL10.alDeleteSources(source);
        AL10.alDeleteBuffers(buffer);
        ALC10.alcDestroyContext(newContext);
        ALC10.alcCloseDevice(device);
    }

The beginning is not unlike the creation of an OpenGL interface; you need to define an OpenAL context and make it current for the thread. Passing a null byte buffer to alcOpenDevice will provide you with the default device, which is usually what you’re after. (It is actually possible to interface with, say, multiple sets of speakers selectively, or the headphones instead of the speaker system, if you would like; but that’s another topic.)

Much like graphics devices, every audio device has its own set of capabilities. We’ll want a handle on those, as well. It’s safe to say that if a speaker can do it, OpenAL is capable of it; but not all speakers (or microphones) are created the same.

After this, OpenAL will want to know something of what we’re expecting it to manage. Note that it’s all passed over as a solid int buffer. We’re providing it with a notion of what features it will need to enact, or at least emulate; with a sequence of identifiers followed by parameters, terminated with a null. I haven’t begun to touch all that is possible here, but this attribute list should be enough for most uses.

After that, create the context, make it current, check to see that it didn’t blow up in your face, and register the capabilities. (Feel free to play with this once you’ve got the initial example going.)

So, before I get to the part where JavaSound comes in, let’s start with the nature of how OpenAL views sound. Sound, in its view, has three components: a listener, a source, and an actual buffer.

The listener would be either you or your program user; however, the program would want to know a little about your properties. Are you located somewhat to the left or right? Are you moving (or virtually moving)? I usually set this first, as it is likely to be constant across all sounds (kind of like a graphics context).

Next, we have a method of my own creation that builds and registers the audio file. Forgive me for the delay, but that’s where JavaSound’s features (in the core JDK) come in, and I’m deferring it to later in the discussion. You will note that the audio buffers have to be registered with OpenAL, as it needs to prepare for the data. There’s a solid chance that you will have sound-processor-local memory, much like graphics memory, and it will have to be managed accordingly by that processor before you can chuck any data at it.

Let’s look at that audio buffer creator.

    private long createBufferData(int p) throws UnsupportedAudioFileException, IOException {
        //shortcut finals:
        final int MONO = 1, STEREO = 2;

        AudioInputStream stream = AudioSystem.getAudioInputStream(
                Lesson.class.getResource("I Can Change — LCD Soundsystem.wav"));

        AudioFormat format = stream.getFormat();
        if(format.isBigEndian()) throw new UnsupportedAudioFileException("Can't handle Big Endian formats yet");

        //determine the matching OpenAL format for the stream
        int openALFormat = -1;
        switch(format.getChannels()) {
            case MONO:
                switch(format.getSampleSizeInBits()) {
                    case 8:
                        openALFormat = AL10.AL_FORMAT_MONO8;
                        break;
                    case 16:
                        openALFormat = AL10.AL_FORMAT_MONO16;
                        break;
                }
                break;
            case STEREO:
                switch(format.getSampleSizeInBits()) {
                    case 8:
                        openALFormat = AL10.AL_FORMAT_STEREO8;
                        break;
                    case 16:
                        openALFormat = AL10.AL_FORMAT_STEREO16;
                        break;
                }
                break;
        }

        //load data into a byte buffer
        //I've elected to use IOUtils from Apache Commons here, but the core
        //notion is to load the entire stream into the byte array--you can
        //do this however you would like.
        byte[] b = IOUtils.toByteArray(stream);
        ByteBuffer data = BufferUtils.createByteBuffer(b.length).put(b);
        data.flip();

        //load audio data into appropriate system space....
        AL10.alBufferData(p, openALFormat, data, (int)format.getSampleRate());

        //and return the rough notion of length for the audio stream!
        return (long)(1000f * stream.getFrameLength() / format.getFrameRate());
    }
}

We’re hijacking a lot of the older JavaSound API utilities for this. OpenAL, much like OpenGL, isn’t really “open”, nor is it technically a “library”. So, having something around for handling audio data is helpful, and why bother writing our own when it’s already built into the JDK?

For JavaSound, you work with either Clips, or (more frequently) AudioInputStreams. You can read most audio file formats directly via AudioSystem.getAudioInputStream(…); in this case, I’ve elected to use a WAV format of LCD Soundsystem’s “I Can Change”, because James Murphy is a god damned genius. However, you can use anything you would like; to get it to work with this just drop it in the same source directory.

Next up, grab the format of the sound with AudioInputStream.getFormat(). This will provide you with a lot of valuable information about the stream. If it’s a big endian stream (which most wave files are not), you might need to convert it to little endian or make proper alterations to OpenAL. I’ve glossed over this, as endian-ness is not really a part of the tutorial and there are plenty of good byte-management tutorials out there.
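For the curious, the byte-management involved is small: OpenAL’s core formats are little endian, so a big-endian 16-bit stream can be fixed by swapping each sample’s byte pair. A minimal sketch (the helper name is my own invention):

```java
import java.util.Arrays;

public class EndianSwap {
    // Swap each adjacent byte pair: 16-bit big-endian PCM -> little endian.
    public static byte[] toLittleEndian16(byte[] bigEndian) {
        byte[] out = bigEndian.clone();
        for (int i = 0; i + 1 < out.length; i += 2) {
            byte tmp = out[i];
            out[i] = out[i + 1];
            out[i + 1] = tmp;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] sample = {0x12, 0x34, 0x56, 0x78}; // two big-endian samples
        System.out.println(Arrays.toString(toLittleEndian16(sample)));
    }
}
```

For 24-bit or other widths the swap window changes accordingly; this only covers the common 16-bit case.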

I’ve elected to use format to check for the mono/stereo status (more are actually possible), and whether the sound is 8-bit or more frequently 16-bit. (Technically 32- or even 64- bit sound is possible; but there is actually a resolution to the cochlea of the ear, and you’re not going to bump into that outside of labs with very funny looking equipment. Even Blu-ray doesn’t go above 24-bit. Seriously, there’s generally just no point in bothering.)

Afterward, we load the stream into a byte array (I’m using IOUtils for this for brevity, but you can do it however you like), and the byte array into a ByteBuffer. Flip the buffer, and punch it over to OpenAL, which will take care of the rest of the work with it. Afterwards, we will eventually need the length of the audio stream, so calculate it as shown and send it back to the calling method.
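The put-then-flip pattern, and the length arithmetic, are easy to see in isolation (the frame counts here are illustrative, standing in for a real stream):

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        byte[] b = {10, 20, 30};
        // put(...) advances the position; flip() rewinds it to 0 so the
        // consumer (OpenAL, in the article) reads from the start
        ByteBuffer data = ByteBuffer.allocateDirect(b.length).put(b);
        data.flip();
        System.out.println(data.remaining()); // 3

        // length in ms = 1000 * frames / (frames per second)
        long millis = (long) (1000f * 441_000 / 44_100f);
        System.out.println(millis); // 10000, i.e. ten seconds of audio
    }
}
```

Forgetting the flip() is the classic bug here: OpenAL would then see a buffer with zero bytes remaining.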

After the buffer’s been created and the length of it is known, we’ve got to create a source for it! This is where most of the cooler built-in effects show up. alGenSources() creates a framework for the source; alSourcei(source, AL10.AL_BUFFER, buffer.get(0)) ties it to the sound buffer. You’ll also see that I set up AL_GAIN and AL_PITCH, which are fun to play with.

You’re almost done!

To actually play the buffer, you use the source. alSourcePlay(source) starts it. After that, I have the Thread sleep for the calculated length of the sound, just so we have time to hear it. At the end, I call alSourceStop(source) to demand an end to the source.

Lastly, I delete all sources and buffers, destroy the context, and close the device; this is very low-level access. You now have everything you need to load audio into your games and programs, and if you happen to bump into an SPI for a preferred format, that will be enough to get it going on OpenAL as well.


Posted by on July 4, 2016 in Java, Programming



Apache Commons DecompositionSolvers


Jesus it’s been too long since I got back to this!

Anyway, right, the DecompositionSolver.


Intro / Why-I-Need-To-Worry-About-This

Linear algebra exists for a reason; namely, us. Suppose we’re attempting to find the coordinate values, which under a certain transform, become a specific value. Let’s keep it simple and call it:

 x + 2y + 3z = 1
2x + 4y + 3z = 2
4x           = 3

As I’m sure you can remember from grade school, you have the same number of equations as unknowns, so it is almost certainly solvable. We just subtract two of the first equation from the second, four of the first equation form the third, four of the second from the third, one of the second from the first, and a quarter of the third from the first. Then we maybe divide the third by eight and the second by three, and presto,

x = 3/4
y = 1/8
z = 0

Unfortunately, as programmers, we both know that this is much easier done in practice than in theory; and when you’re automating a task, a working theory is the only thing that really counts.
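To make that working theory concrete before reaching for a library, here is a plain-Java sketch of the elimination itself: naive Gaussian elimination with partial pivoting, no dependencies. (This is my own illustration, not the Commons implementation.)

```java
import java.util.Arrays;

public class GaussDemo {
    // Solves A x = b by forward elimination with partial pivoting,
    // then back-substitution. Mutates its arguments.
    public static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            // pivot: pick the row with the largest entry in this column
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            double[] rowTmp = a[col]; a[col] = a[pivot]; a[pivot] = rowTmp;
            double bTmp = b[col]; b[col] = b[pivot]; b[pivot] = bTmp;
            // eliminate everything below the pivot
            for (int r = col + 1; r < n; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                b[r] -= f * b[col];
            }
        }
        // back-substitute up the triangle
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double sum = b[r];
            for (int c = r + 1; c < n; c++) sum -= a[r][c] * x[c];
            x[r] = sum / a[r][r];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2, 3}, {2, 4, 3}, {4, 0, 0}};
        double[] b = {1, 2, 3};
        System.out.println(Arrays.toString(solve(a, b))); // x = 3/4, y = 1/8, z = 0
    }
}
```

Run against the system above, it reproduces x = 3/4, y = 1/8, z = 0. The library versions below do the same job with far better numerical care.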

So, those of you who have already taken linear algebra (quite possibly all of you) may be familiar with a much easier way of representing this problem:

┌1 2 3┐┌x┐   ┌1┐
│2 4 3││y│ = │2│
└4 0 0┘└z┘   └3┘

A decomposition basically solves this, through a sequence of steps on both sides that reduces the original matrix to an identity matrix, while having the right-hand matrix undergo the same operations. This is commonly written as an augmented matrix, like so:

┌1 2 3│1┐
│2 4 3│2│
└4 0 0│3┘

Matrix reduction is a heck of a lot more straightforward than the nonsense I spouted a few paragraphs back, though going into its details is a bit off topic here. Our final matrix, after the reduction, looks like this:

┌1 0 0│3/4┐
│0 1 0│1/8│
└0 0 1│ 0 ┘

How Do We Do This in Java?

Not just Java, actually; this is specifically about the Apache Commons Math3 decomposition solver interface.

One of the tricks with reduction is that there are a lot of different, equally effective, ways to go about it; and like any other algorithm, the efficiency depends, in large part, on the initial state of your matrix. My personal favorite is the LU Decomposition. (Or, if you prefer a link that isn’t a video, look here.)

First I recommend making a Maven project out of your Java project, presuming that it isn’t already fitting that form factor. Afterwards, open up pom.xml, and add this:

    <dependencies>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-math3</artifactId>
            <version>3.5</version>
        </dependency>
    </dependencies>

right after the close of the build tag. Your project is now pulling Apache Commons Math3 classes from across the internet. Later on, you may want the version number to be a bit higher; for now, I’m using version 3.5.

So, you’ll note that you have access to a host of new classes, all in some subpackage of org.apache.commons.math3. Import org.apache.commons.math3.linear.* into your class file.

We can solve the above problem by creating a RealMatrix of the initial matrix, potentially like so:

RealMatrix matrix = new Array2DRowRealMatrix(new double[][]{
    {1.0, 2.0, 3.0},
    {2.0, 4.0, 3.0},
    {4.0, 0.0, 0.0}
});
But don’t get me wrong, there are literally dozens of ways to create a RealMatrix.

Next, create a RealVector, describing the other side of the equation, perhaps like so:

RealVector vector = new ArrayRealVector(new double[]{1.0, 2.0, 3.0});
We now have a matrix and vector representation of the two sides of our equation.

Working with RealMatrix and RealVector

If you’re an experienced programmer, you probably expect some kind of Command Pattern to show up next. It’s certainly what I would do, if I needed to duplicate the exact operations in the exact order on more than one piece of base data. Fortunately, something like it has already been implemented by Apache.

If you look up the Apache Commons Math3 javadocs, you’ll notice that while RealMatrix has a lot of handy operations, they generally just involve polling for data, not actually operating on it. Commons has made the wise move to encapsulate operations in their own classes, rather than just their own methods. There are dozens of other classes, such as MatrixUtils (remember that one!), which both generate and operate on RealMatrix and RealVector classes.

In this instance, turn to DecompositionSolver. It’s meant for tasks just like our own, and there are many subclasses. As I said, my preference is LUDecomposition, but that is only capable of handling square matrices. Since our matrix is square, that’s fine; in other cases when your matrix doesn’t fit the profile, look through EigenDecomposition, SingularValueDecomposition, or some other utility.

For LUDecomposition, we’ll want to do something like this:

DecompositionSolver solver = new LUDecomposition(matrix).getSolver();

Most of the work happens in that one initialization: LUDecomposition doesn’t just store the matrix as a property; it determines from it the exact sequence of operations necessary to turn it into an identity matrix.

Once you have your solver, you can get your final right-hand vector via:

RealVector solution = solver.solve(vector);
which will provide you with:

┌3/4┐
│1/8│
└ 0 ┘

Final Source Code

Here’s a working example of how such a program might work.

 package oberlin.math3;

import java.io.*;
import java.util.*;

import org.apache.commons.math3.linear.*;

public class MatrixReducer {

    public static void main(String...args) {
        new MatrixReducer();
    }

    public MatrixReducer() {
        try(BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(System.out));
                Scanner scanner = new Scanner(System.in)) {
            writer.write("\nEnter first row of three numbers: ");
            writer.flush();
            RealVector vector1 = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});

            writer.write("\nEnter second row of three numbers: ");
            writer.flush();
            RealVector vector2 = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});

            writer.write("\nEnter third row of three numbers: ");
            writer.flush();
            RealVector vector3 = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});

            //create matrix
            RealMatrix matrix = MatrixUtils.createRealIdentityMatrix(3);
            matrix.setRowVector(0, vector1);
            matrix.setRowVector(1, vector2);
            matrix.setRowVector(2, vector3);

            //get other side
            writer.write("\nEnter vector on right side (3 entries): ");
            writer.flush();
            RealVector vector = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});

            DecompositionSolver solver = new LUDecomposition(matrix).getSolver();
            matrix = solver.solve(matrix);
            vector = solver.solve(vector);

            writer.write("Solution: \n");
            writer.write("┌" + matrix.getEntry(0, 0) + " " + matrix.getEntry(0, 1) + " "
                    + matrix.getEntry(0, 2) + "┐┌x┐   ┌" + Double.toString(vector.getEntry(0)) + "┐\n");
            writer.write("│" + matrix.getEntry(1, 0) + " " + matrix.getEntry(1, 1) + " "
                    + matrix.getEntry(1, 2) + "││y│ = │" + Double.toString(vector.getEntry(1)) + "│\n");
            writer.write("└" + matrix.getEntry(2, 0) + " " + matrix.getEntry(2, 1) + " "
                    + matrix.getEntry(2, 2) + "┘└z┘   └" + Double.toString(vector.getEntry(2)) + "┘\n");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Posted by on June 30, 2015 in Java, Programming



The NIO.2 Watcher

So, I’ve been working on a side project involving the Builder tutorial. It roughly (not entirely, but roughly) works out as a machine-operated interpreter, that is, code altered by machine before being translated. After that it does something even more awesome, but it’s only capable of triggering the compilation, after alteration, through a utility that isn’t as well known as it should be.

The Watcher Utility

As of Java 7, we got the NIO.2 classes. These included Path (which most of you are probably familiar with), Files, FileSystem, asynchronous channels, and a host of other goodies. One of them was the Watch Service API.

What Watch ultimately amounts to is a device that can trigger an event any time an arbitrary subset of data is altered in some way. The easiest possible example is monitoring a directory for changes, but this is, gloriously, not exclusive. In classic Java nomenclature, one might think of it as a sort of PathEventListener, in a way; but it’s capable of a bit more than that particular name implies. It doesn’t have to be associated with Paths, and unlike most listeners, it’s less about monitoring for user-generated interrupts, and more about monitoring for system-wide circumstances, including secondary effects.

Using a Watcher

Watchers keep internal collections of keys, each one associated with a source object. This registration is typically located on the source object, at least directly. The best, and most correct, way to do this is through direct implementation of the Watchable interface. Many JDK classes, such as Path, already implement this. Once implemented, you would use the method:

Watchable.register(WatchService watchService, WatchEvent.Kind<?>... events)

This method registers all events, of the specified types, on the provided WatchService object. Every time one of them occurs, the key is flagged as signaled, and in its own time the WatchService will retrieve the data from that key and operate.

Note that a Path can be many things. It could be a path to a directory on your machine, which is of program concern. It could be a path to a printer tray, or a server, or even a kitchen appliance (think polling the status of an automated espresso machine). In this example, I will be showing a manner in which a directory path can register to be watched for alterations.

WatchEvent.Kind interface

This can be thought of, for old school Java programmers, as the class type of a Watch event. Most of the frequently used kinds are in java.nio.file.StandardWatchEventKinds, but as an interface, it is fully customizable. It only requires two methods to be overridden: WatchEvent.Kind.name(), which simply returns a String value representing the type of event; and WatchEvent.Kind.type(), which returns a Class that describes the location context of the event.

WatchEvent.Kind.type() may return null, and it won’t break anything; but after getting a feel for the results of StandardWatchEventKinds, you might consider implementing it. As an example, for ENTRY_CREATE, ENTRY_MODIFY, and ENTRY_DELETE, the context is a relative path between the Path being watched, and the item that has changed. (Knowing that a random item was deleted is of little if any use, without knowing which one.)
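A custom Kind takes only a few lines. Here is a hypothetical one for the espresso-machine scenario mentioned earlier, where the event context is a plain status String (the class and names are mine, purely for illustration):

```java
import java.nio.file.WatchEvent;

public class CustomKindDemo {
    // A hypothetical custom event kind; its context type is a status String.
    static final class StatusChanged implements WatchEvent.Kind<String> {
        @Override public String name() { return "STATUS_CHANGED"; }
        @Override public Class<String> type() { return String.class; }
    }

    public static void main(String[] args) {
        WatchEvent.Kind<String> kind = new StatusChanged();
        System.out.println(kind.name() + " / " + kind.type().getSimpleName());
    }
}
```

A Watchable representing the machine would pass this kind to register(…) exactly as Path passes the standard ones.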

Implementing a WatchService

Most of the WatchServices you are likely to use are stock in the JDK. I’m going to start with one of them; in a later blog, I’ll probably create one from scratch, but it really is better to start simple.

For the common case of monitoring a directory, FileSystem.newWatchService() covers everything you need. It is important to get a watcher for the correct type of FileSystem, though; as many of you know, Java is capable, as of version 7, of taking advantage of the numerous file system-specific capabilities. The safest way to do it is through:

WatchService watcher = FileSystems.getDefault().newWatchService();

But there may be many points in which you intend to grab a watcher from a file system of a specific, or even custom, type. This is fine, but be aware of the extra layer of debugging.

Afterward, each path can be registered with the watch service through its Path.register(…) method. Be certain to include every variety of WatchEvent.Kind that you want to watch for. It may be tempting to simply register for every single standard type every time, but I encourage you, as a matter of practice, to consider whether you’re really concerned about each Kind before including it. They do, technically, cost a small amount of system resources; and while it may not be noticeable for small projects, when you’re dealing with massive file hierarchies it can become a concern.

Polling for changes is mildly more complicated than it is with Listeners. The watcher must be polled for WatchKey objects. WatchKeys are generated when a watched alteration occurs. Each has a state, which at any moment is one of: ready, meaning the associated Watchable is valid but without events; signaled, meaning that at least one event has occurred and been registered with this WatchKey; or invalid, meaning that it is no longer sensible to consider the associated Watchable a candidate for events.

There’s more than one way to get the next signaled WatchKey, but one of the most efficient methods is WatchService.take(). This will always return a signaled WatchKey. It is a blocking method, so use it with that in mind; if no WatchKeys are yet signaled, it will wait until one is before returning.
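If blocking is a problem, WatchService also offers non-blocking and timed variants alongside take(). A quick contrast, on a freshly created service with nothing registered (so nothing can be signaled yet):

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.concurrent.TimeUnit;

public class PollDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        try (WatchService ws = FileSystems.getDefault().newWatchService()) {
            WatchKey immediate = ws.poll();                       // returns null right away
            WatchKey timed = ws.poll(100, TimeUnit.MILLISECONDS); // waits up to 100 ms
            System.out.println(immediate == null && timed == null); // nothing signaled
            // ws.take() would block here indefinitely, until a registered path changed
        }
    }
}
```

poll() suits a game-loop style check; take() suits a dedicated thread like the one below.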

Once you have a WatchKey, a secondary loop examines every sequential change that has occurred. (If you’re curious: if a WatchEvent occurs for a WatchKey that is already signaled, it is added to the stack and no other alterations are made; if it occurs while the WatchKey is ready, it initiates the stack and the WatchKey is flipped to signaled.) This is done via WatchKey.pollEvents(). For each event, you may examine the WatchEvent, and act on it accordingly.

After all is said and done, and the WatchKey has zero events left to parse, call WatchKey.reset(). This attempts to flip the WatchKey back to the ready state; if it fails (if the key is now invalid), the method returns false. This might signal, as an example, that the watched path no longer exists.


Any WatchService manager must be running continuously. The antipattern approach is to simply use a while-true block; but in general, it is less hazardous to make it its own thread.

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.WatchEvent.Kind;

public class DirectoryWatcher implements Runnable {

    private WatchService watcher;
    private volatile boolean isRunning = false;

    public DirectoryWatcher() throws IOException {
        watcher = FileSystems.getDefault().newWatchService();
    }

    /**
     * Begins watching provided path for changes.
     * @param path
     * @throws IOException 
     */
    public void register(Path path) throws IOException {
        //register the provided path with the watch service
        path.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);
    }

    @Override
    public void run() {
        isRunning = true;
        while(isRunning) {
            //retrieve the next WatchKey
            try {
                WatchKey key = watcher.take();
                key.pollEvents().stream().forEach(event -> {
                    final Kind<?> kind = event.kind();
                    if(kind != StandardWatchEventKinds.OVERFLOW) {
                        final Path path = ((WatchEvent<Path>)event).context();
                        System.out.println(kind + " event occurred on '" + path + "'");
                    }
                });
                if(!key.reset()) {
                    //the key should be valid now; but if it is not,
                    //then the directory was likely deleted.
                    isRunning = false;
                }
            } catch (InterruptedException e) {
                isRunning = false;
            }
            Thread.yield();
        }
    }

    public void stop() {
        this.isRunning = false;
    }
}
Simple enough, yes?

The register(…) method may be a little redundant; however, the run() method is where the meat is. WatchKeys are retrieved with WatchService.take(); afterward, in a stream, each WatchEvent associated with that key is looped through. (When an event is of type OVERFLOW, it usually means that data on the event has been lost; not optimal, but the best course of action here is to continue to the next key.)

In this instance, the event is simply reported to the terminal, but this lambda expression is where you would take arbitrary actions according to the event. It is also possible to use an external iteration to do this, if you need to change values or perform another non-lambda-kosher action.

After all events have been iterated through, WatchKey.reset() is called, and checked. In the event that it returns false, something has happened to our directory, and the thread has become a potential resource leak; so it is shut down automatically. Otherwise, the thread then yields to other threads, and repeats itself.

Here’s a small Main class that I’ve built to use this. A single path parameter will be the directory to monitor; or it will simply watch for $HOME/watchtest.

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Main {

    public static void main(String[] args) throws IOException {
        final String location = (args.length > 0) ? args[0] : 
            System.getProperty("user.home") + "/watchtest";
        final Path path = Paths.get(location);

        DirectoryWatcher watcher = new DirectoryWatcher();
        watcher.register(path);
        (new Thread(watcher)).start();

        //wait a set amount of time, then stop the program
        try {
            Thread.sleep(60_000); //one minute, as an example
        } catch (InterruptedException e) {
        }
        watcher.stop();
    }
}

Try running it, and making a few changes to your select directory. See what it does.

And That’s It!

The next real question is how to create your own WatchService; which is totally doable. Generally, though, it isn’t necessary. The next time I come back to this subject, I’ll be going over that, possibly starting with WatchEvent.Kinds. First, though, I need to get back to the project that I started this for, and I need to continue the Build Tool tutorial, so it might be a bit.

Good coding!


Posted by on March 11, 2015 in Java, NIO.2, Programming



So What the Hell Happened!?

Jesus Christ I need to get back to this blog.

Okay, long, in-depth story which will hopefully be funnier to read than it was to live through. Happy ending, I promise.

My wonderful lover has opted to buy us both new computers, starting with barebones kits. That’s not as big a deal to a physicist, and possibly also a witch, apparently. I just upgraded from something that ran well because I oiled it and cleaned it every single day (metaphorically) to something that would probably run well if I poured orange juice on it. However, first there was its construction.

Usually, I deal with latent static charge by simply contacting the metal chassis and grounding myself. In this case, given the exposure time, it seemed wise to go ahead and get one of the anti-static grounding straps for my wrist.

As a man who has built several complete devices transistor by transistor, including my share of audio equipment, this didn’t feel like it would be a big deal. The only issue with building a computer is that you have to subject yourself to the whims of the hardware designers, and operating system designers, when those whims may have nothing to do with what you’re accustomed to and, occasionally, don’t even make any sense. It went more like this. I’ll be brief.

1. Find metal hex nuts for case, install them. Line up motherboard, attach screws. Notice that you didn’t line it up right, and it isn’t grounded. Remove screws. Remove and move hex nuts. Line up properly. Attach screws. Notice that you once again have it out of alignment, detach screws, board, and nuts. Line everything up, double check each of the available screw holes. Screw it in, one by one. Properly attach motherboard, except for the fact that you were supposed to do the backplate first. Ignore freaking backplate.

2. Line up the processor properly, check. Double check that you’ve lined up the processor properly, check. Ease processor into socket. Notice that it went in a lot easier than last time, and pray to the computer building gods (who would that be? Brigid?) that you didn’t just break anything. Seal processor down with lever. Struggle to attach cooling unit. Struggle more. Attach argumentative cooling unit.

2 ½. Look around, try and find the fan plug. Not find the fan plug. Find something that looks like it might do the job, even though it has an extra pin. Connect. Double check that that actually makes any sense. Does make sense, great. Pop in memory. Attach faceplate cables. Completely overlook the faceplate fan.

3. Attach power supply to case. Attach power cables to motherboard, one by one. Wrap a motherload of electrical tape around the non-modular power supply’s dangling cables. Keep graphics card on standby. Spend an hour trying to figure out where the SSD is even supposed to go in this tower case. Finally find out, on account of girlfriend’s keen eye, install it. Notice that you just installed it backwards. Slide it out, install it again. Pop SATA 3 cables on.

~4. Plug in keyboard, mouse, monitor, turn it on, get to BIOS. Pat self on back. Select “optimize”, because what harm could that do? (Hint: Quite a bit, as it turns out.) On a whim, select option to search for hidden processor cores. Save changes. Restart. Notice that nothing is happening, save for the blue LED lamp, and you can’t even get to BIOS.

4. Punch self in kidney.

Well, to be fair, I slid my Hitachi 2TB from my old computer in at that point too. I knew I could boot. This mistake was actually made after I went back into BIOS settings, just to ensure that everything was in order, without doing nearly enough research.

3 again. Take deep, deep breath. Count to ten. Look up BIOS, try and find reset jumper; fail. With girlfriend’s assistance, because you know they don’t make them to come out easy, remove all power cables and pop the CMOS battery out. Push power button for thirty full seconds, drain all capacitors completely. Pop battery back in. (This is what you do when you can’t even find the jumper; it’s just as effective. Just don’t do it without good cause.) Pray to Morrigan, Celtic goddess of battle and change, for wisdom. Turn computer on, notice that it’s nice and alive again.

4. Never, ever, ever do that again.

At this point I could boot to my 2 TB easily enough. Unfortunately, while Ubuntu immediately noticed all the awesome new hardware, thanked me for it in its own Unixy way, and went to town like a child at a ball pit, Windows did something quite different. Windows is keyed not to users, but to motherboards. When it noticed that it was on the wrong motherboard, it immediately assumed that it was pirated; and rather than presenting me with some eloquent message about how I’m a pirate asshole (which I am not) and refusing to run, it flashed a blue screen covered in gibberish UTF-8 and restarted before I ever got to the desktop.

As it turns out, this OEM software was licensed entirely to my last machine. I did not realize that, but I was thoroughly pissed off at the underhandedness of a deal like that. In any case, whilst there may be a way to confuse Windows into thinking it’s on the same machine; I am not a pirate, and I needed to get a non-preinstalled copy of Windows 7. That cost us about a third of what the machines did.

(For those who are curious, I did find the secondary fan socket and plug in the faceplate fan later.)

Once it arrived, I installed it onto the 120 GB SATA 3 SSD. It took quite a while to install, I unplugged every other drive first just to be safe, and afterward, I could successfully boot to Windows 7. (I refuse to buy Windows 8.) The problem was, now I couldn’t get back over to Linux, where all of my work was. GRUB would not load. It hadn’t changed, but the system wouldn’t see it.

Next up, I tried boot-repair, but it kept telling me, even when running on its own LiveUSB, that I was running a program like Synaptic in the background. That could not make less sense. So, I cut out an even slice at the end of the SSD, of about 17 GB, and popped a new root partition of Ubuntu on there. The install worked flawlessly; but the machine still couldn’t see it.

So, I spent quite a while bouncing from forum to forum, trying to figure out why my BIOS was not detecting GRUB. I reinstalled it on the HDD to be safe, and removed my SSD partition. (Ubuntu installs in maybe twenty minutes anyway.) Eventually, I discovered that the culprit might be the 2.2 TB Infinity feature, which allows you to boot from drives larger than 2.2 TB. That’s a progressive feature, but I don’t actually own one of those, and I’m not likely to pick up a 4 TB any time soon. When I disabled it, GRUB popped up.

However, GRUB couldn’t find Windows. That’s a minor loss for my engineering work, but I will ultimately need it for unit testing, and I have a lot of games written exclusively for Windows that I would like to play. I found myself at an impasse; GRUB didn’t see Windows, and Infinity only saw Windows. I would still like it if I could find a way to get GRUB to recognize Windows; but I messed with the partition flags a bit when I was trying to get SSD GRUB to boot, and I imagine that the problem might be there.

So, at this point, on the occasion that I do want Windows, I simply hit F11 on boot, select 2.2 TB Infinity as my boot device, punch a key to select a disk, and pick the SSD. The funny thing is, even after that, and even with Windows on the SATA 3 SSD, Ubuntu still boots faster; off of a SATA 2, mind you.

So, I’m back in business. I haven’t forgotten about the builder tutorial at all, but I have been lightly sidetracked with another project involving easing the interface between Java and GLSL. (The builder so far has, in fact, been quite useful for that.) I’ve also noticed a number of chunks in my builder tutorial that could be optimized by migrating them fully to NIO/NIO-2; as there aren’t any old packages holding me back. (NIO is blisteringly faster.)

Add constructing the other machine afterward, and you can see why I’ve been away. I have suspended nothing on this blog, and it’s good to be back to it. We’re just about to get to the fun part in the builder tutorial, and I’m looking forward to it. (Especially on this beast.)


Posted by on February 23, 2015 in Programming, State of the Moment



A Case Against Using Null. (For Almost Anything.)

Java is my usual language, but this goes for everything.

I promise that this is not going to be another rant about NullPointerExceptions and their kin in other languages. This is not to say that such rants are not warranted, and even cheered; but I’m going to be a bit more academic about it. I’m also going to provide solutions, not only the ones available in Java 8, but what I used to do beforehand.

What Does “Null” Actually Mean?

Great question. Null as an adjective, according to the dictionary, means without value, effect, or significance. It also means, lacking, and nonexistent. It also means, empty. And, lastly, also probably most recently, it means zero. This is most likely a linguistic artifact, as everything is ultimately expressed in the binary on a computer. In C, null actually does equate to zero. However, this necessity has led all of us to a lot of abuse, because symbolically it isn’t what null is for. I’ll come back to that.

Null’s etymological origin comes from the Latin nullus, meaning none, as in, it-has-not-been-set. While zero is reasonable, zero is an actual number. If you were enumerating the entries of a set of numbers, and you wanted to count the length of that set, you would not skip every entry that was zero, would you? However, you likely would for null, as it denotes no-entry in the set. Therein lies the critical difference.

In Java, object references are initialized to null before they are set to any value. References are the Java equivalent of C’s pointers; and while they cannot be without a value, they initialize to a language constant that reflects the absence of an intended one. Null is typically represented as zero, but not always, and I am unsure of the case with Java. However, a null pointer is a symbol; it is not reasonably a pointer to the zero position in memory. This position does, actually, exist; but on the reference of such a pointer, the virtual machine (or platform) throws up a red flag.

The nasty habit of using null as a return value when something goes wrong in an operation is almost ubiquitous; but unless this literally represents that no value has been set, it is a dangerous move. I’ve even seen it in the JDK. The response to such an ill-thought-out method is usually a few lines of defensive programming, checking to see whether the object is null, and acting accordingly.

Java 8 Solutions

If you aren’t a Java programmer, you may wish to skip this section.

As it turns out, the defensive programming response to returned nulls is so similar, in every instance, that it can be encapsulated into an object itself. This would be Java 8’s Optional. Optional represents a possible value, that is, a value which cannot be guaranteed to exist. However, the Optional itself is never null.

On initialization of an Optional, it is best to set it to Optional.empty(), that is, an Optional with no contents. If a value is being wrapped in an Optional, use Optional.of(). If the presence of the value is unknown, use Optional.ofNullable(), it will do the defensive work for you. The rest of the methods of Optional apply Java 8’s influences from functional programming. What used to require complex if statements is now done primarily through ifPresent(…) and orElse(…).
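A minimal sketch of those methods in action follows; OptionalDemo and findNickname are illustrative names of my own, not anything from the JDK.

```java
import java.util.Optional;

public class OptionalDemo {
    //a lookup that may legitimately find nothing
    static Optional<String> findNickname(String user) {
        if ("grace".equals(user)) return Optional.of("Amazing Grace");
        return Optional.empty(); //no entry; never return null
    }

    public static void main(String[] args) {
        //orElse supplies a fallback, replacing the old "!= null" dance
        System.out.println(findNickname("grace").orElse("(none)")); //Amazing Grace
        System.out.println(findNickname("alan").orElse("(none)"));  //(none)

        //ofNullable does the defensive work when a value might be null
        String maybeNull = null;
        Optional<String> wrapped = Optional.ofNullable(maybeNull);
        wrapped.ifPresent(System.out::println); //prints nothing; no NPE
    }
}
```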

This might seem like an overreaction to you. However, compared to the work that I used to have to do just to catch every wrench-in-gears value that might pass by, it is a miracle. If you disagree, you need only ask yourself how frequently you have been getting NullPointerExceptions. Adopt Optionals, and you won’t get them anymore.

Older Java Solutions

Previous versions of Java offered several other techniques. The biggest problem with “!= null” is that it is an operation, and a mandatory operation, which will slow down code very slightly. This is imperceptible for the vast majority of programs, but if you need something to run searing fast, then it can be unacceptable.

If you are writing an API, I might suggest funnelling all input through a defensive checking method before passing it along to the meat methods; but if you are writing code that only you will access, there is a simpler solution: assertions. This is particularly true for unit testing with programs like JUnit.

This is exclusively functional during development; in order to enable assertion testing, you need to pass the -ea flag to the JVM (the java launcher, not the compiler). Unless you can force this on users, it is exclusively meant to help you identify routes by which null, or any other unacceptable value, can make it to your methods.

The syntax is simple. Given parameter “x”:

assert x != null : "[error message]";

If the provided boolean expression evaluates to false, an AssertionError is thrown, with a message of the toString() value of whatever was passed on the right (for me, most typically an actual string).
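A self-contained example of the idea; AssertDemo is my own illustrative name, and remember that the AssertionError only fires when the JVM is launched with assertions enabled (java -ea AssertDemo).

```java
public class AssertDemo {
    //a "meat method" that assumes its caller already validated input
    static int length(String x) {
        assert x != null : "x must not be null";
        return x.length();
    }

    public static void main(String[] args) {
        System.out.println(length("hello")); //5

        try {
            length(null); //AssertionError, but only when run with -ea
            System.out.println("assertions are disabled");
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```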

I don’t generally like to see assertions making their way into production code today, as I am inclined toward Optionals; but this is quite effective for debugging. Additionally, such statements can be considered an essential part of JUnit tests. If you are in a rush, it is possible to ignore all assertions remaining in a slice of code by dropping the “-ea” flag from the JVM invocation; but on the human end, this is bad practice and worth avoiding.

As an alternative, the Spring Framework has a class called Assert which handles more or less the same tasks as the assert keyword.

Broader Solutions for Object Oriented Languages

At last, in the most general sense, there is the Null Object Pattern. This is, still, my ultimate preference when building a set of classes, as there is no need for Optional when null never enters the equation.

A Nullary Object is an object extending the appropriate interface, with defined behavior, denoted as equivalent to null. This has its ups, and its downs. As an example, suppose we had this interface:

public interface Animal {
    public String speak();
}

with these implementations:

public class Dog implements Animal {
    public String speak() { return "bark"; }
}

public class Cat implements Animal {
    public String speak() { return "meow"; }
}

public class Bird implements Animal {
    public String speak() { return "tweet"; }
}

And we had one further class that requires one unknown animal, which will indubitably call “speak()”. Which animal is beyond our control, and we don’t want our program to crash on a NullPointerException simply because no animal was specified. The solution is one further class:

public class NullaryAnimal implements Animal {
    public String speak() { return "…"; }
}

In the case of abstract classes, it is often helpful to have the nullary class be a member of the class itself. This is also particularly helpful when there are multiple behaviors which might, otherwise, be implemented as “null”. The potential down side is for people who were actually looking for an exception to be thrown; in such a case, simply fill speak() with an Apache Commons NotImplementedException or something relatable.
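Putting the pieces together, here is a sketch of a consumer that never needs a null check; Kennel is a hypothetical class of my own invention, and the Animal types are condensed from the examples above.

```java
//condensed versions of the Animal classes from above
interface Animal { String speak(); }
class Dog implements Animal { public String speak() { return "bark"; } }
class NullaryAnimal implements Animal { public String speak() { return "…"; } }

public class Kennel {
    private Animal resident = new NullaryAnimal(); //never null, never unset

    void setResident(Animal animal) {
        //defend once, at the boundary: swap null for the null object
        resident = (animal != null) ? animal : new NullaryAnimal();
    }

    String listen() {
        return resident.speak(); //safe to call unconditionally
    }

    public static void main(String[] args) {
        Kennel kennel = new Kennel();
        System.out.println(kennel.listen()); //"…" rather than a crash
        kennel.setResident(new Dog());
        System.out.println(kennel.listen()); //"bark"
    }
}
```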

One extension of this pattern is such:

public abstract class Sequence {
    public static final Sequence ANY = new Sequence(…) {};
    public static final Sequence ALL = new Sequence(…) {};
    public static final Sequence NONE = new Sequence(…) {};
}
In this instance, a new Sequence can be initialized to Sequence.NONE, ALL, or ANY, and be replaced if a new value is provided. Additionally, since these are actual objects and constant values, they respond appropriately to equals checks.
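As a sketch of how those constants might be built: since Sequence is abstract, it cannot be instantiated directly, so anonymous subclasses are one option. The name field here is my own illustrative addition, not part of the original class.

```java
public abstract class Sequence {
    //shared constant instances standing in for "any", "all", and "not set"
    public static final Sequence ANY  = new Sequence("ANY")  {};
    public static final Sequence ALL  = new Sequence("ALL")  {};
    public static final Sequence NONE = new Sequence("NONE") {};

    private final String name;

    protected Sequence(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return name;
    }

    public static void main(String[] args) {
        Sequence current = Sequence.NONE; //initialized, never null
        //the constants are shared objects, so equality checks behave
        System.out.println(current == Sequence.NONE);   //true
        System.out.println(current.equals(Sequence.ALL)); //false
    }
}
```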

There may be a name for this pattern, I’m honestly not sure. I came up with it on my own, but I very much doubt that I’m the first.


Is that everything? Hardly. However, you now hopefully have a new set of tools to keep unfinished declarations and, even worse, “= null” statements out of your program. I hope I’ve made your life easier!


Posted by on January 10, 2015 in Programming



Software Language Engineering: Analysis

(Early Edition)

So, at this point we have a clearly defined Backus-Naur form for our grammar, a working scanner for terminal tokens, and a working parser for non-terminal generation and the construction of the abstract syntax tree.

The biggest hurdles are over. However, they weren’t quite the last. One thing that must be done before any compiler or interpreter can be built is the construction of a utility for analysis. In this section, I’ll be describing the basic contract for a contextual analyzer. In the next section, I’ll be showing you some example code.


Some of the analysis was already done. The core concept to keep in mind, when building a build tool, is that the fewer passes you make, the quicker your tool will run. Basic syntax errors have already been detected, as they prevent the construction of the AST for our language. However, you might notice that there are a few obtuse things that you can still do.

Enter “3 + 4 = 2” into the algebra tool, and you’ll notice that it will gobble it down just fine; even though it is concretely incorrect. This is where the second phase of analysis comes in.

Can we sweep for these while we generate the abstract syntax tree? Does your processor have more than one core? Then we absolutely can. Even if you are using a single-core, the penalty would be rather mild. However, it is important to recognize that code analysis is the work of a separate module.

Types of Analysis

There are two major forms of analysis to worry about: syntactic analysis, and contextual analysis.

Syntactic Analysis

Syntactic analysis is almost always the responsibility of the parser. Contextual analysis depends on it. Syntactic analysis is the process that generates the phrase structure of the program; without it, further phases are simply impossible. It’s more commonly known as parsing, and if you’re following this tutorial in sequence, you’ve already done it. If not, there are four preceding chapters dedicated to explaining it in detail.

Generally, I recommend, on the establishment of a syntax error during syntactic analysis, simply skipping the node and checking for what might be next. This is not an issue for small programs, much less one-line programs; but for larger utilities and libraries it is vanishingly rare for the number of bugs to be limited to one. Often, knowledge of one mistake’s effect on another, later mistake is critical to the creation of a satisfactory solution.

As a side effect of continuing the scan, the error reporter may have a hundred additional syntax errors to report, even though they all reference the same mistake. This can explode exponentially. Accordingly, for a final edition of a builder, it is best to limit the number of reported errors before the program calls it quits. For Javac, the limit is a hundred errors, unless the -Xmaxerrs and -Xmaxwarns flags are set to a higher value.

On the completion of syntactic analysis, without error, we have a singular tree with a single root node, most commonly called Program. If syntactic analysis does not complete properly, it is still possible to proceed to contextual analysis, but no further, as erroneous code has an arbitrary interpretation. Computers require determinism.

Contextual Analysis

So, as of contextual analysis, we have a complete abstract syntax tree. The remaining question is, does the correctly formed code also conform to the controls of the language? Is a symbol used before declaration, when the language demands that it not be? Is a variable used outside of its proper scope? Are there duplicate declarations, without any rule for how to handle those declarations? The general rule is that if you cannot establish the analysis rule in BNF, then it is contextual.

After the contextual analyzer has completed its task, given that there are no show-stopping errors, it returns an AST as well. In this case, it is what’s known as a decorated syntax tree. Every user-defined symbol will maintain a node reference to its declaration in the AST. Every expression, for a language concerned about type, is demarcated with its result type.

You may remember, from the introduction to Backus-Naur Form, that it was designed for “context-free grammars”. The term “contextual analysis” more literally means analyzing extensions to the grammar that supersede the domain of BNF.

The best way to think of a proper decorated syntax tree is as an abstract syntax tree, with which any node can be taken at random and read from beginning to end, which forms a complete, definite, and concrete statement.

Procedure of Analysis

Like every class, we must begin with a concrete description of its contract. This includes its responsibilities, and the resources made available to it. Its responsibility, in broad summary, is to find every occurrence of a contextual unknown and link it to its definition. Resources include the code itself as an abstract syntax tree, and a concrete error reporter.

Every analysis tool, the parser included, must be initialized with an error reporter. It is not recommended to make the error reporting functionality ingrained to the class, as it is often best the same error reporter used by parser (your syntax analyzer), and functionally, it has a very different contract—one class, for one responsibility.

We again apply the visitor pattern, much as we do for syntax analysis. Is it possible to use the same visitor pattern for both syntax analysis and context analysis? Technically, yes, but it is discouraged, as syntactical analysis and contextual analysis are two separate contracts. It is possible to feed the incomplete abstract syntax tree to a waiting context analyzer, but this is a tactic more sophisticated than we are ready for at this juncture. I’ll probably return to it in the final section.

To my knowledge, there is not yet a BNF-equivalent for non-context-free grammars that can easily be used for context analysis. This is not to say that there are none; if you insist on following the same pattern that you did for syntax analysis, you may consider Noam Chomsky‘s formal grammar. It uses a lot of unconventional symbols, so you may also consider getting accustomed to using a compose key.

As formal grammars, unless you are working with a set of people who are fully informed on their usage, go well outside of the bounds of this tutorial, I suggest considering the depth of complexity of your contextual grammar before resorting to them. What you will definitely need is a clear and inarguable description of what these rules are, even if it is in plain English.

The context analyzer will also, for most languages, be creating an identification table as it works. Perhaps your target language does not use variables, and has no need for one; I am assuming that it does. It is also possible that your target language does not mind late definitions, as long as there are eventually definitions. It would not be the first. For my algebra solver, I am currently assuming that it does mind; but later on, perhaps I’ll reformat it so that it doesn’t. Supporting subsequent definitions, or even the loosely related concept called “late binding”, isn’t as hard to do as you might initially think.

Summary Abstractions

We’ll need an abstraction of the core context analyzer. While I chose to call the syntax analyzer “Parser”, a more common term, there is no equivalent that I am aware of for the context analyzer. Thus, we’ll call it “ContextAnalyzer”. I propose a single method in ContextAnalyzer, called check(AST ast). This will initiate the visitor pattern.
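A minimal sketch of that contract follows. AST and ErrorReporter here are bare stand-ins for the builder’s actual classes, included only so the sketch is self-contained.

```java
//stand-in for oberlin.builder.parser.ast.AST
interface AST {}

//stand-in for the error reporter shared with the parser;
//one class, for one responsibility
interface ErrorReporter {
    void report(String message);
}

//the proposed contract: a single entry point that initiates the
//visitor pattern and returns the decorated syntax tree
interface ContextAnalyzer {
    AST check(AST ast);
}
```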

Once I complete the code, I’ll highlight it to you in the next lesson.



Software Language Engineering: Establishing a Parser, Part Two (Early Edition)


So, you’ve read part one, and you’re at least familiar with a visitor pattern, right? If not, I strongly encourage reading the two injected sections first.

A parser delegates the vast majority of its work to a Visitor. More appropriately stated, it depends upon the Visitor in order to do its work, as the Visitor is responsible for creating the requested nodes.

PhraseStructure classes

I have a simple extension of Visitor which I have created purely for the sake of future modifications. It’s called PhraseStructure. At the moment, it looks like this:

package oberlin.builder.parser;

import oberlin.builder.*;
import oberlin.builder.parser.ast.AST;
import oberlin.builder.visitor.Visitor;

import java.util.*;

public interface PhraseStructure extends Visitor {
}

…which makes it a marker interface. However, should you or I choose to add specific behavior to the Visitor which strictly relates to this program, it’s an excellent low-footprint stand-in.

The point where it, and by that I also mean Visitor, shows its worth is in AlgebraicPhraseStructure.

package oberlin.algebra.builder.parser;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

import oberlin.builder.parser.Parser;
import oberlin.builder.parser.PhraseStructure;
import oberlin.builder.parser.SourcePosition;
import oberlin.builder.parser.ast.AST;
import oberlin.builder.parser.ast.EOT;
import oberlin.algebra.builder.nodes.*;

public class AlgebraicPhraseStructure implements PhraseStructure {
    private Map<Class<? extends AST>, BiFunction<Parser<?>,
        SourcePosition, ? extends AST>> map = new HashMap<>();

    //instance initializer: registers one visit handler per node type
    {
        map.put(Program.class, new BiFunction<Parser<?>,
                SourcePosition, AST>() {
            public Program apply(Parser<?> parser,
                    SourcePosition position) {
                Program program = null;
                SourcePosition previous = new SourcePosition();
                parser.start(previous);
                Equality equality = (Equality) parser.getVisitor()
                        .visit(Equality.class, parser, previous);
                parser.finish(previous);
                program = new Program(previous, equality);
                AST currentToken = parser.getCurrentToken();
                if(!(currentToken instanceof EOT)) {
                    parser.syntacticError("Expected end of program",
                            previous);
                }
                return program;
            }
        });
        map.put(Equality.class, new BiFunction<Parser<?>,
                SourcePosition, AST>() {
            public AST apply(Parser<?> parser,
                    SourcePosition position) {
                Equality equality = null;
                List<AST> nodes = new ArrayList<>();
                SourcePosition operationPosition =
                    new SourcePosition();
                parser.start(operationPosition);
                //parse operation
                nodes.add(parser.getVisitor().visit(
                        Operation.class, parser, operationPosition));
                //an equality is <operation> <equator> <operation>
                if(parser.getCurrentToken() instanceof Equator) {
                    nodes.add(parser.getCurrentToken());
                    parser.forceAccept();
                    nodes.add(parser.getVisitor().visit(
                            Operation.class, parser,
                            operationPosition));
                } else {
                    parser.syntacticError("Expected: equator",
                            operationPosition);
                }
                parser.finish(operationPosition);
                equality = new Equality(operationPosition, nodes);
                return equality;
            }
        });
        map.put(Operation.class, new BiFunction<Parser<?>,
                SourcePosition, AST>() {
            public AST apply(Parser<?> parser,
                    SourcePosition position) {
                Operation operation = null;
                List<AST> nodes = new ArrayList<>();
                SourcePosition operationPosition =
                    new SourcePosition();
                parser.start(operationPosition);
                //parse identifier
                nodes.add(parser.getVisitor().visit(
                        Identifier.class, parser, operationPosition));
                //look for operator; an operation may also be a lone
                //identifier, so this clause is optional
                if(parser.getCurrentToken() instanceof Operator) {
                    nodes.add(parser.getCurrentToken());
                    parser.forceAccept();
                    nodes.add(parser.getVisitor().visit(
                            Operation.class, parser,
                            operationPosition));
                }
                parser.finish(operationPosition);
                operation = new Operation(operationPosition, nodes);
                return operation;
            }
        });
        map.put(Identifier.class, new BiFunction<Parser<?>,
                SourcePosition, AST>() {
            public AST apply(Parser<?> parser,
                    SourcePosition position) {
                Identifier identifier = null;
                List<AST> nodes = new ArrayList<>();
                SourcePosition identifierPosition =
                    new SourcePosition();
                parser.start(identifierPosition);
                if(parser.getCurrentToken() instanceof LParen) {
                    //parenthesized subexpression
                    nodes.add(parser.getCurrentToken());
                    parser.forceAccept();
                    nodes.add(map.get(Operation.class)
                            .apply(parser, identifierPosition));
                    //closing parenthesis
                    nodes.add(parser.getCurrentToken());
                    parser.forceAccept();
                } else if(parser.getCurrentToken()
                        instanceof Nominal) {
                    nodes.add(parser.getCurrentToken());
                    parser.forceAccept();
                } else if(parser.getCurrentToken()
                        instanceof Numeric) {
                    nodes.add(parser.getCurrentToken());
                    parser.forceAccept();
                } else {
                    parser.syntacticError(
                            "Nominal or numeric token expected",
                            identifierPosition);
                }
                parser.finish(identifierPosition);
                identifier =
                    new Identifier(identifierPosition, nodes);
                return identifier;
            }
        });
    }

    public Map<Class<? extends AST>, BiFunction<Parser<?>,
        SourcePosition, ? extends AST>> getHandlerMap() {
        return map;
    }
}
For all of the code, you’ll note that there’s only one method. getHandlerMap() returns a map, intrinsic to the PhraseStructure, which maps classes (of any extension of AST) to functions which return them. These functions, specifically BiFunctions, accept only a Parser, with all of its delicious utility methods, and a SourcePosition so that they have an idea where they’re looking. All necessary data is in those two items alone.

A Note on Source Position

If you’ve been paying very close attention, you may have noticed that SourcePosition isn’t strictly necessary to translate. You’re right, mostly; but when something goes wrong, it is SourcePosition which tells you where the problem showed up, and what you need to tinker with in order to properly format the program.

It wasn’t always like this. Early compilers would simply indicate that the program was misformatted. More likely, just print “ERROR”, as the notion of software development (which didn’t involve punching holes in punch cards) was relatively young, and ethics weren’t really a thing yet.

This wasn’t a big deal, while programs were generally only a few lines and had exceedingly small lexicons of keywords. When Grace Murray Hopper put together A-0, the idea of adding sophisticated error reporting would have seemed like over-programming; mostly because it would have been over-programming.

As time went on, and machines got more sophisticated, having an error in your code could take days to find. If you had more than one error, then you were really in trouble. So, eventually, a team came up with the idea of reporting the exact point where the format failed, and history was made. (I’m not sure who that was, so if anyone knows, please inform me through the comments.)

Today, every well-designed AST is aware of exactly where it, or its constituents, begin and end. If you want to be especially sophisticated, you can have it remember line number, and even character number, too.

Our current edition of our algebra-like language is generally one-line-only and relatively domain specific, but memorization of where the ASTs go wrong provides room for growth.
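As a sketch of the idea, a position tracker might look like the class below. This is not the project’s actual SourcePosition, just an illustration of the start/finish bookkeeping described above; every field and method name here is my own.

```java
//illustrative sketch only; the real class lives in oberlin.builder.parser
public class SourcePosition {
    private int start;  //index of the first token covered by a node
    private int finish; //index of the last token covered by a node
    //a more sophisticated version would also track line and character

    public void setStart(int start)   { this.start = start; }
    public void setFinish(int finish) { this.finish = finish; }
    public int getStart()  { return start; }
    public int getFinish() { return finish; }

    @Override
    public String toString() {
        return "tokens " + start + " to " + finish;
    }
}
```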

Visit Handlers

If you don’t remember specifically, Visitor’s visit is not a complicated method.

public default AST visit(Class<? extends AST> element,
        Parser<?> parser, SourcePosition position) {
    AST ast = getHandlerMap().get(element).apply(parser, position);
    return ast;
}

It simply retrieves the map, grabs the BiFunction associated with the provided element, and applies it to the parser and an initial source position. From there, all work goes on in the map.

The visit handlers themselves can get pretty messy, if you aren’t careful. They begin by initializing their specific brand of AST to null. A NullaryAST or an Optional might be better here, as I have a serious aversion to methods that can return null, but I haven’t made that change yet. This AST is the item which will be initialized through the context of local nodes.

Next, a SourcePosition is initialized. This will be the element passed to the constructor for our AST. When Parser.start(SourcePosition) is called, it updates the starting point of SourcePosition. When Parser.finish(SourcePosition) is called, it updates the end point. These are set to Parser’s currently known coordinate in the code. Thus, before anything else is done, Parser.start(…) is called.

After the SourcePosition has been started, the class of each token is checked against allowed conditions. As such, the bulk of these methods are conditionals. It’s here that I must explain the usage of Parser.accept(…) and Parser.forceAccept().

Parser.accept(…) checks the class of the current token against the provided one, and if they match, increments the internal pointers. If not, it reports a syntactic error, and leaves the pointers alone. Since the pointer is left alone, additional nodes can still be parsed, and multiple errors can be caught, even in the case of a token simply being missing or skipped. Parser.forceAccept() always accepts the current node, regardless of its type, and increments the pointers. (In fact, it is called from within accept(…) after the conditional checks are completed.)
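The relationship between the two can be sketched like this. MiniParser is a toy stand-in of my own, not code from the builder; it exists only to show accept building on forceAccept while holding the pointer still on a mismatch.

```java
import java.util.List;

//toy stand-in for the real Parser; tokens are plain Objects here
class MiniParser {
    private final List<Object> tokens;
    private int pointer = 0;
    private int errors = 0;

    MiniParser(List<Object> tokens) { this.tokens = tokens; }

    Object getCurrentToken() { return tokens.get(pointer); }

    //always consume the current token and advance the pointer
    Object forceAccept() { return tokens.get(pointer++); }

    //consume only if the class matches; otherwise report the error and
    //hold still, so later tokens can still be parsed and further
    //errors caught in the same pass
    Object accept(Class<?> expected) {
        if (expected.isInstance(getCurrentToken())) {
            return forceAccept();
        }
        errors++;
        return null; //production code would prefer a null object here
    }

    int errorCount() { return errors; }
}
```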

Once all possibilities have been checked for, the AST is initialized and returned. If at any point no possibilities remain for this token, a syntax error is thrown, and the program continues to parse (even though it cannot complete the tree).

Is There Another Way to Do This?

There’s always another way, but that doesn’t mean that it’s necessarily better. One method might be catching customized exceptions on a distinct schedule, which also works pretty well; the down side is that it only allows for the detection of a single error at a time. Another would be the construction of a string representing the AST types, and usage of a regular expression on it; but as I’ve said before, the construction of improper code, even if it compiles, can create devastatingly slow regular expressions at seemingly arbitrary times.

I’ve experimented with both on the way to this code, which is precisely why writing this section took so much longer than the others. There are probably dozens of other readily available methods which I haven’t even thought of yet. One of them, somewhere, might even be faster or sufficiently more effective than the visitor pattern.

This is not me saying that the visitor pattern is perfect, either. This implementation of visitor has a lot of marks against it. It is extremely tightly coupled, for starters, as loose as the interface alone may be. It uses “instanceof” all over the place, which begs for the implementation of further methods to keep to an OOP standard. It has many anonymous classes around, which substantially increase the memory footprint. The slightest of mistakes in the layout of the visitor functions will result in an unbounded recursion, which will quickly and almost silently crash your program, so it is not recommended for avant garde programming—always start with a properly reduced Backus Naur Form of your language. I could go on, such as with the many potential issues with secondary delegation, which the visitor pattern survives on, but this more than covers it.

My advice? Use ethical method names, comment until your fingers bleed, trigger exceptions everywhere the program pointer shouldn’t be, and benchmark benchmark benchmark. In select cases, the Visitor is still your friend, provided that you treat it like a sacred relic wired to a bomb.

Final Notes

You may notice that this is awfully similar to the enumeration used for scanning. You can, in fact, create a Scanner from a Parser, by treating every character as an individual token. However, this has not been done in a long time, as regular expressions are quite reliable for cases like this. I may yet develop a Scanner from a Parser, but only as an example; it is not something I recommend.

You can think of the individual differences between one language and another as the impulse behind these enumerations and mappings. Parser will always be Parser, PhraseStructure will always be PhraseStructure. However, when you need to compile a specific language into an AST tree, the features that make that language what it is can all be stored in the enumerations and maps. Because of this, this API allows for rapid construction of builders.

Next, we talk about identification tables.



Visual Feedback on an Abstract Parsing Tree with JavaFX

I honestly didn’t expect to be writing this, but it seems fair.

In the past few editions, I’ve been discussing the AST. It can be overwhelmingly complicated for a complete program; so I’ve been using a simple, single line equation as the sample. Unfortunately, that isn’t very realistic; and it would be very helpful to have a procedurally generated visual tree available. That tree is what this lesson is all about.

At first I considered using a graphical tree style, like javax.swing.JTree; but that can be painfully over-simplistic in itself. I would prefer to outline the material the same way I would draw it on a white board (which, if you’re wondering, I do). The best way to do this? JavaFX.

[Figure: graphical AST tree rendering, through JavaFX/F3]

If you aren’t familiar with JavaFX, please do me a favor and tolerate the name. It was originally F3, for Form-Follows-Function. I kind of liked F3, until some marketer decided that “JavaFX” sounded better. Functionally speaking, it’s an excellent revision of how user interfaces are designed in Java. I fully stand behind it. It allows for XML structuring and CSS styling, just like a web page, on top of more hard-coded controls. This is much, much faster to develop against; and it allows for significant beauty in user interfaces. However, it works very differently from things like Swing and AWT; and while I’m certain that it isn’t the first API to do so, it takes some getting used to.

I fully intend to write a true tutorial on all of JavaFX at some point. Do you need to understand it to understand translators? Absolutely not. However, this code does work. It is not part of the Github repository, as it is technically a tangential project; but the same license (GNU GPL) applies to it and you are welcome to copy it token for token. I’ll put it up on Github as I get the chance. I’ll make a few minor comments along the way to help you follow it.

1. The Basic Application

We have exceedingly few needs for our app. It simply reads a program from a stream, parses it, and feeds the parse tree to a custom node, which displays it graphically. Accordingly, the program code is rather small. I’ll begin by displaying it, then I’ll spend a moment piecing it together in English for you.

package oberlin.builder.gui;

import oberlin.algebra.builder.AlgebraicBuilder;
import oberlin.builder.parser.ast.AST;
import javafx.application.*;
import javafx.scene.*;
import javafx.scene.layout.*;
import javafx.stage.*;

public class GUIMain extends Application {

    public static void main(String...args) {
        Application.launch(args);
    }

    public void start(Stage primaryStage) throws Exception {
        Pane root = new Pane();
        root.getStyleClass().add("backing");
        Scene scene = new Scene(root);
        //the stylesheet file name here is an assumption; use whatever you named yours
        scene.getStylesheets().add(getClass().getResource("gui.css").toExternalForm());
        root.setMinWidth(640);
        root.setMinHeight(480);
        populate(root);
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    private void populate(Pane p) {
        /*
         * This is simply an example, so I've ignored input for now.
         * In theory, you would replace the line below (containing
         * hard code) with an input loop.
         */
        AST program = (new AlgebraicBuilder()).getParseTree("1+2");
        p.getChildren().add(new GUITree(program));
    }
}

All JavaFX/F3 programs begin with Application.launch(String…args). JavaFX programs run in what is effectively their own thread, even more so than Swing-based programs. launch() parses the arguments and stores them in their own object, appropriately called Parameters; they can be accessed, at any point later on, via Application.getParameters(). Our available overloads and customizations cut out for a moment, then come back in with the start(Stage) method.

Stage is basically where Frame would be; but it’s a little more complicated than that. Unlike Swing and AWT, which were designed to be platform independent, JavaFX is designed to be hardware context independent. What you are writing here will work equally well on a PC, tablet, and smart phone; as well as anything else built (now or later) that maintains a JavaFX compatibility standard. Thus, what might otherwise be called a frame or window is referred to as a stage, as it might be neither of those things.

You’ll notice that the instantiated Pane is given a style class. If you aren’t familiar with CSS, a style class is what’s used to differentiate between one element and any number of others which, otherwise, would look exactly like it. Thus, it allows CSS to pick and choose which elements of the layout it is styling at a given moment. I’ve chosen “backing” as the name for this element, as it is the backboard of our tree. You will also note that, two lines later, the CSS file itself is loaded.

Next, a Scene is created. Scenes are critically important, and distinct from stages. While a stage represents the context that the layout is drawn in, the scene represents the actual controls and constraints within that space. Thus, while many aspects of Stage are immutable (and unknowable), Scene allows for greater flexibility. JavaFX sees to it that they correspond, so don’t worry about that.

Scene is styled through its root element, which in this case is our pane. You’ll notice that instead of the stricter setWidth() and setHeight() that you might be familiar with from Swing, we are setting a minimum on these bounds. That minimum is not guaranteed, as the display may not be capable of it, but it is treated as a general rule to be followed if at all possible. In this case, I’m going for classic analog low-def TV resolution, 640 width by 480 height. (Looking back, those numbers might be inadequate, but for now they’re quite functional.) If this is too small for you, the frame—if it is a frame, anyway—is easily resizable.

Populate() is a method I wrote to add the paraphernalia to the scene; but note that afterwards we call show(). This is very important, as otherwise our stage will be constructed in memory, but never displayed to the screen. Additionally, there will be no way to kill the JavaFX thread save for a hard interrupt. Once shown, the closing of the primary stage will flag the program to terminate.

1.1. Populate

It’s a generally good habit, but not a necessary one, to populate your frame in a separate and dedicated method. This is what I do here, even though for the moment, I only have one control to add.

The AST method should be old news; it’s a stand-in, for the moment, for an actual code-reading portion. (I’m assuming that you’re looking to compile more than just “1+2”.) GUITree is a custom JavaFX node, which I will explain next. Note that to add a node to a program, you must take some structure (not yet visible) in the scene graph (stemming from your chosen root), and get its Children as a modifiable list. Then, you must add that node to the list.

Note that after a stage is visible, precious little of the scene can be changed save for through the constraints built into it. I’m not going to touch on Expressions and Bindings here, but know that if you pull something that doesn’t play by JavaFX’s rulebook, it will throw an ApplicationException and your program will not launch. Thankfully, while exceedingly picky, that rulebook is small. If you call show() and then try and add a child, you will have problems; it must be the other way around.

If you’re curious, hiding a rendered stage does not count for making it modifiable. You must give it your entire concept first, then make it visible. If you’re familiar with OpenGL, you’ll already understand why.

2. The Tree Itself

The tree is a custom JavaFX node, which I admit is rarely necessary. Still, most of the entities that make it work are core to the API.

package oberlin.builder.gui;

import oberlin.builder.parser.ast.AST;
import javafx.geometry.BoundingBox;
import javafx.geometry.Bounds;
import javafx.geometry.Point2D;
import javafx.scene.control.Tooltip;
import javafx.scene.layout.*;
import javafx.scene.paint.Color;
import javafx.scene.shape.CubicCurve;

import java.util.function.IntSupplier;

public class GUITree extends AnchorPane {
    private Bounds bounds = new BoundingBox(0, 0, 640, 480);
    private AnchorPane framing = new AnchorPane();
    private double edgeSize = 0.10;    //ten percent additional length beyond edges of framing

    public GUITree(AST ast) {
        this.setMinWidth(bounds.getWidth() * (1 + edgeSize));
        this.setMinHeight(bounds.getHeight() * (1 + edgeSize));
        configureFraming();
        addNode(ast);
    }

    private void configureFraming() {
        framing.setLayoutX(edgeSize * (bounds.getWidth() / 2.0));
        framing.setLayoutY(edgeSize * (bounds.getHeight() / 2.0));
        this.getChildren().add(framing);
    }

    private ASTNode addNode(AST ast) {
        return this.addNode(ast, new Marker(0), new Counter(), 0, null);
    }

    private ASTNode addNode(AST ast, IntSupplier stepsDown, IntSupplier stepsAcross, int index, ASTNode parent) {
        ASTNode node = new ASTNode(ast, stepsDown.getAsInt(), stepsAcross.getAsInt());
        //AnchorPane stuff
        calculateAnchoring(node, parent);
        framing.getChildren().add(index++, node);

        final StringBuilder tooltipText = new StringBuilder();
        IntSupplier across = new Counter();
        for(AST kid : ast.getContainedNodes()) {
            tooltipText.append(kid.getClass().getSimpleName()).append(" ");
            ASTNode child = addNode(kid,
                    new Marker(stepsDown.getAsInt() + 1),
                    across, index, node);
            //connect parent and child with a "noodle"
            CubicCurve cubic = createCubicCurve(node.getNoodleRoot(), child.getTopCenter());
            framing.getChildren().add(cubic);
        }
        node.getType().setTooltip(new Tooltip(tooltipText.toString()));
        return node;
    }

    private CubicCurve createCubicCurve(Point2D p1, Point2D p2) {
        CubicCurve curve = new CubicCurve();
        //control points assumed: a gentle S-curve between the two endpoints
        curve.setStartX(p1.getX());
        curve.setStartY(p1.getY());
        curve.setControlX1(p1.getX());
        curve.setControlY1((p1.getY() + p2.getY()) / 2.0);
        curve.setControlX2(p2.getX());
        curve.setControlY2((p1.getY() + p2.getY()) / 2.0);
        curve.setEndX(p2.getX());
        curve.setEndY(p2.getY());
        curve.setStroke(Color.BLACK);
        curve.setFill(null);
        return curve;
    }

    private void calculateAnchoring(ASTNode node, ASTNode parent) {
        node.setOrigin(new Point2D(parent == null ? (bounds.getWidth() - node.getBounds().getWidth())/2.0 :
            justifyX(node, parent), justifyY(node)));
        AnchorPane.setTopAnchor(node, node.getOrigin().getY());
        AnchorPane.setLeftAnchor(node, node.getOrigin().getX());
    }

    private Double justifyX(ASTNode node, ASTNode parent) {
        final double parentCenter = (parent.getOrigin().getX() + (parent.getBounds().getWidth() / 2.0)
                + parent.getNoodleRoot().getX()) / 2.0;
        final double center = parentCenter
                - node.getBounds().getWidth()
                        * (parent.getAST().getElementCount()) / 2.0;
        return center + node.getOrigin().getX();
    }

    private Double justifyY(ASTNode node) {
        return node.getOrigin().getY();
    }
}

That was a bit much at once, I know. The central pane, called “framing”, is 640 by 480. Framing is offset in each direction by a 5% inset, via the convenient features of AnchorPane.

AnchorPane is one of the few prepared ways to control where a node is rendered, with precision, in JavaFX. You may often need to keep your own tabs on where it is rendered, as getMinX() and getMaxX() will return zero more often than you will believe. However, through direct layout control, you can still manage them.

The method addNode(…) adds a custom object called ASTNode. I’ll cite it for you here.

package oberlin.builder.gui;

import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.geometry.BoundingBox;
import javafx.geometry.Bounds;
import javafx.geometry.Point2D;
import javafx.geometry.Pos;
import javafx.scene.control.Label;
import javafx.scene.layout.StackPane;
import javafx.scene.layout.VBox;
import javafx.scene.text.TextAlignment;
import oberlin.builder.parser.ast.AST;

class ASTNode extends VBox {
    private Bounds bounds = new BoundingBox(0, 0, 100, 40);
    private Point2D origin = new Point2D(0, 0);
    private final double expanse = 1.10;
    private final AST ast;
    private Label type;
    private Label hash;
    private ObservableList<ASTNode> kids = FXCollections.observableArrayList();

    public ASTNode(AST ast) {
        this.ast = ast;
        type = new Label(ast.getClass().getSimpleName());
        hash = new Label(Long.toHexString(ast.hashCode()).toUpperCase());
        //label configuration assumed
        type.setTextAlignment(TextAlignment.CENTER);
        hash.setTextAlignment(TextAlignment.CENTER);
        VBox vbox = new VBox(new StackPane(type), new StackPane(hash));
        vbox.getStyleClass().add("node");
        vbox.setAlignment(Pos.CENTER);
        this.getChildren().add(vbox);
        for(AST kid : ast.getContainedNodes()) {
            addKid(new ASTNode(kid));
        }
    }

    public Point2D getNoodleRoot() {
        return new Point2D(getOrigin().getX() + (getBounds().getWidth() / 2),
                getOrigin().getY() + getBounds().getHeight());
    }

    public ASTNode(AST ast, int level) {
        this(ast);
        origin = new Point2D(0, level * bounds.getHeight() * expanse);
    }

    public ASTNode(AST ast, int levelDown, int levelAcross) {
        this(ast);
        origin = new Point2D(getStepAcrossSize(levelAcross), getStepDownSize(levelDown));
    }

    public double getStepDownSize(int steps) {
        return steps * bounds.getHeight() * expanse;
    }

    public double getStepAcrossSize(int steps) {
        return steps * bounds.getWidth() * expanse;
    }

    public void addKid(ASTNode astNode) {
        kids.add(astNode);
    }

    public ObservableList<ASTNode> getKids() {
        return kids;
    }

    public Bounds getBounds() {
        return bounds;
    }

    public Point2D getOrigin() {
        return origin;
    }

    public Point2D getTopCenter() {
        return new Point2D(
                getOrigin().getX() + (getBounds().getWidth()/2),
                getOrigin().getY());
    }

    public Label getType() {
        return type;
    }

    public void setOrigin(Point2D p) {
        this.origin = p;
    }

    public AST getAST() {
        return this.ast;
    }
}

ASTNode is a JavaFX Node as well. It simply maintains a reference to the AST itself, and the general presentation of that AST on the tree. There isn’t a lot here. If you’re wondering what VBox is, it’s an abbreviation for “vertical box”. (Naming a class after an abbreviation is bad practice, but it’s long since done by powers above me; I tolerate it as much as I do “AST”.)

Speaking of bad practice, this would ideally use Bindings, but I wrote this in a bit of a rush today and will have to correct that in the future. It is also bad practice to repeat data, which is exactly what this program is doing by re-storing the label text in a separate field. All the same…

I’m going to gloss over a lot of the configuration of the labels, as it’s relatively standard. Know that like any other pane in JavaFX, a VBox can be initialized with a list of its bounded nodes; also, a StackPane has the default behavior of centering its own bounded nodes.

The last thing done in the constructor is the creation of additional ASTNodes for each child node of the abstract syntax tree. Each of them, in turn, renders their own children. This is not perfect; there is a substantial chance that two lists of nodes will overlap one another. However, it is already excellent for debugging visitor pattern based content. In the end, the GUITree renders each node in an assigned place, with a curved cubic line (technically called a “noodle”) connecting it to its parent and its children.

How does it do that? With IntSuppliers.

3. The IntSuppliers

There are only two of these.

package oberlin.builder.gui;

import java.util.function.IntSupplier;

/**
 * For downward counts; always returns provided number.
 * @author © Michael Eric Oberlin Dec 23, 2014
 */
class Marker implements IntSupplier {
    private int fix;

    public Marker(int fix) {
        this.fix = fix;
    }

    @Override
    public int getAsInt() {
        return fix;
    }
}

package oberlin.builder.gui;

import java.util.function.IntSupplier;

/**
 * For counts across; always returns next consecutive number.
 * @author © Michael Eric Oberlin Dec 23, 2014
 */
class Counter implements IntSupplier {
    private int count;

    @Override
    public int getAsInt() {
        return count++;
    }
}
IntSuppliers (and really all Suppliers) are part of the java.util.function package, new to Java 8. The great advantage of this package is that a functional interface lets you hand behavior around as a value: a small method, with whatever state and conditions you define, standing in where a primitive would. I know that’s a leap, but I’ve been doing it since long before it was formally adopted into the language, and it’s a central totem of functional languages.

We could, in theory and practice, use incrementing and decrementing integers in place of either of these. The problem is that the code gets a lot longer and a lot more cluttered when you do. I prefer the sublime simplicity of packing such behavior into an interface.
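To make the distinction concrete, here is a minimal, self-contained sketch of the two supplier styles. The class and method names below are stand-ins of my own, not the Marker and Counter from the repository:

```java
import java.util.function.IntSupplier;

public class SupplierSketch {
    //a Marker-style supplier: returns the same fixed value on every call
    static IntSupplier marker(int fix) {
        return () -> fix;
    }

    //a Counter-style supplier: stateful, yields the next consecutive number on each call
    static IntSupplier counter() {
        return new IntSupplier() {
            private int count;
            public int getAsInt() { return count++; }
        };
    }

    public static void main(String[] args) {
        IntSupplier down = marker(3);
        IntSupplier across = counter();
        System.out.println(down.getAsInt());    // 3
        System.out.println(down.getAsInt());    // 3
        System.out.println(across.getAsInt());  // 0
        System.out.println(across.getAsInt());  // 1
    }
}
```

The fixed supplier can be a one-line lambda; the counting supplier needs an anonymous class (or a named one, as in the article) because it carries mutable state between calls.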

Of course, these are not everything. There is one, final, issue.

4. What was that that you said about “CSS”?

The CSS is specific to JavaFX; a complete listing of all of the properties is available here. If you are unfamiliar with the syntax of CSS, you can find an excellent tutorial on it (for HTML, at least) at W3Schools. It isn’t as versatile as Java or C, but its creators pulled many of its properties from C-like languages.


.backing {
    -fx-background-color: lightyellow;
    -fx-insets: 0;
    -fx-padding: 15;
    -fx-spacing: 10;
}

.node {
    -fx-background-color: lightblue;
    -fx-background-radius: 5.0;
    -fx-border-color: black;
    -fx-border-radius: 5.0;
}

Keep this in the same folder as GUIMain, and it will find it as written.

The CSS styling of JavaFX controls is capable of everything HTML 5 is and then some. It’s an excellent fusion of programming and markup. I encourage you to play with the layout of GUIMain’s scene, and the actual program fed to the builder.



Software Language Engineering: The Visitor Pattern

This may feel like a slight detour; but believe me, it’s a necessary one. If you are already fully familiar with the Visitor pattern, you are free to skip this section.

The Visitor pattern is hardly the only way to handle grammar parsing; but I’ve been trying to find a better one for a couple of weeks now, and without much success. Thus, I must recommend it.

The Visitor pattern functions as a way to operate on a structure of objects, like our Abstract Syntax Tree, while evading the encapsulation of the algorithm within those objects. It approaches from the outside; it visits, hence the name. Unlike most design patterns, you can actually implement the approach, for most purposes, in a single package. I’ll explain how to do that in the second part of this addition.

1. The Recursive Nature of the Visitor Pattern

First, a word on the unusual structure.

Every visitable object is referred to as an “element”; and every element has an inherent method which receives a single visitor object as its parameter. Additionally, every visitor object has a method which receives an element as its singular parameter.

This may sound rather circular, but the meat of the work is done by the visitor’s method. When visit is called, with a specific instance of the element as its parameter type, a window opens to perform specific operations on the available materials (fields and methods) in the element. This window does not require any specific change to the element’s code.

So when is this visit method called? Every time the element’s inherent visitor-receiving method is called. Is this roundabout?

No. The visitor-receiving method may be called again, on other related objects, by the visitor’s element-receiving method. You can exit the circle any time you would like.

For the sake of clarity, imagine that the element’s method is called accept(Visitor visitor), and the visitor’s method is called visit(Element element). I’ll give you a more tangible example.
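Here is that round trip in miniature, as a self-contained sketch with fresh, minimal declarations of my own (not the ones from the repository):

```java
interface Visitor { void visit(Element element); }

interface Element { void accept(Visitor visitor); }

class Seat implements Element {
    //accept() immediately hands control back to the visitor,
    //now with the concrete runtime type in hand: double dispatch
    public void accept(Visitor visitor) { visitor.visit(this); }
}

public class RoundTrip {
    static String describe(Element element) {
        StringBuilder out = new StringBuilder();
        //the visitor can be a lambda here, since Visitor has one abstract method
        element.accept(e -> out.append("visited a ").append(e.getClass().getSimpleName()));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Seat()));  // visited a Seat
    }
}
```

Notice that describe() never names Seat; the element itself supplies its concrete type by calling back into the visitor.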

1.1. The Visitor Pattern at Work

Suppose we have an object structure describing a bicycle. Each bicycle part is an extension of BicycleElement, which implements our accept method. The parts include a number of tires, two pedals, a set of handlebars, a chain, a seat, and a frame. Each of these elements is interchangeable and has a score of properties that belong to it alone.

Putting together such a class structure is trivial for a Java programmer; but what if we want to inventory the parts of a bike? Display their features on an output stream (for simplicity, System.out), as an example? At the same time, this analysis software must remain separate from the software describing the part, in the name of the open/closed principle and the single responsibility principle. At least, if you’re a principled programmer, with the least bit of concern for the next person to touch your code, you want to follow those.

The problem is solved through double-dispatch. Your analysis class, let’s call it Analyzer, might have a set of Visitors that expect to receive each category of BicycleElement. Once your CompleteBikeElement is prepared, Analyzer would call its accept method with a class of Visitor. The accept method would call the visit method with itself (and all properties exposed), and the visit method would display the properties of the BicycleElement.

Naturally, you might be wondering how you would get the properties of every BicycleElement when only the CompleteBikeElement was passed to the Visitor. That’s where accept comes in. CompleteBikeElement has total access to each of its parts, and can easily pass a visitor (maybe the same one) to each of them.

2. Implementing a Visitor Pattern in a single package

I’ve implemented this in the SLE git under oberlin.builder.visitor.

You only need three classes: Element, Visitor, and VisitHandler. In fact, you arguably only strictly need Element and Visitor. Their code is fairly straightforward, remembering that none of them are concrete: Element and Visitor are interfaces, and VisitHandler is an abstract class.


Element:

package oberlin.builder.visitor;

public interface Element {
    public void accept(Visitor visitor);
}

Visitor (feel free to be a little creative with this one):

package oberlin.builder.visitor;

import java.util.Map;

public interface Visitor {
    public default void visit(Element element) {
        //body assumed: delegate to the handler registered for this element's class
        VisitHandler handler = getVisitHandler(element.getClass());
        if(handler != null) handler.handle(element);
    }
    public Map<Class<? extends Element>, VisitHandler> getHandlerMap();
    public default void addVisitHandler(Class<? extends Element> elementClass, VisitHandler handler) {
        getHandlerMap().put(elementClass, handler);
    }
    public default VisitHandler getVisitHandler(Class<? extends Element> elementClass) {
        return getHandlerMap().get(elementClass);
    }
}


VisitHandler:

package oberlin.builder.visitor;

public abstract class VisitHandler {
    public abstract void handle(Element element);
}

Done. Now, as an aside to get the point across, let’s build that little bicycle project I was talking about.

2.1. Using the Visitor Package

Let’s start with our bike parts. For the sake of brevity, I’m leaving out package and import statements as they’re fairly self-explanatory.

public class ChainElement implements Element {

    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}


public class FrameElement implements Element {

    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}


public class HandlebarElement implements Element {

    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}


public class PedalElement implements Element {

    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}


public class SeatElement implements Element {

    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}


public class TireElement implements Element {

    public void accept(Visitor visitor) {
        visitor.visit(this);
    }
}


They could be anything, of course; in this instance, we’re leaving the actual work to Visitor, and how it gets there to Element. It might even be wise to make it a Java 8 Default Method.
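A sketch of that default-method idea, since accept() is identical in every part class, the interface can carry it once. The names here are illustrative, not from the repository:

```java
interface PartVisitor { void visit(Part part); }

interface Part {
    //one default implementation of accept() serves every part class
    default void accept(PartVisitor visitor) {
        visitor.visit(this);
    }
}

class Tire implements Part {}   //inherits accept() for free
class Chain implements Part {}

public class DefaultAccept {
    static String nameOf(Part part) {
        StringBuilder out = new StringBuilder();
        part.accept(p -> out.append(p.getClass().getSimpleName()));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(nameOf(new Tire()));   // Tire
        System.out.println(nameOf(new Chain()));  // Chain
    }
}
```

The dispatch still works because `this` inside the default method carries the runtime class of the implementing part, even though the method body lives in the interface.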

Lastly, the complete bike:

package bicycle;

import oberlin.builder.visitor.Element;
import oberlin.builder.visitor.Visitor;

import java.util.*;

public class CompleteBikeElement implements Element {
    private FrameElement frame = new FrameElement();
    private SeatElement seat = new SeatElement();
    private PedalElement leftPedal = new PedalElement();
    private PedalElement rightPedal = new PedalElement();
    private TireElement frontTire = new TireElement();
    private TireElement backTire = new TireElement();
    private HandlebarElement handle = new HandlebarElement();
    private ChainElement chain = new ChainElement();

    public void accept(Visitor visitor) {
        //pass the visitor along to every part
        frame.accept(visitor);
        seat.accept(visitor);
        leftPedal.accept(visitor);
        rightPedal.accept(visitor);
        frontTire.accept(visitor);
        backTire.accept(visitor);
        handle.accept(visitor);
        chain.accept(visitor);
    }
}


So, what does our visitor look like?

package bicycle;

import java.util.HashMap;
import java.util.Map;

import oberlin.builder.visitor.Element;
import oberlin.builder.visitor.VisitHandler;
import oberlin.builder.visitor.Visitor;

public class BikeVisitor implements Visitor {
    Map<Class<? extends Element>, VisitHandler> map = new HashMap<>();

    public BikeVisitor() {
        map.put(TireElement.class, new VisitHandler() {
            public void handle(Element element) {
                System.out.println("[Tire qualities]");
            }
        });
        map.put(FrameElement.class, new VisitHandler() {
            public void handle(Element element) {
                System.out.println("[Frame properties]");
            }
        });
        map.put(ChainElement.class, new VisitHandler() {
            public void handle(Element element) {
                System.out.println("[Chain manufacturing data]");
            }
        });
        map.put(SeatElement.class, new VisitHandler() {
            public void handle(Element element) {
                System.out.println("[Seat qualities]");
            }
        });
        map.put(PedalElement.class, new VisitHandler() {
            public void handle(Element element) {
                System.out.println("[Pedal qualities]");
            }
        });
        map.put(HandlebarElement.class, new VisitHandler() {
            public void handle(Element element) {
                System.out.println("[Handlebar qualities]");
            }
        });
    }

    public Map<Class<? extends Element>, VisitHandler> getHandlerMap() {
        return map;
    }
}

And lastly, our analysis program, which doesn’t even need to be in the same package. (Well, it is, but it doesn’t have to be.)

package bicycle;

public class Analysis {

    public static void main(String[] args) {
        CompleteBikeElement bike = new CompleteBikeElement();
        bike.accept(new BikeVisitor());
    }
}


As you can see, the analysis itself doesn’t have to do much. In the past, a different visitor was implemented for each element, which is still occasionally useful. In this instance, to cut down on the number of classes, I simply cross-reference the class of the element on a map, and retrieve the functional interface with the material to operate on that specific type of element. It’s better practice than using instanceof, not because instanceof is slow, but because it’s usually a red light that you’re overlooking a feature of object-oriented languages. This is also referred to as a “bad code smell”. (Map may also be slightly faster, working it out in my head; but I haven’t done any benchmarking.)

The result?

[Frame properties]
[Seat qualities]
[Pedal qualities]
[Pedal qualities]
[Tire qualities]
[Tire qualities]
[Handlebar qualities]
[Chain manufacturing data]

Of course, any element could have broken itself down further, and may continue to. This is a very flexible pattern, and understanding double-dispatching can lead to some very efficient architectures. When you understand how it works, you’re ready to continue!

A Final Note

Speaking of bad code smell, it is also worth noting that double-dispatching is never to be used unless it is truly necessary. Extensive double-dispatching (or triple-dispatching, or—no. It hurts to think about.) can be very bad code smell in itself. If you’re even a little speculative, you may have noticed that excessively open Visitor patterns are asking for trouble.

Uncontrolled dynamic dispatching is like having your fingers tangled in string, or trying to untie a massive wad of Christmas lights before the holiday because someone put them in a box instead of twist-tying them properly. Dynamic dispatching is the box; used improperly, it promises that a year later, you will be dealing with a massive wad of tied up Labyrinthian code, and you’ll have zero fun untying it. BECAUSE REALLY, WHY DID YOU PUT THE CHRISTMAS LIGHTS IN AN EFFING BOX, MOM!?!?



The Easy Way to Import from Guava or Apache Commons

In 1991, Sun Microsystems (specifically, James Gosling) set out to answer a long-standing question. The Java programming language, at the time called Oak, was established and released to cut development times to a small fraction of what they used to be. The promise was “write once, run anywhere”, which for the most part was true: a program could be compiled on one machine, and run on any that supported the same virtual machine.

Don’t get me wrong; Oak was a disaster for efficiency. However, it proved that virtual-machine based productive software wasn’t just an idea; it was actually possible. That was huge. So, Oak became Java (a word already very familiar to programmers), and Sun got to work on expanding the API and improving the compiler. Honestly, Java 1.0 was also crap; but it was very exciting crap. Java 2, in my humble opinion, was where it really took off.

During this time (Oak to now), a lot of features were added: just-in-time compiling, regular expressions, enumerations, recently lambda expressions, and most importantly a gazillion bazillion classes in the JDK. All kinds of solutions to what became a very broad class of problems, shortening development time quite a bit for programmers. Surprisingly, the internet, and the population of Java programmers, grew faster than Sun could keep up with. It had the added pressure of improving the compiler, too, which didn’t help.

As a response to this frustrating lag, the Open Source community (you may picture it in a hero cape) took off and created a vast assortment of additional libraries. Many of them, such as LWJGL and JOAL, were domain-specific; some of them weren’t. Apache Commons was the first big guy to come in. It’s actually a collection of libraries, the most important of which (at least for me) was the math (now Math3) library. It offered tried and tested methods for handling complex numbers, Fourier transforms, tuples, and all sorts of awesome stuff. That meant that the people, previously using the vanilla JDK, didn’t have to write it themselves. That saved a boatload of time.

Later, Google came up with Guava, their own contribution to the community (fully compatible with Apache Commons). Guava had neat features like bidirectional maps, and very handy byte conversion methods. Much like Apache Commons, it’s expanding all the time.

In olden days (the 1990s), it was often necessary to have the entire library as a local resource. That means on-disk. This could be an issue when you only needed a few methods out of something as large as Math3. It is an enormous library, with a lot of binary data. Then came Apache Maven. I don’t intend to describe how to use Maven manually here; it isn’t something I’m an expert at, and it often isn’t necessary. There are plenty of wonderful tutorials on the internet. I’m going to describe how to use it quickly.

Maven allowed for the inclusion of libraries from a URL, without the need to download the entire library to disk. As more and more computers were online 24/7, this became increasingly feasible. Through a feature of the Maven build tool, a file called pom.xml, the features of the project could be described, and lazily received as needed. The “POM” in pom.xml stands for “Project Object Model”, which is very accurate.

So How Can I Use Maven to Import these Libraries?

My IDE of choice is Eclipse (which is not to say that there aren't other good ones out there); if you use something else, there's almost always a comparable utility native to your environment, which should work similarly to this. Eclipse has a plugin which handles Maven directly. To get it, go to the Help menu, and select "Eclipse Marketplace". Under the marketplace, look up the keyword "Maven". You will probably see quite a few "m2e" entries; the central one usually starts with "Maven Integration for Eclipse…", and the rest generally depend on your Eclipse version.

Install it, and restart Eclipse. Next up, assuming that your project already exists, you need to create a Maven project out of it. Right-click it, select “Configure…”, and click “Convert to Maven Project”. (If it nags you about the group ID or the artifact ID, it’s because of a naïve algorithm for generating the identifiers from the project name; just remove any spaces and funny characters and try again. The details of what these identifiers are are better left to more detailed tutorials on Maven.) It will set up your Eclipse project as a Maven project as well, specifically, an M2Eclipse project.

You will have a new file called “pom.xml” located in your project directory. There are other ways to do this, but the dependency information is typically provided in raw XML and copy/pasting it is usually fastest. Enter XML mode on the document (currently by clicking the last lower tab, labelled “pom.xml”), and find the end of the “<build>” entries.

Right below the “</build>” tag, enter “<dependencies>”. Eclipse will often fill in the terminating tag for you. Between these two, you may enter the dependency information typically found on the web for the library you are using; such as, for LWJGL:
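For LWJGL 2.9, a typical entry looks like the following (the version number here is illustrative; check Maven Central for the one you want):

```xml
<dependency>
  <groupId>org.lwjgl.lwjgl</groupId>
  <artifactId>lwjgl</artifactId>
  <version>2.9.1</version>
</dependency>
```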


Or, for Apache Commons Math3:
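A typical Math3 entry (again, the version is illustrative):

```xml
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-math3</artifactId>
  <version>3.3</version>
</dependency>
```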


Or for Google Guava:
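A typical Guava entry (version illustrative, as before):

```xml
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>18.0</version>
</dependency>
```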


Then, clean your project by going to the Project menu and clicking on the "Clean" option. It's generally good to recompile a project completely from the ground up, often called cleaning, after making a major change to its dependencies; this is often done automatically, but not always. You'll now note that you can import any of the packages in Guava, Commons, or whichever library you have added, without ever having to hunt down and install the jars yourself.

How Does This Change Things?

The JDK is not a small library as it is; it's actually quite enormous. But if you have ever found yourself struggling to write an extension of a collection, or a math utility, that could be used in a wide variety of projects, those efforts will now be fewer and further between. You may check the javadocs for these APIs as readily as the javadocs for the JDK, and need not worry about increasing the disk footprint of your development environment or (much worse) your project.


Posted by on November 22, 2014 in Programming
