
Texturing with Blender Cycles

This is something that was bugging me for a little while. Cycles offers leaps and bounds in efficiency over the classic Blender rendering engine; but the downside is that we all need to learn how to use it in a new way.

I’m going to cover the simplest thing I can think of first, which is texturing. It’s targeted at people who have never textured a thing before in their lives; so if you’re already familiar with a section, feel free to skip ahead.

Blender is an excellent 3D modeling tool (and incidentally a decent video editor as a side effect), but it isn’t meant for 2D work, and that’s what textures either are or ultimately rely on. Additionally, most of us are GIMP, Adobe Photoshop, or Inkscape fans and aficionados to begin with, and an additional image editor just isn’t necessary. So, Blender lets you use your image editor of choice, and settles for handling proper mapping of images to UV coordinates.

UV Coordinates

So, what is a UV coordinate?

It’s safe to say that we’re all quite familiar with X, Y, Z, rho, phi, theta, and their related coordinate systems; but it’s bad form to refer to a coordinate of a texture via X and Y. Yes, they’re arguably two dimensional, but XYZ is used for points in space.

You don’t know, on the basis of a texture alone, where that point in space is going to be; nor should there be any confusion between the location of a point on a texture and the location of that point in 3D space (which, yes, actually comes up a lot).

Additionally, as you come to understand textures better, you’ll discover two things. The first is that their coordinates are arguably six dimensional–you need to consider color and alpha, as well. But that’s the easy and boring part. The other note is that you can do a lot more with them than color a mesh; they’re just as useful for bump mapping, light mapping, expressing frequency density, even animation. With a proper set of tools, there really isn’t any limit to what you can do with a texture.

So, nominally, U is a horizontal coordinate of an element on a texture, and V is a vertical component.

As a side note, you will occasionally find texture coordinates under the names S and T. The difference is a matter of convention, but in most cases this refers to whether the vertical component–V or T–moves “upward” or “downward”. The V axis generally faces “down” the image, the T axis faces “up”. However, if you understand why “up” and “down” are in quotation marks here, and recognize their arbitrary nature, then you already know how silly this can be.

Suffice it to say that they are a respectable alternative, a flag that you might need to do some rotating or inverting of your image to get it to behave, and in the rare case of needing third and fourth position components, S and T are preferred.
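If it helps to see the convention in code, here’s a tiny sketch of my own (the names are hypothetical, not from any particular engine) mapping a normalized coordinate pair onto a width-by-height grid of pixels, along with the “up” versus “down” distinction between T and V:

//hypothetical helpers: map normalized texture coordinates to pixel indices
static int column(double u, int width)    { return (int) (u * (width - 1)); }          //U: left to right
static int rowFromV(double v, int height) { return (int) (v * (height - 1)); }         //V: runs "down" the image
static int rowFromT(double t, int height) { return (int) ((1.0 - t) * (height - 1)); } //T: runs "up" the image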

Unwrapping a Mesh

I’m skipping over the techniques of creating a 3D shape (mesh, here) in Blender, as it’s somewhat tangential to the subject.

Remember to use the Cycles rendering engine, as it all works quite differently from the classic Blender rendering engine.

We’ll start with a basic cube; the starter file will work.

Drag from the top right corner of your 3D view, until you have a new sub window. From its bottom left menu, select “UV/Image Editor”. What you have will initially be fairly boring; a simple rectangle in the middle of the cell.

This is the direct view of your texture, in UV coordinates. Every face of your shape (in this case, the cube) samples its coloring from a position within the UV window. Of course, your cube isn’t even textured yet, so we’ll get started with something called unwrapping.

Unwrapping

Remember when you were in grade school, and you learned to create a cube from a simple piece of paper? This classic drawing, a pair of scissors, and a little tape?

Cube

That was basically your first UV unwrap.

If you wanted to, you could have sketched a texture onto the image, and presuming you knew which edges were going to wrap to which, you could “texture” that paper cube.

UV unwrapping really isn’t any different than that; save that it’s digital, and with the Cycles material editor you can do much cooler things with it.

To unwrap, go to your 3D view, switch to edit mode, and hit “U”. A menu will pop up.

blender_Unwrap_menu

These are all different methods of UV unwrapping. I’m literally not even familiar with every method by which this can be done–it extends well beyond the menu–but I am going to go over some of my favorites.

Smart UV Project

This will basically unwrap your shape so that you have every face visible.

blender_Smart_UV_Project

It is particularly helpful for simpler meshes, like our cube, or some UV spheres. Every modification made to each square’s (or polygon’s) image will be “painted” onto your mesh.

It’s easy to see that this default unfolding is quite different from the example above, which also works; unfortunately, if you tried to cut a cube out of a flat piece of paper like this, I imagine you’d have quite a bit of trouble folding it. (Cycles couldn’t care less.)

You’ll notice that the outline shape is styled exactly like a mesh; and in fact, you can grab and drag every point in it. Once a texture is assigned to it, any modification of a particular outline will alter the nature of the image painted onto the corresponding face; you basically have free rein with your texture.

This is true because Blender uses a technique known as lerping (linear interpolation, for long) to determine the appropriate color for each pixel of your render. It basically lets you find a happy medium between one color and another, among other things. 3D programmers do it roughly as often as they click the mouse, so clearly it’s useful and important.
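For the curious, lerping itself is only a couple of lines of arithmetic. Here’s a rough sketch of my own (not Blender’s internals): blend two values, then do the same per channel for a packed RGB color.

//linear interpolation: t = 0 gives a, t = 1 gives b, anything between blends the two
static double lerp(double a, double b, double t) {
    return a + (b - a) * t;
}

//the same idea applied channel by channel to two packed 0xRRGGBB colors
static int lerpRGB(int rgbA, int rgbB, double t) {
    int r = (int) lerp((rgbA >> 16) & 0xFF, (rgbB >> 16) & 0xFF, t);
    int g = (int) lerp((rgbA >> 8) & 0xFF, (rgbB >> 8) & 0xFF, t);
    int b = (int) lerp(rgbA & 0xFF, rgbB & 0xFF, t);
    return (r << 16) | (g << 8) | b;
}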

However, a complete unwrap can be a nightmare for 2D artists, especially when it comes to complex meshes (like, say, the human body). For that, we have other methods.

Project from View

…and Project from View (Bounds).

Try hitting U again over the 3D view in edit mode, and this time, select Project from View. You’ll notice that the point map projected over the texture is a literal copying of the camera view (in this case, that means your view) of the mesh, flattened out. This has drawbacks, but it can be a godsend to 2D artists.

Ever see one of those paintings on a street or sidewalk with an absolutely convincing illusion of depth (Hongkiat has a whole roundup of 3D street art)? Then, as you walk around or look at it from another angle, the depth disappears? That’s analogous to Project from View.

On exporting (which I’ll get to in a second), an artist can paint/draw/render whatever they feel the character should look like, from that angle. They will get a blindingly realistic portrayal, from that angle, in the final render. The drawback is that the projection has to stretch (via lerping) at increasingly radical angles, and if you aren’t careful, this can kill the magic in the final render.

The other thing worth noting is that, by default, the texture painting punches through to the other side of the object as well; but that isn’t as much of a problem as you might think. Most of your edit-mode vertex manipulation tools also work on the UV editor; and you also have the ability to manipulate the texture further in the Node editor, which we’ll cover later on.

The only difference that Project from View (Bounds) makes is that the projection will be stretched out to cover the entire bounds of the UV image, which is usually a good thing; wasted space in the texture is still stored, and it isn’t helpful in the end. However, if memory is a concern, then it can be a good idea to keep the projection shrunk down to a minimum and simply crop your image accordingly.

Exporting the UV Image

Under the UV/Image Editor’s UV menu, there is an option titled Export UV Layout (as of Blender 2.78). This will save your UV layout–inclusive of the vertices and segments–as a 2D image.

The default is to save as a PNG. However in the bottom left you’ll see that SVG and EPS (encapsulated postscript) are also available. That should be handy for all you vector-graphics jockeys.

Go ahead and export your cube mapping. Open it in your favorite graphics editor.

You’ll find that all of the edges and vertices have been imprinted on the image; it is absolutely ready for you to paint a texture.

I can’t offer many texturing tips here–it’s too far from the core focus of this post. However, I might in the future.

Make your texture and save it.

Importing the UV Image

This is where we finally make the magic happen.

There are effectively two parts to applying a texture to a mesh. The first is in the Properties pane under the texture panel.

Blender_Texture_Pane

This is all assuming that you still have your cube selected. You’ll note the square image at the top of the panel, the one with the checkerboard pattern on it? That’s your texture.

You’ll also notice the ubiquitous open image box, which I had my mouse over in this instance. Click on that, and guide it to your texture file. There are a number of options beneath it, but the most important two are Color Space, and Bounds Handling. The others can certainly be helpful, but they’re a bit beyond the scope of this tutorial. (I recommend that you either play with them until you’ve figured them out, like I would; or find a more specific tutorial.)

Color Space

This simply informs Blender of whether you’re using a colored image, or an image of non-colored data. More than anything, this affects the way light plays on the surface of it.

I’m bringing it up mostly because the average uninformed user thinks that non-color data means black-and-white; which it doesn’t. (Try it out and see.) By default, for color data, Cycles converts the traditional sRGB color space into a linear color space (based on Rec. 709). If your image holds raw data rather than color, you probably don’t want that. Non-color data prevents this conversion. If you compare the two in a render, you’ll find that treating a color image as non-color data leaves it without a certain feeling of depth.
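To make that conversion concrete, this is the standard sRGB-to-linear transfer function: the sort of transform a Color image goes through and a Non-Color Data image skips. It’s a sketch for illustration, not Cycles source:

//standard sRGB-to-linear conversion, applied per channel to values in [0, 1]
static double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92
                          : Math.pow((c + 0.055) / 1.055, 2.4);
}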

Bounds Handling

This is basically how your texture behaves when a face’s UV coordinates run past the edge of the image. The options are clip, extend, and repeat, each described below (with a small code sketch after them).

Clip basically means stop rendering texture there, and return to the default for out-of-bounds points. This can be quite handy for, say, decals.

Extend means smear the edge pixels of the texture out over everything beyond its bounds.

Repeat, the default, means start over from the opposing edge of the texture.
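In code terms, the three behaviors boil down to something like this sketch (my own shorthand, not Cycles source), applied to a U or V value that has wandered outside the 0-to-1 range:

//clip: anything out of bounds gets no texture at all (NaN standing in for "use the default")
static double clip(double u)   { return (u < 0.0 || u > 1.0) ? Double.NaN : u; }

//extend: clamp to the nearest edge, smearing the border pixels outward
static double extend(double u) { return Math.min(1.0, Math.max(0.0, u)); }

//repeat: wrap around so the texture tiles forever
static double repeat(double u) { return u - Math.floor(u); }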

Materials

You might notice, if you skipped ahead and tried rendering, that your mesh doesn’t have a texture yet. Well, that’s because we haven’t told it what to do with that image. As I said earlier, there are countless possibilities.

Go to your materials tab–that’s the one right next to texture, with the checkered sphere on it–and add a new material. Enable nodes with the giant button under surface, and for the moment, choose “Diffuse BSDF” under the shader menu.

Remember that corner that you dragged to create the new cell in the blender window? We’re going to do it again; this time, I suggest you do it from the UV image editor cell’s top right corner, to the left. You’ll get another cell. Switch it (via the bottom left menu) to Node Editor.

This is where a lot of the changes Cycles brings, as far as process goes, come into play. You build your shaders with a spaghetti graph of nodes. Many of the controls from the 3D view are still available here. Hover over an empty spot, hit Shift+A, and select Texture, then Image Texture.

Click on the image icon on the left of the Image Texture node, and you’ll get a list of available (loaded) images. Pick out your texture image.

Drag from the yellow circle on the right of Image Texture to the yellow circle on the left of Diffuse BSDF, and the shader’s color will now be driven by the image texture.

However, your cube just went black, didn’t it? WTH, right? Not so much; the problem is that, like I said earlier, images can also be used for animations, which involve panning and scaling. So the node needs to know which UV coordinate it’s generating a color for.

Shift+A in an exposed area and select Input, then UV Map. Drag from the innocuous-looking blue circle on the right of the UV Map node to the blue circle on the left of Image Texture (labeled “Vector”).

Blender_Node_Editor

Now you’re ready to go. Make sure your lighting is right (I suggest an area lamp tilted to face the cube, tuned to around 2000 in strength), hit F12 to render an image, and observe your beautiful textured cube.

textured_cube

 
Posted on April 20, 2017 in 3D Modelling

Mint-Plum Sauce Lamb Chops

I gotta admit off hand, most of this recipe is about the plum sauce.

Mint-Plum Sauce

Ingredients:

  • 6 plums (no need to immediately pit)
  • shallow water
  • 1 pkg. fresh mint leaves
  • 2-3 T. honey
  • ¼ c. sherry wine
  • 1-2 t. ground ginger
  • 2 small cloves garlic, finely minced

Rinse six plums, and set them in shallow water in a saucepan (the water should rise no higher than an inch and a half). Drop in one sprig of mint leaves. Start the burner, drop to medium or medium-high, and wait for a simmer. As the plums cook down, stir periodically to prevent the skins (which will simmer off) from sticking to the bottom of the pan. As the plums soften, open each with slight pressure from a set of tongs, remove the pit, and dispose of it. (Remember that you will have a total of six pits!)

Once plums have cooked down into a homogeneous purple substance, add remaining mint leaves, then stir. Allow mint leaves to cook into the fruit sauce. Add honey, stir, and allow to cook further. Add quarter cup sherry wine (which, remember, will slightly sweeten the sauce) and bring back to a simmer. Turn to low, add ground ginger. Peel garlic, crush under a knife blade, and finely mince it; then add that to the sauce; simmer for no less than five minutes more. (Get the flavor of the garlic, which is also sweet but mildly pungent, cooked thoroughly into the bulk of the sauce.) Turn off burner, let cool.

Suggested serving: over grilled lamb, with a side of quinoa and briefly steamed kale (2-6 minutes, to taste).

lamb with plum sauce

 
Posted on March 14, 2017 in Recipes, Uncategorized

Effective OpenAL with LWJGL 3

Jesus Bloody Christ it’s been a while.

So, a lot of you are likely interested in developing on Java with LWJGL 3 instead of LWJGL 2.9.*; as you should be. LWJGL 3 has support for a lot of modern industry trends that older versions did not, such as multi-monitor support without backflipping through flaming hoops, or basically anything involving GLFW. It’s still in beta, I know, but it’s a solid piece of work and the team behind it is dedicated enough to make it a reliable and standing dependency for modern projects.

Except for every now and then, when it happens to be missing some minor things. Or, more importantly, when there’s a dearth of documentation or tutorials on a new trick you’re pulling.

I can contribute, at least in part, to both of those.

OpenAL is the audio world’s equivalent to OpenGL; it’s a sophisticated and sleek interface to sound hardware. Many common effects and utilities, such as 3D sound, are built into it directly; and it interfaces sublimely with code already designed for OpenGL. Additionally, it’s also a very tight interface that does not take long at all to learn.

In the past, I would have suggested JavaSound for Java game audio; it is also a tight API, but it lacks these features. Most major audio filters have to be built into it rather directly, often by your own hand, and there’s no official guarantee of hardware optimization. However, what LWJGL 3’s OpenAL interface now lacks can easily be covered by readily-present JavaSound features, such as the audio system’s file loader.

This entry covers, step by step, how one would do such a thing.

Let’s start with a basic framework. I’ve tried to keep a balance between minimal dependencies and staying on-topic, so I’ll suggest that you have both LWJGL3 (most recent version, preferably), and Apache Commons IO, as dependency libraries.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

import javax.sound.sampled.*;

import org.apache.commons.io.IOUtils;
import org.lwjgl.BufferUtils;
import org.lwjgl.openal.*;

import static org.lwjgl.openal.ALC10.*;
//ALC_MAX_AUXILIARY_SENDS comes from the EFX extension (EXTEfx in LWJGL 3)
import static org.lwjgl.openal.EXTEfx.ALC_MAX_AUXILIARY_SENDS;

class Lesson {
    public Lesson() throws Exception {
        //Start by acquiring the default device
        long device = ALC10.alcOpenDevice((ByteBuffer)null);

        //Create a handle for the device capabilities, as well.
        ALCCapabilities deviceCaps = ALC.createCapabilities(device);
        // Create context (often already present, but here, necessary)
        IntBuffer contextAttribList = BufferUtils.createIntBuffer(16);

        // Note the manner in which parameters are provided to OpenAL...
        contextAttribList.put(ALC_REFRESH);
        contextAttribList.put(60);

        contextAttribList.put(ALC_SYNC);
        contextAttribList.put(ALC_FALSE);

        // Don't worry about this for now; deals with effects count
        contextAttribList.put(ALC_MAX_AUXILIARY_SENDS);
        contextAttribList.put(2);

        contextAttribList.put(0);
        contextAttribList.flip();
        
        //create the context with the provided attributes
        long newContext = ALC10.alcCreateContext(device, contextAttribList);
        
        if(!ALC10.alcMakeContextCurrent(newContext)) {
            throw new Exception("Failed to make context current");
        }
        
        AL.createCapabilities(deviceCaps);
        
        
        //define listener (AL_ORIENTATION takes six floats: the "at" vector, then the "up" vector)
        AL10.alListener3f(AL10.AL_POSITION, 0f, 0f, 0f);
        AL10.alListener3f(AL10.AL_VELOCITY, 0f, 0f, 0f);
        FloatBuffer orientation = BufferUtils.createFloatBuffer(6)
                .put(new float[]{0f, 0f, -1f, 0f, 1f, 0f});
        orientation.flip();
        AL10.alListenerfv(AL10.AL_ORIENTATION, orientation);
        
        
        IntBuffer buffer = BufferUtils.createIntBuffer(1);
        AL10.alGenBuffers(buffer);
        
        //We'll get to this next!
        long time = createBufferData(buffer.get(0));
        
        //Define a source
        int source = AL10.alGenSources();
        AL10.alSourcei(source, AL10.AL_BUFFER, buffer.get(0));
        AL10.alSource3f(source, AL10.AL_POSITION, 0f, 0f, 0f);
        AL10.alSource3f(source, AL10.AL_VELOCITY, 0f, 0f, 0f);
        
        //fun stuff
        AL10.alSourcef(source, AL10.AL_PITCH, 1);
        AL10.alSourcef(source, AL10.AL_GAIN, 1f);
        AL10.alSourcei(source, AL10.AL_LOOPING, AL10.AL_FALSE);
        
        //Trigger the source to play its sound
        AL10.alSourcePlay(source);
        
        try {
            Thread.sleep(time); //Wait for the sound to finish
        } catch(InterruptedException ex) {}
        
        AL10.alSourceStop(source); //Demand that the sound stop
        
        //and finally, clean up
        AL10.alDeleteSources(source);
        

    }

}

The beginning is not unlike the creation of an OpenGL interface; you need to define an OpenAL context and make it current for the thread. Passing a null byte buffer to alcOpenDevice will provide you with the default device, which is usually what you’re after. (It is actually possible to interface with, say, multiple sets of speakers selectively, or the headphones instead of the speaker system, if you would like; but that’s another topic.)

Much like graphics devices, every audio device has its own set of capabilities. We’ll want a handle on those, as well. It’s safe to say that if a speaker can do it, OpenAL is capable of it; but not all speakers (or microphones) are created the same.

After this, OpenAL will want to know something of what we’re expecting it to manage. Note that it’s all passed over as a solid int buffer. We’re providing it with a notion of what features it will need to enact, or at least emulate; with a sequence of identifiers followed by parameters, terminated with a null. I haven’t begun to touch all that is possible here, but this attribute list should be enough for most uses.

After that, create the context, make it current, check to see that it didn’t blow up in your face, and register the capabilities. (Feel free to play with this once you’ve got the initial example going.)

So, before I get to the part where JavaSound comes in, let’s start with the nature of how OpenAL views sound. Sound, in its view, has three components: a listener, a source, and an actual buffer.

The listener would be either you or your program’s user; however, the program wants to know a little about your properties. Are you located somewhat to the left or right? Are you moving (or virtually moving)? I usually set this up first, as it is likely to be constant across all sounds (kind of like a graphics context).

Next, we have a method of my own creation that builds and registers the audio buffer. Forgive me for the delay, but that’s where JavaSound’s features (in the core JDK) come in, and I’m deferring it to later in the discussion. You will note that audio buffers have to be registered with OpenAL, as it needs to prepare for the data. There’s a solid chance that you will have sound-processor-local memory, much like graphics memory, and it will have to be managed accordingly by that processor before you can chuck any data at it.

Let’s look at that audio buffer creator.

     private long createBufferData(int p) throws UnsupportedAudioFileException, IOException {
        //shortcut finals:
        final int MONO = 1, STEREO = 2;
        
        AudioInputStream stream = AudioSystem.getAudioInputStream(
                Lesson.class.getResource("I Can Change — LCD Soundsystem.wav"));
        
        AudioFormat format = stream.getFormat();
        if(format.isBigEndian()) throw new UnsupportedAudioFileException("Can't handle Big Endian formats yet");
        
        //load stream into byte buffer
        int openALFormat = -1;
        switch(format.getChannels()) {
            case MONO:
                switch(format.getSampleSizeInBits()) {
                    case 8:
                        openALFormat = AL10.AL_FORMAT_MONO8;
                        break;
                    case 16:
                        openALFormat = AL10.AL_FORMAT_MONO16;
                        break;
                }
                break;
            case STEREO:
                switch(format.getSampleSizeInBits()) {
                    case 8:
                        openALFormat = AL10.AL_FORMAT_STEREO8;
                        break;
                    case 16:
                        openALFormat = AL10.AL_FORMAT_STEREO16;
                        break;
                }
                break;
        }
        
        //load data into a byte buffer
        //I've elected to use IOUtils from Apache Commons here, but the core
        //notion is to load the entire stream into the byte array--you can
        //do this however you would like.
        byte[] b = IOUtils.toByteArray(stream);
        ByteBuffer data = BufferUtils.createByteBuffer(b.length).put(b);
        data.flip();
        
        //load audio data into appropriate system space....
        AL10.alBufferData(p, openALFormat, data, (int)format.getSampleRate());
        
        //and return the rough notion of length for the audio stream!
        return (long)(1000f * stream.getFrameLength() / format.getFrameRate());
    }

We’re hijacking a lot of the older JavaSound API utilities for this. OpenAL, much like OpenGL, isn’t really “open”, nor is it technically a “library”. So, having something around for handling audio data is helpful, and why bother writing our own when it’s already built into the JDK?

For JavaSound, you work with either Clips, or (more frequently) AudioInputStreams. You can read most audio file formats directly via AudioSystem.getAudioInputStream(…); in this case, I’ve elected to use a WAV format of LCD Soundsystem’s “I Can Change”, because James Murphy is a god damned genius. However, you can use anything you would like; to get it to work with this just drop it in the same source directory.

Next up, grab the format of the sound with AudioInputStream.getFormat(). This will provide you with a lot of valuable information about the stream. If it’s a big endian stream (which most wave files are not), you might need to convert it to little endian or make proper alterations to OpenAL. I’ve glossed over this, as endian-ness is not really a part of the tutorial and there are plenty of good byte-management tutorials out there.
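If you do run into a big endian stream and want to handle it rather than bail out, the fix for 16-bit samples is a simple byte swap. This is a minimal sketch of my own, not part of the original listing:

//swap each pair of bytes so 16-bit big endian PCM becomes little endian
private static byte[] toLittleEndian16(byte[] bigEndian) {
    byte[] out = new byte[bigEndian.length];
    for(int i = 0; i + 1 < bigEndian.length; i += 2) {
        out[i] = bigEndian[i + 1];      //low byte first
        out[i + 1] = bigEndian[i];      //then the high byte
    }
    return out;
}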

I’ve elected to use format to check for the mono/stereo status (more are actually possible), and whether the sound is 8-bit or more frequently 16-bit. (Technically 32- or even 64- bit sound is possible; but there is actually a resolution to the cochlea of the ear, and you’re not going to bump into that outside of labs with very funny looking equipment. Even Blu-ray doesn’t go above 24-bit. Seriously, there’s generally just no point in bothering.)

Afterward, we load the stream into a byte array (I’m using IOUtils for this for brevity, but you can do it however you like), and the byte array into a ByteBuffer. Flip the buffer, and punch it over to OpenAL, which will take care of the rest of the work with it. Afterwards, we will eventually need the length of the audio stream, so calculate it as shown and send it back to the calling method.

After the buffer’s been created and the length of it is known, we’ve got to create a source for it! This is where most of the cooler built-in effects show up. alGenSources() creates a framework for the source; alSourcei(source, AL10.AL_BUFFER, buffer.get(0)) ties it to the sound buffer. You’ll also see that I set up AL_GAIN and AL_PITCH, which are fun to play with.

You’re almost done!

To actually play the buffer, you use the source. alSourcePlay(source) starts it. After that, I have the Thread sleep for the calculated length of the sound, just so we have time to hear it. At the end, I call alSourceStop(source) to demand an end to the source.

Lastly, I delete all sources. You might also want to delete devices, if you’ve done anything silly with them; this is very low-level access. You now have everything you need to load audio into your games and programs, and if you happen to bump into an SPI for a preferred format, it will now also be enough to get you going on OpenAL as well.

 
Posted on July 4, 2016 in Java, Programming

A Studio, A Temple

I have a beautiful place carved out of the emptiness that was before. Two desks, one metal, the other dark cherry, formed into an “L”, my desktop on one and my Raspberry Pis, electronics, and embedded systems on the other. A space for my coffee, two surge protectors, an X-Box controller for the times when a mouse doesn’t do the job. A top-notch soldering pen, poised on the glass desk between my two monitors, unplugged and with plenty of space for safety of course.

This place used to be a living room, which we did little living in. I’ve adopted it, and adapted it, into a workspace. The thing about a studio is that it is, by definition, a temple to one’s mind. Nothing goes here that I wouldn’t have bouncing around in my head, whilst I’m trying to actually get something done. This place is my mind space.

I have a whiteboard on the wall now; four feet by three feet, with a complete collection of four-color markers (two black, one each in red, green, and blue) and an eraser, with a cleaning spray. I do use it. I’ve been mapping my thoughts to it for some time. It’s good when a paper pad (which every engineer should, still, always have) just isn’t enough. It doesn’t have the advantage of graph paper, but some occasions require something more than a note. Right now, I’m weighing the advantages and disadvantages between using LWJGL or JavaFX for a programming project. I would not have found it to be as easy without the marker board.

The floor bothers me. It’s an awful blue carpet, one which may never have been that attractive and hasn’t gotten any better with age. I’m hoping to replace it with some stone tile, something in a nice tan color. Not just linoleum, nothing too cheap. That would be reckless and self-sabotaging; I can wait to afford it. A nice wheat color would blend well with the furniture. The walls are a subtle greenish white, hard to tell in the lamplight late in the evening. I might paint them, it wouldn’t take long. Something bright, nothing that would contrast with the flags and the artwork hanging on them, or the statues and idols poised throughout the shelves.

When I enter this space, I become someone new; someone I need to be. I have OpenGL/CL/AL projects going on the desktop, bioelectrics going on the steel desk, and little room for doubt or distraction. My office used to be a plastic desk in the kitchen, where I would pound out every ounce of inspiration my mind had until I ran out of strength. I’m stronger in here. This place is, indeed, a sacred one to me.

 
Posted on January 26, 2016 in Innovation, State of the Moment

Never Stop Running

You know that burning feeling you get, in the center of your chest, your very core, when you just need to get something magnificent done? Not just a frequent thing like doing laundry, or cleaning the house, but something life vindicating? Because I’ve got that lately. I’ve spent the past month taking care of all of the heat that the part-time is getting, purely for the sake of this; January should be just boring enough to finish everything off.

I say I’m a systems engineer, but generally only when I want to change subjects. The long answer is that I build machines that build universes. I have a degree that redefined what it means to be “hard-earned”; in the fields of Physics and Neuroscience. I’ve been programming since I was a tyke. I’ve been writing since I was ten years old. All of this ultimately accumulates toward the same end goal. The whole point of building simulators is to answer “what if”. Stories, games, the entertainment of the future; it’s all in systems and simulation. Everything is and always has been about that.

Back in the 1960s and 70s, before the personal computer was standing up and walking on two feet, aerospace companies like Boeing used to build tiny scale models of their aircraft before the actual prototype was ever constructed. The idea was: given that a specific part goes out and needs to be replaced on such-and-such an aeroplane, what are we going to have to pull out of the way to get to it? What would be the cost model? If half of the aeroplane had to be pulled apart to get to a specific gearbox, then the lifetime of that gearbox might be the lifetime of the aeroplane. The design might be too expensive to fix.

Were these micro-models expensive? Absolutely. However, they were much cheaper than figuring this out only after the aircraft was built. They were worth every penny, and every replacement model was worth every penny in turn. I look at this chop-shop job, and I remind myself that. It’s my funding and my micro-model.

Yesterday, I finished off the better part of a detailed three dimensional collision detection system, with an outline to covering four dimensions if the need ever arises. It’s as modular and expandable as it can get. It was harder than it sounded, it was twice as much fun, and it’s completely self-validating. When I’m done with this, all I’ll need to worry about is penning, sculpting, composing, and storyboarding.

That, my friends, is the best Christmas present ever. Happy Solstice!

[Note: image is a rambled selfie with tonight’s dessert, an orange chocolate mousse with raspberries and freshly whipped cream]

 
Posted on December 26, 2015 in State of the Moment, Uncategorized


The Marvel DubSmash War Of SDCC 2015 – When Agents Collide, Everyone Wins!

The Insightful Panda

This past Season on Agents Of SHIELD and Agent Carter, we saw a lot of ‘almost’ Civil Wars: Coulson’s S.H.I.E.L.D. vs Gonzales’ S.H.I.E.L.D., Agent Carter vs the SSR. Thankfully those two resolved peacefully and it seemed like we’d have to wait for Captain America: Civil War to see the next battle. Well, we were wrong, because a new war brewed this past weekend at San Diego Comic Con when all these Agents met up and the fists – er, lip syncing – flew! DubSmash War!

At approximately 8:08 PM on Jun 10th – sometime after the Marvel TV Panel – the challenge was issued by Clark Gregg (Agent Phil Coulson) and Chloe Bennet (Agent Skye/Daisy).

What was this strange new battlefield? Hayley Atwell (Agent Carter) and James D’Arcy (Edwin Jarvis) were up for the challenge, but first wanted to practice a little…

… and then, officially accepted the challenge 2 hours later that…

View original post 407 more words

 
Posted on July 13, 2015 in Uncategorized

Google Deep Dream ruins food forever.

Giger, rest in peace. He would have had so much fun with Deep Dream.

Ken Vermette

Google Deep Dream is an interesting piece of AI software which looks for patterns in pictures, much like humans may look for patterns in clouds. Deep Dream has been trained to find a few things, like eyes, animals, arches, pagodas, and the most fascinating part is that Deep Dream can also spit out what it “saw”. Then Google opened Deep Dream to the public and people started loading tonnes of images into the system, and when you combine food with Deep Dream it turns into the stuff of nightmares.

RUN NOW OR FOREVER RUIN FOOD FOREVER! Here’s pictures of food turned to ghoulish nightmare-fuel courtesy of Deep Dream;

Via Steve Kaiser

Nope. NOPE. Great start. Never eating takeout again. At least nothing bad can happen to the humble doughnut.

Duncan Nicoll, thank you. Via Ibitimes

GREAT. FANTASTIC. I didn’t like doughnuts anyway. ARE THOSE LEGS?

Ibitimes also had this. Spaghetti & nightmares.

View original post 98 more words

 
Posted on July 11, 2015 in Uncategorized

Apache Commons DecompositionSolvers

Jesus it’s been too long since I got back to this!

Anyway, right: the DecompositionSolver.

Intro / Why-I-Need-To-Worry-About-This

Linear algebra exists for a reason; namely, us. Suppose we’re attempting to find the coordinate values which, under a certain transform, become a specific value. Let’s keep it simple and call it:

 x + 2y + 3z = 1
2x + 4y + 3z = 2
4x           = 3

As I’m sure you can remember from grade school, you have the same number of equations as unknowns, so it is almost certainly solvable. We just subtract two of the first equation from the second, four of the first equation from the third, four of the second from the third, one of the second from the first, and a quarter of the third from the first. Then we maybe divide the third by eight and the second by three, and presto,

x = 3/4
y = 1/8
z = 0

Unfortunately, as programmers, we both know that this is much easier done in practice than in theory; and when you’re automating a task, a working theory is the only thing that really counts.

So, those of you who have already taken linear algebra (quite possibly all of you) may be familiar with a much easier way of representing this problem:

┌1 2 3┐┌x┐   ┌1┐
│2 4 3││y│ = │2│
└4 0 0┘└z┘   └3┘

A decomposition basically solves this, through a sequence of steps on both sides that reduces the original matrix to an identity matrix, while having the right-hand matrix undergo the same operations. This is commonly written as an augmented matrix, like so:

┌1 2 3│1┐
│2 4 3│2│
└4 0 0│3┘

Matrix reduction is a heck of a lot more straightforward than the nonsense I spouted a few paragraphs back, though going into its details is a bit off topic here. Our final matrix, after the reduction, looks like this:

┌1 0 0│3/4┐
│0 1 0│1/8│
└0 0 1│ 0 ┘
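If you’re curious what that reduction looks like when spelled out, here’s a bare-bones Gauss-Jordan sketch of my own (partial pivoting only, none of the other numerical niceties); it’s exactly the bookkeeping that the decomposition classes below handle for you:

//reduce an augmented matrix [A|b] in place until the left block is the identity
static void reduce(double[][] aug) {
    int n = aug.length;
    for(int p = 0; p < n; p++) {
        //partial pivot: swap in the row with the largest entry in column p
        int best = p;
        for(int r = p + 1; r < n; r++)
            if(Math.abs(aug[r][p]) > Math.abs(aug[best][p])) best = r;
        double[] tmp = aug[p]; aug[p] = aug[best]; aug[best] = tmp;

        //scale the pivot row so the pivot becomes 1
        double pivot = aug[p][p];
        for(int c = 0; c < aug[p].length; c++) aug[p][c] /= pivot;

        //eliminate column p from every other row
        for(int r = 0; r < n; r++) {
            if(r == p) continue;
            double f = aug[r][p];
            for(int c = 0; c < aug[r].length; c++) aug[r][c] -= f * aug[p][c];
        }
    }
}

Feed it {{1, 2, 3, 1}, {2, 4, 3, 2}, {4, 0, 0, 3}} and the right-hand column comes back as 3/4, 1/8, and 0, matching the result above.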

How Do We Do This in Java?

Not just Java, actually; this is specifically about the Apache Commons Math3 decomposition solver interface.

One of the tricks with reduction is that there are a lot of different, equally effective, ways to go about it; and like any other algorithm, the efficiency depends, in large part, on the initial state of your matrix. My personal favorite is the LU Decomposition. (Or, if you prefer a link that isn’t a video, look here.)

First I recommend making a Maven project out of your Java project, presuming that it isn’t already fitting that form factor. Afterwards, open up pom.xml, and add this:

<dependencies>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-math3</artifactId>
        <version>3.5</version>
    </dependency>
</dependencies>

right after the close of the build tag. Your project is now pulling Apache Commons Math3 classes from across the internet. Later on, you may want the version number to be a bit higher; for now I’m using version 3.5.

So, you’ll note that you have access to a host of new classes, all in some subpackage of org.apache.commons.math3. Import org.apache.commons.math3.linear.* into your class file.

We can solve the above problem by creating a RealMatrix of the initial matrix, potentially like so:

RealMatrix matrix = new Array2DRowRealMatrix(new double[][]{
    {1.0, 2.0, 3.0},
    {2.0, 4.0, 3.0},
    {4.0, 0.0, 0.0}
});

But don’t get me wrong, there are literally dozens of ways to create a RealMatrix.

Next, create a RealVector, describing the other side of the equation, perhaps like so:

RealVector vector = new ArrayRealVector(new double[]{
    1.0,
    2.0,
    3.0
});

We now have a matrix and vector representation of the two sides of our equation.

Working with RealMatrix and RealVector

If you’re an experienced programmer, you probably expect some kind of Command Pattern to show up next. It’s certainly what I would do, if I needed to duplicate the exact operations in the exact order on more than one piece of base data. Fortunately, something like it has already been implemented by Apache.

If you look up the Apache Commons Math3 javadocs, you’ll notice that while RealMatrix has a lot of handy operations, they generally just involve polling for data, not actually operating on it. Commons has made the wise move of encapsulating operations in their own classes, rather than just their own methods. There are dozens of other classes, such as MatrixUtils (remember that one!), which both generate and operate on RealMatrix and RealVector instances.

In this instance, turn to DecompositionSolver. It’s meant for tasks just like our own, and there are many implementations to choose from. As I said, my preference is LUDecomposition, but it is only capable of handling square matrices. Since our matrix is square, that’s fine; in other cases, when your matrix doesn’t fit the profile, look through EigenDecomposition, SingularValueDecomposition, or some other utility.

For LUDecomposition, we’ll want to do something like this:

DecompositionSolver solver = new LUDecomposition(matrix).getSolver();

The work has already been done in that one initialization: LUDecomposition doesn’t just store the matrix as a property; it determines from it the exact sequence of operations necessary to reduce it to an identity matrix.

Once you have your solver, you can get your final right-hand vector via:

solver.solve(vector);

which will provide you with:

┌3/4┐
│1/8│
└ 0 ┘
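One nice property worth knowing: the solver is reusable. Since the factorization was computed up front, you can solve against as many right-hand sides as you like without redoing the decomposition. A quick sketch, using the matrix and solver from above:

//the same factorization serves any number of right-hand sides
RealVector x1 = solver.solve(new ArrayRealVector(new double[]{1.0, 2.0, 3.0}));
RealVector x2 = solver.solve(new ArrayRealVector(new double[]{0.0, 1.0, 0.0}));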

Final Source Code

Here’s a working example of how such a program might work.

package oberlin.math3;

import java.io.*;
import java.util.*;

import org.apache.commons.math3.linear.*;

public class MatrixReducer {
    
    public static void main(String...args) {
        new MatrixReducer();
    }
    
    public MatrixReducer() {
        try(BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(System.out));
                Scanner scanner = new Scanner(System.in)) {
            writer.write("\nEnter first row of three numbers: ");
            writer.flush();
            
            RealVector vector1 = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});
            
            writer.write("\nEnter second row of three numbers: ");
            writer.flush();
            
            RealVector vector2 = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});

            writer.write("\nEnter third row of three numbers: ");
            writer.flush();
            
            RealVector vector3 = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});
            
            
            //create matrix
            RealMatrix matrix = MatrixUtils.createRealIdentityMatrix(3);
            matrix.setRowVector(0, vector1);
            matrix.setRowVector(1, vector2);
            matrix.setRowVector(2, vector3);
            
            //get other side
            writer.write("\nEnter vector on right side (3 entries):");
            writer.flush();
            
            RealVector vector = new ArrayRealVector(new double[]{scanner.nextDouble(), scanner.nextDouble(), scanner.nextDouble()});
            
            
            writer.write("Solving...");
            writer.flush();
            
            DecompositionSolver solver = new LUDecomposition(matrix).getSolver();
            matrix = solver.solve(matrix);
            vector = solver.solve(vector);
            
            writer.write("Solution: \n");
            writer.flush();
            
            writer.write("┌" + matrix.getEntry(0, 0) + " " + matrix.getEntry(0, 1) + " "
                    + matrix.getEntry(0, 2) + "┐┌x┐   ┌" + Double.toString(vector.getEntry(0)) + "┐\n");
            writer.write("│" + matrix.getEntry(1, 0) + " " + matrix.getEntry(1, 1) + " "
                    + matrix.getEntry(1, 2) + "││y│ = │" + Double.toString(vector.getEntry(1)) + "│\n");
            writer.write("└" + matrix.getEntry(2, 0) + " " + matrix.getEntry(2, 1) + " "
                    + matrix.getEntry(2, 2) + "┘└z┘   └" + Double.toString(vector.getEntry(2)) + "┘\n");
            
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
 
Posted on June 30, 2015 in Java, Programming

Famous Works of Art Improved by Cats

 
Posted on April 5, 2015 in Uncategorized

Nothing’s Scarier than Imminent Success.

It’s been an interesting few weeks, but then, it’s also been an interesting few months. On the other hand, I could probably continue that line of thought, until the time span was so broad there was nothing left to hold it relative to.

The last few years have been very rough, but also fortuitous. I actually got my degree in Physics and Neuroscience, with the idea that I would someday work in neuroprosthetics. (Note to misguided Grammar Nazis: the subjects are capitalized as proper nouns when they are also unique department names. You’re welcome.) Unfortunately I also graduated in 2006, the beginning of a very dark time, and have yet to hold a physics job of any kind. The closest I’ve gotten to neuroscience was a brief stint as a substitute teacher for some elementary school special needs classes.

I basically jumped, for lack of any option, right back into programming when I got out. This isn’t to say that I haven’t had quite a few jobs that were once thought to be meant for teenagers along the way. At least five pharmacies, a hardware store, a few groceries. I am in the woeful state of having a mother who has a complete detachment from the workforce that I am confronted with; a father who really helps but has limited understanding of how one gets published today, and the limitations placed on the amount of money that I can actually make; and a brother who is nought but treacherous.

To give you an idea of what I mean by “treacherous”, I worked for him for eight months after he begged me for help on a cloud computing company that he wanted to put together, deferring applying to grad school. My sister and mother had already taken the hint and ceased contact with him, I naïvely thought that perhaps I could reunite my family through further contact and arbitration. He provided a long list of people who were supposedly behind him on it. In eight months, I could get direct contact with none of those people. Additionally, I discovered that his software patent was entirely fabricated. At the end of those eight months, he “terminated” me, I never saw a dime of the promised pay.

I quietly took note of the extent of his evil, hung up the phone, and disowned him. In the mean time, I took the GREs and got within the uncertainty limit of a perfect score on the math section. I applied to MIT, and had everything going for me, until I discovered that during that time, during my work for this play-house company, along with a not-yet-mentioned struggle with neuropathy from a medication I was on, my debt with my student loans reached a critical point. I had defaulted. My school would no longer provide transcripts. The days got even darker.

Every darkness does have its dawn, if you’re willing to work and wait long enough. I hooked up with the wonderful Sparo Vigil, here in New Mexico, and grabbed a part time job working for Home Depot. It wasn’t a high point of my life, but it made me enough to get along, and even pay back a small part of my student loans. I continued to program, working for myself, and established a sequence of frameworks to make the job easier. My cumulative experience in education and software led me to game development, and my training in the scientific method showed me a path to creating an ideal environment, and sequence of products built in that environment, with a legitimate positive change brought about in the world.

This framework is together; it works beautifully. Today, I’m going to use it for the first time, and create a finished (if not market-worthy) product. It will lift me out of the bog that I am in. I have to admit, that’s a little frightening.

If things had continued to be low, and dirty, and hard, then at least I would have a response for it. I would have a plan for how to move forward, to keep my head above the water. My ideals have been set much higher than that, though. I left the Home Depot job some time ago, maybe a year; I have Sparo to thank for keeping me afloat while I worked on the framework and my writing. Things are about to be much better, and I’m not sure how to take that.

It’s not that every little detail is finished, it’s that I’m at version 1.0. I have a system by which I can rapidly go from idea to product, possibly in an afternoon (though early on, I imagine the exception will be the rule). The rest is going to be an extensive amount of linear algebra and differential geometry, digital signal processing techniques and their implementation, modernizing of my design patterns, and writing a bridge to the Steamworks API. After that, publishing, and marketing. Later on, probably expansion of the framework interface into other languages, like Python, Ruby, and Scala. I’m looking forward to all of these things.

I’m looking forward.

I had almost forgotten how thrilling that can be.

 
Posted on March 30, 2015 in State of the Moment