
Author Archives: Mick Oberlin

Texturing with Blender Cycles


This is something that was bugging me for a little while. Cycles offers leaps and bounds in efficiency over the classic Blender rendering engine, but the downside is that we all need to learn how to use it in a new way.

I’m going to cover the simplest thing I can think of first, which is texturing. It’s targeted at people who have never textured a thing before in their lives; so if you’re already familiar with a section, feel free to skip ahead.

Blender is an excellent 3D modeling tool (and incidentally a decent video editor as a side effect), but it isn’t meant for 2D work, and that’s what textures either are or ultimately rely on. Additionally, most of us are GIMP, Adobe Photoshop, or Inkscape fans and aficionados to begin with, and an additional image editor just isn’t necessary. So, Blender lets you use your image editor of choice, and settles for handling proper mapping of images to UV coordinates.

UV Coordinates

So, what is a UV coordinate?

It’s safe to say that we’re all quite familiar with X, Y, Z, rho, phi, theta, and their related coordinate systems; but it’s bad form to refer to a coordinate of a texture via X and Y. Yes, they’re arguably two dimensional, but XYZ is used for points in space.

You don’t know, on the basis of a texture alone, where that point in space is going to be; nor should there be any nominal confusion between the location of a point on a texture and the location of that point in 3D space (which, yeah, actually comes up a lot).

Additionally, as you come to understand textures better, you’ll discover two things. The first is that their coordinates are arguably six dimensional–you need to consider color and alpha, as well. But that’s the easy and boring part. The other note is that you can do a lot more with them than color a mesh; they’re just as useful for bump mapping, light mapping, expressing frequency density, even animation. With a proper set of tools, there really isn’t any limit to what you can do with a texture.

So, nominally, U is a horizontal coordinate of an element on a texture, and V is a vertical component.

As a side note, you will occasionally find texture coordinates under the names S and T. The difference is a matter of convention, but in most cases this refers to whether the vertical component–V or T–moves “upward” or “downward”. The V axis generally faces “down” the image, the T axis faces “up”. However, if you understand why “up” and “down” are in quotation marks here, and recognize their arbitrary nature, then you already know how silly this can be.

Suffice it to say that they are a respectable alternative, a flag that you might need to do some rotating or inverting of your image to get it to behave, and in the rare case of needing third and fourth position components, S and T are preferred.
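If it helps to see that as plain arithmetic, here is a tiny sketch in ordinary Python (the image size and sample coordinates are made up for illustration); the only thing the U/V-versus-S/T argument changes is which way the vertical axis is measured:

    def uv_to_pixel(u, v, width, height, v_from_bottom=True):
        """Map a normalized (u, v) pair in [0, 1] onto pixel indices,
        with row 0 at the top of the stored image."""
        x = int(u * width)
        y = int((1.0 - v) * height) if v_from_bottom else int(v * height)
        return x, y

    print(uv_to_pixel(0.25, 0.75, 512, 512))                       # (128, 128)
    print(uv_to_pixel(0.25, 0.75, 512, 512, v_from_bottom=False))  # (128, 384)

Same pair of numbers, two different rows; that is the whole disagreement in a nutshell.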

Unwrapping a Mesh

I’m skipping over the techniques of creating a 3D shape (mesh, here) in Blender, as it’s somewhat tangential to the subject.

Remember to use the Cycles rendering engine, as it is all quite different from the classic Blender rendering engine.

We’ll start with a basic cube; the starter file will work.

Drag from the top right corner of your 3D view, until you have a new sub window. From its bottom left menu, select “UV/Image Editor”. What you have will initially be fairly boring; a simple rectangle in the middle of the cell.

This is the direct view of your texture, in UV coordinates. Every face of your shape (in this case, the cube) samples its coloring from a position within the UV window. Of course, your cube isn’t even textured yet, so we’ll get started with something called unwrapping.

Unwrapping

Remember when you were in grade school, and you learned to create a cube from a simple piece of paper? Just this classic drawing, a pair of scissors, and a little tape?

[Image: the classic cross-shaped cube cut-out]

That was basically your first UV unwrap.

If you wanted to, you could have sketched a texture onto the image, and presuming you knew which edges were going to wrap to which, you could “texture” that paper cube.

UV unwrapping really isn’t any different than that; save that it’s digital, and with the Cycles material editor you can do much cooler things with it.

To unwrap, go to your 3D view, switch to edit mode, and hit “U”. A menu will pop up.

[Image: Blender Unwrap menu]

These are all different methods of UV unwrapping. I’m literally not even familiar with every method by which this can be done–it extends well beyond the menu–but I am going to go over some of my favorites.

Smart UV Project

This will basically unwrap your shape so that you have every face visible.

[Image: Smart UV Project result in the UV/Image Editor]

It is particularly helpful for simpler meshes, like our cube, or some UV spheres. Every modification made to each square’s (or polygon’s) image will be “painted” onto your mesh.
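If you would rather drive that same unwrap from a script instead of the U menu, the operator is exposed through Blender’s Python console as well; a minimal sketch against the 2.7x API (the object name and the angle/margin values are assumptions of mine, not anything the tutorial depends on):

    import bpy

    cube = bpy.data.objects["Cube"]
    bpy.context.scene.objects.active = cube      # make it the active object (2.7x API)

    bpy.ops.object.mode_set(mode='EDIT')         # unwrapping happens in Edit Mode
    bpy.ops.mesh.select_all(action='SELECT')     # every selected face gets an island
    bpy.ops.uv.smart_project(angle_limit=66.0, island_margin=0.02)
    bpy.ops.object.mode_set(mode='OBJECT')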

It’s easy to see that this default unfolding is quite different from the example above, which also works; unfortunately, if you tried to cut a cube out of a flat piece of paper like this, I imagine you would have quite a bit of trouble folding it. (Cycles couldn’t care less.)

You’ll notice that the outline shape is styled exactly like a mesh; and in fact, you can grab and drag every point in it. Once a texture is assigned to it, any modification of a particular outline will alter the nature of the image painted onto the corresponding face; you basically have free rein with your texture.

This is true because Blender uses a technique known as lerping (linear interpolation, for long) to determine the appropriate color for each pixel of your render. It basically lets you find a happy medium between one color and another, among other things. 3D programmers do it roughly as often as they click the mouse, so clearly it’s useful and important.
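If you have never seen it spelled out, lerping is nothing more exotic than a weighted average; a quick plain-Python sketch (the colors are invented for the example):

    def lerp(a, b, t):
        """Linear interpolation: t = 0 gives a, t = 1 gives b."""
        return a + (b - a) * t

    def lerp_color(c0, c1, t):
        """Blend two RGB colors channel by channel."""
        return tuple(lerp(a, b, t) for a, b in zip(c0, c1))

    # Halfway between a pure red texel and a pure blue one:
    print(lerp_color((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))   # (0.5, 0.0, 0.5)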

However, a complete unwrap can be a nightmare for 2D artists, especially when it comes to complex meshes (like, say, the human body). For that, we have other methods.

Project from View

…and Project from View (Bounds).

Try hitting U again over the 3D view in edit mode, and this time, select Project from View. You’ll notice that the point map projected over the texture is a literal copy of the camera view (in this case, that means your view) of the mesh, flattened out. This has drawbacks, but it can be a godsend to 2D artists.

Ever see one of those paintings on a street or sidewalk which have an absolutely convincing illusion of depth (Hongkiat keeps a whole gallery of 3D street art)? Then, as you walk around or look at it from another angle, the depth disappears? That’s analogous to Project from View.

On exporting (which I’ll get to in a second), an artist can paint/draw/render whatever they feel the character should look like, from that angle. They will get a blindingly realistic portrayal, from that angle, in the final render. The drawback is that the projection has to stretch (from lerping) at increasingly radical angles, and if you aren’t careful, this can kill the magic in the final render.

The other thing worth noting is that, by default, the texture painting punches through to the other side of the object as well; but that isn’t as much of a problem as you might think. Most of your edit-mode vertex manipulation tools also work on the UV editor; and you also have the ability to manipulate the texture further in the Node editor, which we’ll cover later on.

The only difference that Project from View (Bounds) makes is that the projection will be stretched out to cover the entire bounds of the UV image, which is usually a good thing. Wasted space in the image is typically still stored, and it isn’t helpful in the end. However, if memory is a concern, then it can be a good idea to keep the projection shrunk down to a minimum and simply crop your image accordingly.
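For reference, both flavors can be scripted too; a rough sketch under the same 2.7x assumptions as before. Note that these operators read the current 3D View, so they behave best when run from the 3D View’s operator search rather than a detached console:

    import bpy

    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')

    # Project from View: flatten the selection exactly as the viewport sees it
    bpy.ops.uv.project_from_view(scale_to_bounds=False, correct_aspect=True)

    # Project from View (Bounds): the same, stretched to fill the whole UV space
    bpy.ops.uv.project_from_view(scale_to_bounds=True, correct_aspect=True)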

Exporting the UV Image

Under the UV/Image Editor’s UV menu, there is an option, titled Export UV Layout on Blender 2.78. This will save your UV layout–inclusive of the vertices and segments–as a 2D image.

The default is to save as a PNG. However, in the bottom left you’ll see that SVG and EPS (Encapsulated PostScript) are also available. That should be handy for all you vector-graphics jockeys.
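(If you would rather script the export than click through the menu, the same entry maps onto an operator; a sketch, where the output path, size, and opacity are placeholders of mine:)

    import bpy

    # Run in Edit Mode with the faces you want in the layout selected.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')

    bpy.ops.uv.export_layout(
        filepath="//cube_uv_layout.png",   # "//" means relative to the .blend file
        mode='PNG',                        # 'SVG' and 'EPS' are the other choices
        size=(1024, 1024),
        opacity=0.25,
    )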

Go ahead and export your cube mapping. Open it in your favorite graphics editor.

You’ll find that all of the edges and vertices have been imprinted on the image; it is absolutely ready for you to paint a texture.

I can’t offer many texturing tips here–it’s too far from the core focus of the posting. However, I might in the future.

Make your texture and save it.

Importing the UV Image

This is where we finally make the magic happen.

There are effectively two parts to applying a texture to a mesh. The first is in the Properties pane under the texture panel.

[Image: the Texture panel in the Properties pane]

This is all assuming that you still have your cube selected. You’ll note the square image at the top of the panel, the one with the checkerboard pattern on it? That’s texturing.

You’ll also notice the ubiquitous open image box, which I had my mouse over in this instance. Click on that, and guide it to your texture file. There are a number of options beneath it, but the most important two are Color Space, and Bounds Handling. The others can certainly be helpful, but they’re a bit beyond the scope of this tutorial. (I recommend that you either play with them until you’ve figured them out, like I would; or find a more specific tutorial.)

Color Space

This simply informs Blender of whether you’re using a color image, or an image of non-color data. More than anything, this affects the way light plays on the surface of it.

I’m bringing it up mostly because the average uninformed user thinks that non-color data means black-and-white; which it doesn’t. (Try it out and see.) By default, for color data, Cycles converts the image from the traditional sRGB color space into a linear color space (built on the Rec. 709 primaries). If your image actually holds data rather than color, you probably don’t want that conversion; Non-Color Data prevents it. If you look at the result of the exchange in a render, you’ll find that non-color data lacks a certain feeling of depth.

Bounds Handling

This is basically how your texture behaves when the mapped geometry runs past the edge of the image. The options are clip, extend, and repeat.

Clip basically means stop rendering texture there, and return to the default for out-of-bounds points. This can be quite handy for, say, decals.

Extend means smear the border pixels of the image outward to cover anything beyond it.

Repeat, the default, means start over from the opposing edge of the texture.
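Both settings are also exposed to Python, on the Image Texture node that we are about to build in the Materials section below; a small sketch against the 2.7x node API (the image path and the default material and node names are assumptions):

    import bpy

    img = bpy.data.images.load(bpy.path.abspath("//cube_texture.png"))  # your painted texture

    mat = bpy.data.materials["Material"]                 # the cube's material
    tex_node = mat.node_tree.nodes["Image Texture"]
    tex_node.image = img

    tex_node.color_space = 'COLOR'   # or 'NONE' for non-color data (bump maps and the like)
    tex_node.extension = 'REPEAT'    # or 'EXTEND', or 'CLIP'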

Materials

You might notice, if you skipped ahead and tried rendering, that your mesh doesn’t have a texture yet. Well, that’s because we haven’t told it what to do with that image. As I said earlier, there are countless possibilities.

Go to your materials tab–that’s the one right next to texture, with the checkered sphere on it–and add a new material. Enable nodes with the giant button under surface, and for the moment, choose “Diffuse BSDF” under the shader menu.

Remember that corner that you dragged to create the new cell in the blender window? We’re going to do it again; this time, I suggest you do it from the UV image editor cell’s top right corner, to the left. You’ll get another cell. Switch it (via the bottom left menu) to Node Editor.

This is where a lot of the changes with Cycles, as far as process goes, come into play. You can build your shaders with a spaghetti graph. Many of the controls from the 3D view should still be available. Hover over an empty spot, hit Shift+A, and select Texture and then Image Texture.

Click on the image icon on the left of the Image Texture node, and you’ll get a list of available (loaded) images. Pick out your texture image.

Drag from the yellow circle on the right of Image Texture to the yellow circle on the left of Diffuse BSDF, and the shader’s color will now be driven by the image texture.

However, your cube just went black, didn’t it? WTH, right? Not so much; the problem is that, like I said earlier, images can also be used for animations, which include panning and scaling. So, Cycles needs to know which UV coordinate it’s generating a color for.

Shift+A in an exposed area and select Input, then UV Map. Drag from the innocuous-looking blue circle on the right of the UV Map node to the blue circle on the left of Image Texture (labeled “Vector”).
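If you ever need to rebuild that little graph without clicking, the whole thing can be wired up in Python as well; a sketch of roughly what we just did by hand (the socket and node type names are Cycles defaults, the image path is a placeholder):

    import bpy

    bpy.context.scene.render.engine = 'CYCLES'

    mat = bpy.data.materials.new("CubeMaterial")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()                                    # start from an empty graph

    uv_node   = nodes.new('ShaderNodeUVMap')         # Input > UV Map
    tex_node  = nodes.new('ShaderNodeTexImage')      # Texture > Image Texture
    diff_node = nodes.new('ShaderNodeBsdfDiffuse')   # Shader > Diffuse BSDF
    out_node  = nodes.new('ShaderNodeOutputMaterial')

    tex_node.image = bpy.data.images.load(bpy.path.abspath("//cube_texture.png"))

    links.new(uv_node.outputs['UV'],     tex_node.inputs['Vector'])
    links.new(tex_node.outputs['Color'], diff_node.inputs['Color'])
    links.new(diff_node.outputs['BSDF'], out_node.inputs['Surface'])

    bpy.data.objects["Cube"].data.materials.append(mat)   # hand the cube the new material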

[Image: Node Editor with UV Map, Image Texture, and Diffuse BSDF nodes connected]

Now you’re ready to go. Make sure your lighting is right (I suggest an area lamp tilted to face the cube, tuned to around 2000 in strength), hit F12 to render an image, and observe your beautiful textured cube.

[Image: the textured cube, rendered]


Posted on April 20, 2017 in 3D Modelling

 


Mint-Plum Sauce Lamb Chops

I gotta admit off hand, most of this recipe is about the plum sauce.

Mint-Plum Sauce

Ingredients:

  • 6 plums (no need to immediately pit)
  • shallow water
  • 1 pkg. fresh mint leaves
  • 2-3 T. honey
  • ¼ c. sherry wine
  • 1-2 t. ground ginger
  • 2 small cloves garlic, finely minced

Rinse six plums, and set them in shallow water in a saucepan (should rise no higher than an inch and a half). Drop in one sprig of mint leaves. Start burner, drop to medium to medium high, and wait for simmer. As plums cook down, stir periodically to prevent skin (which will simmer off) from sticking to the bottom of the pan. As plums soften, open with slight pressure from a set of tongs and remove pit, then dispose. (Remember that you will have a total of six pits!)

Once plums have cooked down into a homogeneous purple substance, add remaining mint leaves, then stir. Allow mint leaves to cook into the fruit sauce. Add honey, stir, and allow to cook further. Add quarter cup sherry wine (which, remember, will slightly sweeten the sauce) and bring back to a simmer. Turn to low, add ground ginger. Peel garlic, crush under a knife blade, and finely mince it; then add that to the sauce; simmer for no less than five minutes more. (Get the flavor of the garlic, which is also sweet but mildly pungent, cooked thoroughly into the bulk of the sauce.) Turn off burner, let cool.

Suggest serving over grilled lamb, with a side of quinoa and briefly steamed kale (2-6 minutes, to taste).

[Image: lamb with plum sauce]

 

Posted on March 14, 2017 in Recipes, Uncategorized

 


Effective OpenAL with LWJGL 3


Jesus Bloody Christ it’s been a while.

So, a lot of you are likely interested in developing on Java with LWJGL3 instead of LWJGL 2.9.*; as you should be. LWJGL3 has support for a lot of modern industry trends that older versions did not, such as multi-monitor support without back flipping through flaming hoops, or basically anything involving GLFW. It’s still in beta, I know, but it’s a solid piece of work and the team on it is dedicated enough to make it a reliable and standing dependency for modern projects.

Except for every now and then, when it happens to be missing some minor things. Or, more importantly, when there’s a dearth of documentation or tutorials on a new trick you’re pulling.

I can contribute, at least in part, to both of those.

OpenAL is the audio world’s equivalent to OpenGL; it’s a sophisticated and sleek interface to sound hardware. Many common effects and utilities, such as 3D sound, are built into it directly; and it interfaces sublimely with code already designed for OpenGL. Additionally, it’s also a very tight interface that does not take long at all to learn.

In the past, I would have suggested using JavaSound for Java game audio, which is also a tight API, but it lacks these features. Most major audio filters have to be built into it rather directly and often by your own hand; and there’s no official guarantee of hardware optimization. However, what LWJGL3’s OpenAL interface now lacks can easily be supported by readily-present JavaSound features; such as the audio system’s file loader.

This entry is on, step by step, how one would do such a thing.

Let’s start with a basic framework. I’ve tried to keep a balance between minimal dependencies and staying on-topic, so I’ll suggest that you have both LWJGL3 (most recent version, preferably), and Apache Commons IO, as dependency libraries.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;

import org.apache.commons.io.IOUtils;
import org.lwjgl.BufferUtils;
import org.lwjgl.openal.AL;
import org.lwjgl.openal.AL10;
import org.lwjgl.openal.ALC;
import org.lwjgl.openal.ALC10;
import org.lwjgl.openal.ALCCapabilities;

import static org.lwjgl.openal.ALC10.ALC_FALSE;
import static org.lwjgl.openal.ALC10.ALC_REFRESH;
import static org.lwjgl.openal.ALC10.ALC_SYNC;
import static org.lwjgl.openal.EXTEfx.ALC_MAX_AUXILIARY_SENDS;

class Lesson {
    public Lesson() throws Exception {
        //Start by acquiring the default device
        long device = ALC10.alcOpenDevice((ByteBuffer)null);

        //Create a handle for the device capabilities, as well.
        ALCCapabilities deviceCaps = ALC.createCapabilities(device);
        // Create context (often already present, but here, necessary)
        IntBuffer contextAttribList = BufferUtils.createIntBuffer(16);

        // Note the manner in which parameters are provided to OpenAL...
        contextAttribList.put(ALC_REFRESH);
        contextAttribList.put(60);

        contextAttribList.put(ALC_SYNC);
        contextAttribList.put(ALC_FALSE);

        // Don't worry about this for now; deals with effects count
        contextAttribList.put(ALC_MAX_AUXILIARY_SENDS);
        contextAttribList.put(2);

        contextAttribList.put(0);
        contextAttribList.flip();
        
        //create the context with the provided attributes
        long newContext = ALC10.alcCreateContext(device, contextAttribList);
        
        if(!ALC10.alcMakeContextCurrent(newContext)) {
            throw new Exception("Failed to make context current");
        }
        
        AL.createCapabilities(deviceCaps);
        
        
        //define listener (AL_ORIENTATION takes six floats: an "at" vector and an "up" vector)
        AL10.alListener3f(AL10.AL_VELOCITY, 0f, 0f, 0f);
        FloatBuffer orientation = BufferUtils.createFloatBuffer(6)
                .put(new float[]{0f, 0f, -1f,  0f, 1f, 0f});
        orientation.flip();
        AL10.alListenerfv(AL10.AL_ORIENTATION, orientation);
        
        
        IntBuffer buffer = BufferUtils.createIntBuffer(1);
        AL10.alGenBuffers(buffer);
        
        //We'll get to this next!
        long time = createBufferData(buffer.get(0));
        
        //Define a source
        int source = AL10.alGenSources();
        AL10.alSourcei(source, AL10.AL_BUFFER, buffer.get(0));
        AL10.alSource3f(source, AL10.AL_POSITION, 0f, 0f, 0f);
        AL10.alSource3f(source, AL10.AL_VELOCITY, 0f, 0f, 0f);
        
        //fun stuff
        AL10.alSourcef(source, AL10.AL_PITCH, 1);
        AL10.alSourcef(source, AL10.AL_GAIN, 1f);
        AL10.alSourcei(source, AL10.AL_LOOPING, AL10.AL_FALSE);
        
        //Trigger the source to play its sound
        AL10.alSourcePlay(source);
        
        try {
            Thread.sleep(time); //Wait for the sound to finish
        } catch(InterruptedException ex) {}
        
        AL10.alSourceStop(source); //Demand that the sound stop
        
        //and finally, clean up
        AL10.alDeleteSources(source);
        

    }

}

The beginning is not unlike the creation of an OpenGL interface; you need to define an OpenAL context and make it current for the thread. Passing a null byte buffer to alcOpenDevice will provide you with the default device, which is usually what you’re after. (It is actually possible to interface with, say, multiple sets of speakers selectively, or the headphones instead of the speaker system, if you would like; but that’s another topic.)

Much like graphics devices, every audio device has its own set of capabilities. We’ll want a handle on those, as well. It’s safe to say that if a speaker can do it, OpenAL is capable of it; but not all speakers (or microphones) are created the same.

After this, OpenAL will want to know something of what we’re expecting it to manage. Note that it’s all passed over as a solid int buffer. We’re providing it with a notion of what features it will need to enact, or at least emulate; with a sequence of identifiers followed by parameters, terminated with a null. I haven’t begun to touch all that is possible here, but this attribute list should be enough for most uses.

After that, create the context, make it current, check to see that it didn’t blow up in your face, and register the capabilities. (Feel free to play with this once you’ve got the initial example going.)

So, before I get to the part where JavaSound comes in, let’s start with the nature of how OpenAL views sound. Sound, in its view, has three components: a listener, a source, and an actual buffer.

The listener would be either you or your program user; however, the program would want to know a little about your properties. Are you located somewhat to the left or right? Are you moving (or virtually moving)? I usually set this first as it is likely to be constant across all sounds (kind of like a graphics context).

Next, we have a method of my own creation that builds and registers the audio file. Forgive me for the delay, but that’s where JavaSound’s features (in the core JDK) come in, and I’m deferring it to later in the discussion. You will note that the audio buffers have to be registered with OpenAL; as it needs to prepare for the data. There’s a solid chance that you will have sound-processor-local memory, much like graphics memory, and it will have to be managed accordingly by that processor before you can chuck any data at it.

Let’s look at that audio buffer creator.

     private long createBufferData(int p) throws UnsupportedAudioFileException, IOException {
        //shortcut finals:
        final int MONO = 1, STEREO = 2;
        
        AudioInputStream stream = AudioSystem.getAudioInputStream(
                Lesson.class.getResource("I Can Change — LCD Soundsystem.wav"));
        
        AudioFormat format = stream.getFormat();
        if(format.isBigEndian()) throw new UnsupportedAudioFileException("Can't handle Big Endian formats yet");
        
        //load stream into byte buffer
        int openALFormat = -1;
        switch(format.getChannels()) {
            case MONO:
                switch(format.getSampleSizeInBits()) {
                    case 8:
                        openALFormat = AL10.AL_FORMAT_MONO8;
                        break;
                    case 16:
                        openALFormat = AL10.AL_FORMAT_MONO16;
                        break;
                }
                break;
            case STEREO:
                switch(format.getSampleSizeInBits()) {
                    case 8:
                        openALFormat = AL10.AL_FORMAT_STEREO8;
                        break;
                    case 16:
                        openALFormat = AL10.AL_FORMAT_STEREO16;
                        break;
                }
                break;
        }
        
        //guard against formats the switch above didn't recognize
        if(openALFormat == -1) {
            throw new UnsupportedAudioFileException("Unsupported channel/sample-size combination");
        }
        
        //load data into a byte buffer
        //I've elected to use IOUtils from Apache Commons here, but the core
        //notion is to load the entire stream into the byte array--you can
        //do this however you would like.
        byte[] b = IOUtils.toByteArray(stream);
        ByteBuffer data = BufferUtils.createByteBuffer(b.length).put(b);
        data.flip();
        
        //load audio data into appropriate system space....
        AL10.alBufferData(p, openALFormat, data, (int)format.getSampleRate());
        
        //and return the rough notion of length for the audio stream!
        return (long)(1000f * stream.getFrameLength() / format.getFrameRate());
    }

We’re hijacking a lot of the older JavaSound API utilities for this. OpenAL, much like OpenGL, isn’t really “open”, nor is it technically a “library”. So, having something around for handling audio data is helpful, and why bother writing our own when it’s already built into the JDK?

For JavaSound, you work with either Clips, or (more frequently) AudioInputStreams. You can read most audio file formats directly via AudioSystem.getAudioInputStream(…); in this case, I’ve elected to use a WAV format of LCD Soundsystem’s “I Can Change”, because James Murphy is a god damned genius. However, you can use anything you would like; to get it to work with this just drop it in the same source directory.

Next up, grab the format of the sound with AudioInputStream.getFormat(). This will provide you with a lot of valuable information about the stream. If it’s a big endian stream (which most wave files are not), you might need to convert it to little endian or make proper alterations to OpenAL. I’ve glossed over this, as endian-ness is not really a part of the tutorial and there are plenty of good byte-management tutorials out there.

I’ve elected to use format to check for the mono/stereo status (more are actually possible), and whether the sound is 8-bit or more frequently 16-bit. (Technically 32- or even 64- bit sound is possible; but there is actually a resolution to the cochlea of the ear, and you’re not going to bump into that outside of labs with very funny looking equipment. Even Blu-ray doesn’t go above 24-bit. Seriously, there’s generally just no point in bothering.)

Afterward, we load the stream into a byte array (I’m using IOUtils for this for brevity, but you can do it however you like), and the byte array into a ByteBuffer. Flip the buffer, and punch it over to OpenAL, which will take care of the rest of the work with it. Afterwards, we will eventually need the length of the audio stream, so calculate it as shown and send it back to the calling method.

After the buffer’s been created and the length of it is known, we’ve got to create a source for it! This is where most of the cooler built-in effects show up. alGenSources() creates a framework for the source; alSourcei(source, AL10.AL_BUFFER, buffer.get(0)) ties it to the sound buffer. You’ll also see that I set up AL_GAIN and AL_PITCH, which are fun to play with.

You’re almost done!

To actually play the buffer, you use the source. alSourcePlay(source) starts it. After that, I have the Thread sleep for the calculated length of the sound, just so we have time to hear it. At the end, I call alSourceStop(source) to demand an end to the source.

Lastly, I delete all sources. You might also want to delete devices, if you’ve done anything silly with them; this is very low-level access. You now have everything you need to load audio into your games and programs, and if you happen to bump into an SPI for a preferred format, it will now also be enough to get you going on OpenAL as well.

 

Posted on July 4, 2016 in Java, Programming

 


A Studio, A Temple

I have a beautiful place carved out of the emptiness that was before. Two desks, one metal, the other dark cherry, formed into an “L”, my desktop on one and my Raspberry Pis, electronics, and embedded systems on the other. A space for my coffee, two surge protectors, an X-Box controller for the times when a mouse doesn’t do the job. A top-notch soldering pen, poised on the glass desk between my two monitors, unplugged and with plenty of space for safety of course.

This place used to be a living room, which we did little living in. I’ve adopted it, and adapted it, into a workspace. The thing about a studio is that it is, by definition, a temple to one’s mind. Nothing goes here that I wouldn’t have bouncing around in my head, whilst I’m trying to actually get something done. This place is my mind space.

I have a whiteboard on the wall now; four feet by three feet, with a complete collection of four-color markers (two black, one each in red, green, and blue) and an eraser, with a cleaning spray. I do use it. I’ve been mapping my thoughts to it for some time. It’s good when a paper pad (which every engineer should, still, always have) just isn’t enough. It doesn’t have the advantage of graph paper, but some occasions require something more than a note. Right now, I’m weighing the advantages and disadvantages of using LWJGL or JavaFX for a programming project. I would not have found it to be as easy without the marker board.

The floor bothers me. It’s an awful blue carpet, one which may never have been that attractive and hasn’t gotten any better with age. I’m hoping to replace it with some stone tile, something in a nice tan color. Not just linoleum, nothing too cheap. That would be reckless and self-sabotaging; I can wait to afford it. A nice wheat color would blend well with the furniture. The walls are a subtle greenish white, hard to tell in the lamplight late in the evening. I might paint them, it wouldn’t take long. Something bright, nothing that would contrast with the flags and the artwork hanging on them, or the statues and idols poised throughout the shelves.

When I enter this space, I become someone new; someone I need to be. I have OpenGL/CL/AL projects going on the desktop, bioelectrics going on the steel desk, and little room for doubt or distraction. My office used to be a plastic desk in the kitchen, where I would pound out every ounce of inspiration my mind had until I ran out of strength. I’m stronger in here. This place is, indeed, a sacred one to me.

 

Posted on January 26, 2016 in Innovation, State of the Moment

 


Never Stop Running


You know that burning feeling you get, in the center of your chest, your very core, when you just need to get something magnificent done? Not just a frequent thing like doing laundry, or cleaning the house, but something life vindicating? Because I’ve got that lately. I’ve spent the past month taking care of all of the heat that the part-time is getting, purely for the sake of this; January should be just boring enough to finish everything off.

I say I’m a systems engineer, but generally only when I want to change subjects. The long answer is that I build machines that build universes. I have a degree that redefined what it means to be “hard-earned”; in the fields of Physics and Neuroscience. I’ve been programming since I was a tyke. I’ve been writing since I was ten years old. All of this ultimately accumulates toward the same end goal. The whole point of building simulators is to answer “what if”. Stories, games, the entertainment of the future; it’s all in systems and simulation. Everything is and always has been about that.

Back in the 1960s and 70s, before the personal computer was standing up and walking on two feet, aerospace companies like Boeing used to build tiny scale models of their aircraft before the actual prototype was ever constructed. The idea was, given that a specific part goes out and needs to be replaced on such-and-such an aeroplane, what are we going to have to pull out of the way to get to it? What would be the cost model? If half of the aeroplane had to be pulled apart to get to a specific gearbox, then the lifetime of that gearbox might be the lifetime of the aeroplane. The design might be too expensive to fix.

Were these micro-models expensive? Absolutely. However, they were much cheaper than figuring this out only after the aircraft was built. They were worth every penny, and every replacement model was worth every penny in turn. I look at this chop-shop job, and I remind myself that. It’s my funding and my micro-model.

Yesterday, I finished off the better part of a detailed three dimensional collision detection system, with an outline to covering four dimensions if the need ever arises. It’s as modular and expandable as it can get. It was harder than it sounded, it was twice as much fun, and it’s completely self-validating. When I’m done with this, all I’ll need to worry about is penning, sculpting, composing, and storyboarding.

That, my friends, is the best Christmas present ever. Happy Solstice!

[Note: image is a rambled selfie with tonight’s dessert, an orange chocolate mousse with raspberries and freshly whipped cream]

 

Posted on December 26, 2015 in State of the Moment, Uncategorized

 



The Marvel DubSmash War Of SDCC 2015 – When Agents Collide, Everyone Wins!


The Insightful Panda

This past Season on Agents Of SHIELD and Agent Carter, we saw a lot of ‘almost’ Civil Wars: Coulson’s S.H.I.E.L.D. vs Gonzales’ S.H.I.E.L.D., Agent Carter vs the SSR. Thankfully those two resolved peacefully and it seemed like we’d have to wait for Captain America: Civil War to see the next battle. Well, we were wrong because a new war brewed this past weekend at San Diego Comic Con when all these Agents met up and the fists (er, lip syncs) flew! DubSmash War!

At approximately 8:08 PM on Jun 10th – sometime after the Marvel TV Panel – the challenge was issued by Clark Gregg (Agent Phil Coulson) and Chloe Bennet (Agent Skye/Daisy).

What was this strange new battlefield? Hayley Atwell (Agent Carter) and James D’Arcy (Edwin Jarvis) were up for the challenge, but first wanted to practice a little…

… and then, officially accepted the challenge 2 hours later that…

View original post 407 more words

 

Posted on July 13, 2015 in Uncategorized

 

Google Deep Dream ruins food forever.

Giger rest in peace. He would have had so much fun with Deep Dream.

Ken Vermette

Google Deep Dream is an interesting piece of AI software which looks for patterns in pictures, much like humans may look for patterns in clouds. Deep Dream has been trained to find a few things, like eyes, animals, arches, pagodas, and the most fascinating part is that Deep Dream can also spit out what it “saw”. Then Google opened Deep Dream to the public and people started loading tonnes of images into the system, and when you combine food with Deep Dream it turns into the stuff of nightmares.

RUN NOW OR FOREVER RUIN FOOD FOREVER! Here’s pictures of food turned to ghoulish nightmare-fuel courtesy of Deep Dream;

Via Steve Kaiser

Nope. NOPE. Great start. Never eating takeout again. At least nothing bad can happen to the humble doughnut.

Duncan Nicoll, thank you. Via Ibitimes

GREAT. FANTASTIC. I didn’t like doughnuts anyway. ARE THOSE LEGS?

Ibitimes also had this. Spaghetti & nightmares.

View original post 98 more words

 

Posted on July 11, 2015 in Uncategorized