Thursday, August 30, 2012

Puzzle Quest Solver (2007)

In 2007 I got my hands on Puzzle Quest DS, an awesome game I played many times over.

I started building a Lego robot to play the game on the DS for me, using a webcam for visual recognition, but I never finished writing the software; I was in the middle of recognizing tiles on the game board when something else caught my interest.  I only have a few pictures of this robot, but it was pretty cool.  Built with first-generation Mindstorms, it sported two arms in a SCARA configuration: an upper arm (red) with a lower arm (green) attached at its end.  The rotations were performed in the plane of the DS's screen to position a Lego "stylus", which was brought down onto the screen by a Lego micromotor (blue).  This gave a cylindrical coordinate system.

Robot mockup; webcam and DS models from Google 3D Warehouse.
The robot, in two parts.  The three tires hold up the DS; the articulated arm is on the left.  The long part on the right serves as an attachment point for the webcam.
A closer view of the interesting part of the robot.  You can see the micromotor which moves the end effector in and out.  The end effector is the long axle with a blue circular part that limits travel.
The motor on the right controls the upper arm; the motor on the left controls the lower arm through a series of gears along the upper arm.
A better view of the lower arm mechanism.
The DS, as seen from the webcam's vantage point.  You can see part of the stylus mechanism on the left.
Though the vast majority of the game is driven by randomness, at specific points in the game pre-authored puzzle challenges are presented.  Later on these get quite tricky, and I much preferred the meta-challenge of writing a solver for them to banging my head on them for hours on end.

Below is sample output from my solver.  The solver is a simple brute-force search through the space of legal moves.  It prints its moves in reverse order, so the solution must be read bottom-up:


  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ........
6 ........
7 ........
8 ........
C8/D8 -> 3
  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ........
6 ........
7 ........
8 SS.S....
B8/C8 -> 3
  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ........
6 ........
7 ...S....
8 SRSRR...
F7/F8 -> 1
  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ........
6 ...SRGG.
7 ...RSRKK
8 SRSRRSSG
C7/D7 -> 3
  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ........
6 ...SRGG.
7 SSRSSRKK
8 SRSRRSSG
C6/D6 -> 3
  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ........
6 $$S$RGG.
7 SSRSSRKK
8 SRSRRSSG
F6/F7 -> 3
  ABCDEFGH

1 ........
2 ........
3 ........
4 ........
5 ...$RG..
6 $$S$$RG.
7 SSRSS$KK
8 SRSRRSSG
G5/G6 -> 3
  ABCDEFGH

1 ........
2 ......G.
3 ......K.
4 ....RGR.
5 ...$SSR.
6 $$S$$RS.
7 SSRSS$RK
8 SRSRRSSG
B8/C8 -> 1
  ABCDEFGH

1 ....RG..
2 ...$SSG.
3 ..S$$RK.
4 ..RSS$R.
5 ..SRRSR.
6 $$GYYGS.
7 SSBKKBRK
8 SSRKKRSG
A6/A7 -> 3
  ABCDEFGH

1 ....RG..
2 ...$SSG.
3 $.S$$RK.
4 G.RSS$R.
5 G.SRRSR.
6 S$GYYGS.
7 GSBKKBRK
8 SSRKKRSG
B6/B7 -> 3
  ABCDEFGH

1 ....RG..
2 ...$SSG.
3 $$S$$RK.
4 GGRSS$R.
5 GGSRRSR.
6 SSGYYGS.
7 GGBKKBRK
8 SSRKKRSG
H5/H6 -> 3
  ABCDEFGH

1 ....RG..
2 ...$SSG.
3 $$S$$RK.
4 GGRSS$RK
5 GGSRRSRS
6 SSGYYGSG
7 GGBKKBRS
8 SSRKKRSS
H6/H7 -> 3
  ABCDEFGH

1 ....RG.K
2 ...$SSGS
3 $$S$$RKG
4 GGRSS$RR
5 GGSRRSRR
6 SSGYYGSS
7 GGBKKBRR
8 SSRKKRSS
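
For the curious, here is a minimal sketch of the kind of depth-first brute-force search involved.  It is not the original code: the board encoding, the rule that a swap is legal only if it creates a run of three or more, the cascading match resolution, and the goal of clearing the board are all assumptions made for illustration.

#include <array>
#include <string>
#include <utility>
#include <vector>

// A simplified sketch, not the original solver: rules and encoding are assumed.
constexpr int W = 8, H = 8;
using Board = std::array<std::string, H>;              // board[row][col], '.' = empty
using Mask  = std::array<std::array<bool, W>, H>;

// Mark every tile that belongs to a horizontal or vertical run of three or more.
static bool findMatches(const Board& b, Mask& hit) {
    bool any = false;
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c) {
            char t = b[r][c];
            if (t == '.') continue;
            if (c + 2 < W && b[r][c + 1] == t && b[r][c + 2] == t)
                hit[r][c] = hit[r][c + 1] = hit[r][c + 2] = any = true;
            if (r + 2 < H && b[r + 1][c] == t && b[r + 2][c] == t)
                hit[r][c] = hit[r + 1][c] = hit[r + 2][c] = any = true;
        }
    return any;
}

// Remove matched tiles and let the remaining ones fall; repeat until stable.
static void resolve(Board& b) {
    for (;;) {
        Mask hit{};
        if (!findMatches(b, hit)) return;
        for (int r = 0; r < H; ++r)
            for (int c = 0; c < W; ++c)
                if (hit[r][c]) b[r][c] = '.';
        for (int c = 0; c < W; ++c) {                   // gravity, one column at a time
            int write = H - 1;
            for (int r = H - 1; r >= 0; --r)
                if (b[r][c] != '.') { char t = b[r][c]; b[r][c] = '.'; b[write--][c] = t; }
        }
    }
}

static bool cleared(const Board& b) {
    for (const std::string& row : b)
        if (row.find_first_not_of('.') != std::string::npos) return false;
    return true;
}

// Depth-first search over all adjacent swaps that produce at least one match.
// On success, 'moves' ends up ordered last move first, like the reverse-printed output above.
static bool solve(const Board& b, std::vector<std::string>& moves) {
    if (cleared(b)) return true;
    const std::pair<int, int> dirs[] = {{0, 1}, {1, 0}};
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c)
            for (auto [dr, dc] : dirs) {
                int r2 = r + dr, c2 = c + dc;
                if (r2 >= H || c2 >= W) continue;
                Board next = b;
                std::swap(next[r][c], next[r2][c2]);
                Mask hit{};
                if (!findMatches(next, hit)) continue;  // swap creates no match: illegal
                resolve(next);
                if (solve(next, moves)) {
                    moves.push_back(std::string(1, char('A' + c)) + std::to_string(r + 1) +
                                    "/" + std::string(1, char('A' + c2)) + std::to_string(r2 + 1));
                    return true;
                }
            }
    return false;
}

Printing the board before each recorded move, in the order the moves end up in the vector, reproduces the kind of reverse listing shown above.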

Lego RC Trike (2005)

In 2005 I bought an awesome Lego RC set, 8475, for about $100 on a clearance sale.  I hesitated before buying it because I wondered whether it would even be worth the money, but really, once built, is it ever awesome!  Throttle and steering each have three steps, plus there is an extra digital channel on two triggers.  The motors are quite powerful, enough to spin out and drift some.

After building the two models for which they supply instructions, I thought I might tackle an inverted trike design, kind of like a T-Rex:


It's a shame I have no video of it; it was pretty cool.  The handling was absolute garbage, because the rear end was way too light and the steering angle too shallow, which yielded a gigantic turning radius, but it was still fun to watch it go.

From the top.
Underside.

Rear suspension.
It took me a little while to figure out how to build the steering linkage so as to widen the track:




photon (Rendering Architecture Experiment) (2003)

In my CS488 class, right after Semiramis, the final project was one where students created their own assignments within a structure of guidelines.  I chose to create a simple rendering pipeline, going from the modelling package all the way to real-time rendering.  I gave this pipeline the (very original) name "photon".

Modelling Aids

As a modelling package, I used Milkshape.  My first two tools were created to help in modelling, because as great as Milkshape was, it was missing a few key features.

To work on the car I wished to model, I required geometry mirroring, something Milkshape did not handle at the time.  To remedy this, I wrote a stand-alone tool called msmirror which looked for specific tags in group names and mirrored geometry right in the .MS3D file.  This tool operated on the command line, and simply transformed the source file by adding mirrored geometry or re-mirroring existing geometry.  This way, I was able to mirror one side of the car to obtain the other.
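
As an illustration, here is roughly what the mirroring step boils down to.  The data layout below is a stand-in: the real tool parsed and rewrote the binary .MS3D file and decided what to mirror based on the tags in group names.

#include <cstdint>
#include <vector>

// Assumed, simplified layout; .MS3D stores quite a bit more per vertex/triangle.
struct Vertex   { float x, y, z; };
struct Triangle { uint16_t v[3]; };   // indices into the vertex array

// Append a mirrored copy of the given geometry (in the real tool, just the
// tagged groups), reflected across the YZ plane.
void mirrorGeometry(std::vector<Vertex>& verts, std::vector<Triangle>& tris) {
    const size_t baseV = verts.size();
    const size_t baseT = tris.size();
    for (size_t i = 0; i < baseV; ++i) {
        Vertex v = verts[i];
        v.x = -v.x;                   // reflect across x = 0
        verts.push_back(v);
    }
    for (size_t i = 0; i < baseT; ++i) {
        const Triangle& t = tris[i];
        // Reverse the winding so the mirrored faces still point outward.
        Triangle m{{ static_cast<uint16_t>(t.v[2] + baseV),
                     static_cast<uint16_t>(t.v[1] + baseV),
                     static_cast<uint16_t>(t.v[0] + baseV) }};
        tris.push_back(m);
    }
}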

I also required more sophisticated UV mapping and texture paging functionality than was available.  I thus created a second tool to process .MS3D files, called msmapper.  This interactive tool allowed me to map and assign texture pages to each group within the model individually, changing projection types and so on, and to save the results back to the model.  This granted me much finer control over UVs and texture pages than was otherwise available.

With great pain, I modelled a Subaru Impreza WRX STi by pushing vertices, which was great fun and a good learning experience.  The resulting polygon mesh is gross, but hey, you have to start somewhere.




I still had no good texture for the wheels, however.  Then one day I saw a beautiful STi parked outside a restaurant right beside where I lived.  I walked right into the restaurant and went from table to table until I found the owner, from whom I requested permission to photograph the car and its wheels.


Triangle-Batching Tool

Next, I wrote a tool to perform fanning and stripping on my modelled geometry, since on my GeForce 256 those were the fastest primitive types to use.  They cut down both on bandwidth and on transformation costs.  (Nowadays, different bandwidth limitations and large post-transform vertex caches frequently make indexed triangle lists the better choice, but that wasn't the case here.)

My tool used a few simple heuristics to do its job.  It began by splitting vertices as required to handle texturing and normal requirements, since fixed-pipeline OpenGL doesn't support independent indexing for separate attributes.  Next, it would create fans around any vertex used by 8 or more triangles:
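
A rough sketch of how that fan pass might select its hubs is shown below.  The data structures are assumptions, and the actual fan assembly (walking the surrounding triangles edge to edge to order them around the hub) is omitted for brevity.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct Triangle { uint32_t v[3]; bool batched = false; };   // assumed layout

// Count, for each vertex, how many unbatched triangles reference it, and
// return the vertices that qualify as fan hubs (8 or more triangles).
std::vector<uint32_t> findFanHubs(const std::vector<Triangle>& tris) {
    std::unordered_map<uint32_t, int> valence;
    for (const Triangle& t : tris)
        if (!t.batched)
            for (uint32_t v : t.v)
                ++valence[v];

    std::vector<uint32_t> hubs;
    for (const auto& [vertex, count] : valence)
        if (count >= 8)
            hubs.push_back(vertex);
    return hubs;
}
// For each hub, the tool then ordered the surrounding triangles into a fan by
// following shared edges, and marked them as batched.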



Lastly, it would create strips by brute force, using a few heuristics.  It would first examine each of the remaining unbatched triangles and locate the triangles with the lowest number of unbatched neighbours (treating any two triangles sharing an edge as neighbours):


In the set of triangles with the lowest number of neighbours, it would iterate over each triangle's vertices and sum the number of triangles that used each vertex.  It would then prefer triangles for which the sum was lowest:


The goal of this last heuristic was to make the algorithm try to strip "dangling triangles" on the edge of the mesh sooner, because these were more likely to become orphans later on.  Without it, the algorithm frequently picked triangles in the "middle" of the mesh as starting points, which not only left a bunch of orphans on the edges but also sometimes "cut" an arbitrary stripped path from the center of the mesh to its edge, splitting the space of triangles still available for later strips.  I'm definitely not sure this last heuristic is advantageous in the general case, however; it may just have helped because of my particular input data.  I would have needed to try my tool on more data sets to be sure.

Once the set of starting "problem triangles" was known, a simple brute-force search across all edges of the triangles in the starting set was made to find which edge and winding order yielded the longest strip.  This strip was saved and the process was repeated until all triangles were batched.
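
Here is a small sketch of the starting-triangle selection described above, again under assumed data structures; the strip growth itself (trying every edge and winding order of each candidate and keeping the longest strip) is only hinted at in the comments.

#include <climits>
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Triangle { uint32_t v[3]; bool batched = false; };   // assumed layout

// Sorted pair of vertex indices, used as a key identifying a shared edge.
static std::pair<uint32_t, uint32_t> edgeKey(uint32_t a, uint32_t b) {
    return a < b ? std::make_pair(a, b) : std::make_pair(b, a);
}

// Pick the set of "problem triangles" to start strips from: the unbatched
// triangles with the fewest unbatched neighbours, ties broken by the lowest
// sum of per-vertex triangle counts (so dangling edge triangles go first).
std::vector<size_t> pickStripStarts(const std::vector<Triangle>& tris) {
    std::map<std::pair<uint32_t, uint32_t>, int> edgeUse;   // edge -> #unbatched triangles
    std::map<uint32_t, int> vertexUse;                      // vertex -> #unbatched triangles
    for (const Triangle& t : tris) {
        if (t.batched) continue;
        for (int i = 0; i < 3; ++i) {
            ++edgeUse[edgeKey(t.v[i], t.v[(i + 1) % 3])];
            ++vertexUse[t.v[i]];
        }
    }

    int bestNeighbours = INT_MAX, bestVertexSum = INT_MAX;
    std::vector<size_t> starts;
    for (size_t i = 0; i < tris.size(); ++i) {
        const Triangle& t = tris[i];
        if (t.batched) continue;
        int neighbours = 0, vertexSum = 0;
        for (int k = 0; k < 3; ++k) {
            neighbours += edgeUse[edgeKey(t.v[k], t.v[(k + 1) % 3])] - 1;  // minus itself
            vertexSum  += vertexUse[t.v[k]];
        }
        if (std::tie(neighbours, vertexSum) < std::tie(bestNeighbours, bestVertexSum)) {
            bestNeighbours = neighbours;
            bestVertexSum  = vertexSum;
            starts.clear();
        }
        if (neighbours == bestNeighbours && vertexSum == bestVertexSum)
            starts.push_back(i);
    }
    return starts;   // a brute-force search over these triangles' edges and
                     // windings then grows the longest possible strip from each
}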

The quality of the resulting strips and fans depended on the inputs:

Generally pretty crappy stripping, due to wonky input geometry.  Note how the door and windows are a little better: I had an easier time modelling them, so their edge topology was much more intelligible.
Great fan and strips for the wheel geometry.

Run-Time Library

Afterwards, the idea was to get this geometry rendering as fast as possible.  I wanted a few simple effects: transparency for the glass and a volumetric lighting effect for the headlights.  (This last effect simply leveraged destination alpha with a special blend mode to lighten the frame buffer only where the dust particles appeared.)
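
In fixed-pipeline OpenGL terms, that trick boils down to something like the snippet below; the pass ordering and blend factors here are my guess at a reconstruction, not the original code.

#include <GL/gl.h>

// Hypothetical helpers: they only issue the particle and cone geometry.
void drawDustParticles();
void drawHeadlightCones();

// Assumes a GL context is current and the rest of the scene is already drawn.
void drawHeadlightEffect() {
    glEnable(GL_BLEND);

    // Pass 1: draw the dust particles normally, making sure their alpha is
    // written into the frame buffer's destination alpha channel.
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    drawDustParticles();

    // Pass 2: draw the headlight cones additively, scaled by destination alpha,
    // so the frame buffer is lightened only where dust was drawn.
    glBlendFunc(GL_DST_ALPHA, GL_ONE);
    drawHeadlightCones();
}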


I also included a real-time blurring test, done by rendering many lightly-transparent quads with the same texture, slightly offset from each other:
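
Something along these lines, roughly; the tap count, offsets, and opacity are placeholder values, and drawTexturedQuad is a hypothetical helper.

#include <cmath>
#include <GL/gl.h>

// Hypothetical helper: draws one screen-aligned textured quad offset by (dx, dy).
void drawTexturedQuad(float dx, float dy);

// Cheap "blur": draw the same texture several times, slightly offset, at low opacity.
void drawBlurredQuad(int taps, float radius) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (int i = 0; i < taps; ++i) {
        float angle = 6.2831853f * i / taps;
        glColor4f(1.0f, 1.0f, 1.0f, 1.5f / taps);      // lightly transparent
        drawTexturedQuad(radius * std::cos(angle), radius * std::sin(angle));
    }
}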



My goal with this project was to try out a simple method of minimizing graphics pipeline state changes while rendering a scene, since state changes were (and still are) quite expensive.  In order to concentrate on this, I stuck to the OpenGL fixed pipeline model; the scene didn't really require anything programmable anyway, and that axis of exploration was sort of orthogonal to the state change issue.

The idea implemented by photon is simply that each logical object that requires rendering (be it a solid 3D model, a transparent 2D HUD element, or anything else) registers a so-called "render key" with the rendering system.  As far as the rendering system is concerned, this key is a kind of proxy for the rendered object; when rendering, all render keys in the system are traversed in a specific order outlined below, and each is told in turn to render its payload.  In this fashion the entire scene is rendered.

What's more interesting is what each render key contains, and how the rendering order is determined.  When generated, a render key is made to contain a 32-bit "hash" of the rendering state required for the payload to be delivered.  (More bits could be used for a more complex set of states.)  For example, it might indicate that it requires a perspective projection, with client states enabled, texturing enabled, no texture coordinate generation, normal normalization on, lighting off, fog off, depth testing enabled, no culling, standard blending, all in the 2nd global rendering pass with a given texture ID, and so on.  This information is combined into the 32-bit key in such a way that the most expensive state changes are encoded in the high-order bits of the hash, and the least expensive state changes in the low-order bits.  For example, changing projection matrices is much cheaper than, say, enabling or disabling lighting wholesale; thus, the bit indicating whether a perspective or orthographic projection is required would be placed somewhere in the least significant bits of the key, whereas the lighting-required bits would probably be placed somewhere in the middle of the key.
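
To make that concrete, here is a sketch of how such a key could be packed.  The particular fields, widths, and bit positions below are illustrative, not photon's actual layout.

#include <cstdint>

// Illustrative subset of fixed-pipeline state; the real key encoded more.
struct RenderStateDesc {
    uint32_t pass;          // global rendering pass
    uint32_t textureId;
    bool     texturing;
    bool     lighting;
    bool     fog;
    bool     depthTest;
    uint32_t blendMode;
    bool     perspective;   // perspective vs. orthographic projection
};

uint32_t makeRenderKey(const RenderStateDesc& s) {
    uint32_t key = 0;
    key |= (s.pass        & 0x3u)    << 30;   // global pass: most expensive, highest bits
    key |= (s.textureId   & 0xFFFu)  << 18;   // texture binds: expensive
    key |= (s.texturing   ? 1u : 0u) << 17;
    key |= (s.lighting    ? 1u : 0u) << 16;   // middling cost: middle of the key
    key |= (s.fog         ? 1u : 0u) << 15;
    key |= (s.depthTest   ? 1u : 0u) << 14;
    key |= (s.blendMode   & 0x7u)    << 1;    // cheap-ish; bits 4-13 left for other states
    key |= (s.perspective ? 1u : 0u) << 0;    // projection switch: cheapest, lowest bit
    return key;
}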

To render, all the keys are sorted by value and traversed in the resulting order, from lowest value to highest value.  Before calling the payload back, the rendering pipeline is set up as required by the information in the key.  In a "normally-populated" scene, this means that all objects which require similar rendering states will be batched together.  For example, all solid objects which require lighting, texturing, and fog will be rendered contiguously, and all objects requiring transparency, no depth test, and an orthographic projection will also be rendered contiguously.

This has the benefit of minimizing the number of rendering pipeline state changes, and also of prioritizing them so that the most expensive ones are executed the least often.  By comparing the previous and next rendering keys, it also trivially allows the system to perform the absolute smallest amount of state change required: XORing the two keys reveals which bits of state have changed.  This also implicitly does away with any "recall the last state set by the program and only set it if the value has changed" type of logic.
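
In code, the render loop amounts to something like this; the key/payload structure and the applyStateBits helper are stand-ins for illustration.

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical: issues only the pipeline calls for the state bits set in changedMask.
void applyStateBits(uint32_t state, uint32_t changedMask);

struct RenderKey {
    uint32_t state;                  // packed state hash, as sketched above
    void (*draw)(void* payload);     // payload callback
    void*    payload;
};

void renderAll(std::vector<RenderKey>& keys) {
    // Sorting groups similar states together, with the most expensive-to-change
    // states (in the high-order bits) varying the least often.
    std::sort(keys.begin(), keys.end(),
              [](const RenderKey& a, const RenderKey& b) { return a.state < b.state; });

    bool     first   = true;
    uint32_t current = 0;
    for (const RenderKey& k : keys) {
        // XOR against the previous key to find exactly which state bits changed.
        uint32_t changed = first ? 0xFFFFFFFFu : (current ^ k.state);
        if (changed != 0)
            applyStateBits(k.state, changed);
        current = k.state;
        first   = false;
        k.draw(k.payload);
    }
}

Because the comparison is a plain integer less-than, adding a new piece of state to the system is just a matter of deciding how expensive it is to change and picking its bit position accordingly.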

This arrangement also allows for a nice bit of decoupling, because the payload (be it procedural, like a HUD, or data-driven, like a player model) does not need to know "where" to insert itself into the traditional rendering buckets; it merely has to request the correct state when creating its render key.  The payload code is made more elegant, because it does not need to deal with any state setup; it can be assured the rendering pipeline will be set up correctly when the renderer calls back.  As a bonus, all "stale state" problems (i.e. "I changed my object to render in a different part of the code/rendering pass/whatever, but now it doesn't render properly!") are also solved, because each render key contains full information about the required state.

Conclusion

Though each part of the project was simple in isolation, they all worked great together, and the final rendering path ran very fast, much faster than anything else I had tackled before.


Tuesday, August 28, 2012

Crammed! (Tetris Clone) (1999)

Our IB school was tiny, with about 80 people (including staff) for 3 classrooms.  Needless to say, space was at a premium, and so was born the premise for Crammed!: Tetris, but with people.  You literally arrange people into as tight a space as possible, completing lines.  I wrote this game in the spring of 1999, and created the small-scale animated graphics; a friend of mine, Olivier, made all the large-scale matte paintings.  These drawings, which serve as backgrounds for the intro screen and the different levels, are more or less inside jokes referring to our school life.

The game is a simple Tetris clone, with a few variations.  Other than the usual single-player mode, you can play 2-player versus, where completing two or more lines sends an equivalent number of "garbage" lines to your opponent, and 2-player co-op, where two players play simultaneously in a wider play area.  You can also edit your own block shapes, as well as the probabilities of each block falling.  (The game did not use the 7-bag system; this was still a bit before the big wave of standardisation across games.)
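
Editable drop probabilities just mean doing a weighted random pick when spawning a block; a minimal version of the idea (the data layout is purely illustrative, not Crammed!'s code) looks like this.

#include <cstdlib>
#include <vector>

// Pick a block index given per-block weights (e.g. user-edited probabilities).
// Assumes rand() has been seeded; weights need not sum to any particular value.
int pickBlock(const std::vector<int>& weights) {
    int total = 0;
    for (int w : weights) total += w;
    int roll = std::rand() % total;          // uniform in [0, total)
    for (size_t i = 0; i < weights.size(); ++i) {
        roll -= weights[i];
        if (roll < 0) return static_cast<int>(i);
    }
    return 0;                                // not reached when total > 0
}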

The game makes use of my handwritten font, created with FontX.  All graphics were processed by E.


FontX Font Editor (1999)

In 1999 I was still working quite a bit in mode 13h.  I had written a few font libraries over the years, but for an upcoming project of mine, Crammed!, I wanted to use a digitized version of my handwriting.  Thus, I wrote out most of the ASCII table by hand and scanned the result.  I tweaked a few of the characters in Photoshop on my dad's Mac, brought everything over to the PC as BMPs, imported it into E again, and converted all the characters to my custom sprite format.

Then, I wrote this little application, FontX, to manage the font that would be made up of these characters.  FontX and its runtime library supported variable font sizes, variable-width fonts and tracking (global character spacing).  It even supported kerning, though it did not do so by character pair; instead, it allowed each character to specify an amount of space to be applied to its left and right.  There was also some measure of auto-fitting for all of the above given raw character data, to allow the user to concentrate on adjusting special cases to the baseline (mostly characters with descenders, though in the case of my handwriting each character was adjusted individually).
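
A bare-bones version of that spacing model might look like the following; the types and field names are assumptions, not FontX's actual format.

#include <string>
#include <vector>

// Per-character metrics: a bitmap width plus a left/right adjustment, which is
// how the kerning was approximated without storing per-pair tables.
struct Glyph {
    int width;        // width of the character's bitmap, in pixels
    int leftSpace;    // extra space applied before the character (may be negative)
    int rightSpace;   // extra space applied after the character (may be negative)
};

// Compute the total advance of a string; 'glyphs' is indexed by character code
// (256 entries assumed), and 'tracking' is the global character spacing.
int measureString(const std::string& text, const std::vector<Glyph>& glyphs, int tracking) {
    int x = 0;
    for (unsigned char c : text) {
        const Glyph& g = glyphs[c];
        x += g.leftSpace + g.width + g.rightSpace + tracking;
    }
    return x;
}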

Though it was pretty simple, I consider this project a great success, because the fonts it produced looked great.


SynthX (1998)

In the mid-to-late 90's I was involved, however lamely, in the tracking/demo music scene.  I sought to remedy my relative lack of access to synthesis hardware by writing my own software synthesizer; thus was born SynthX.

The synthesis model is quite simple.  There are 16 operators.  For each operator, using the mouse, you can draw the operator's single-cycle waveform as well as its panning, amplitude, and pitch envelopes.  Rendering simply consists of additively mixing all the operator outputs together.  Aside from the obvious application of generating samples for use in tracking, the synth lets the user explore the mapping between the shape of a waveform and its sound, by drawing different waveforms into the operators and hearing what they sound like.
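
Conceptually, the render step boils down to something like the sketch below.  The table size, envelope representation, and mono output are simplifications assumed for illustration; the real synth also used the panning envelope to place each operator in the stereo field.

#include <array>
#include <cmath>
#include <vector>

constexpr int kOperators = 16;
constexpr int kTableSize = 1024;           // one hand-drawn waveform cycle per operator

struct Operator {
    std::array<float, kTableSize> wave;    // drawn single-cycle waveform, -1..1
    std::vector<float> amplitude;          // amplitude envelope, one value per output sample
    std::vector<float> pitch;              // pitch envelope, in Hz, one value per output sample
};

// Additively mix all operators into a mono buffer.  Envelopes are assumed to be
// pre-sampled to 'sampleCount' entries.
std::vector<float> renderSamples(const std::array<Operator, kOperators>& ops,
                                 int sampleCount, float sampleRate) {
    std::vector<float> out(sampleCount, 0.0f);
    for (const Operator& op : ops) {
        double phase = 0.0;                // position within the wave table
        for (int i = 0; i < sampleCount; ++i) {
            out[i] += op.amplitude[i] * op.wave[static_cast<int>(phase)];
            phase = std::fmod(phase + op.pitch[i] * kTableSize / sampleRate, kTableSize);
        }
    }
    return out;
}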

Though most of my programs made use of my home-grown graphics/mouse/joystick/font/etc. libraries, this was one of the rare ones to make use of my SoundBlaster library, which leveraged DMA to transfer sound data to the card.


2198 (2D Space Shooter) (1997)

As mentioned elsewhere, I've always had a passion for 2D shooters.  I did actually wind up writing a prototype for a game I was eventually going to name 2198.

This prototype features a scrolling tiled background system, multiple animated weapon systems, and a single enemy type which flies in and fires a single projectile at you.  This projectile can hit and damage the player.

I modelled the player's ship in TurboCAD 3D (under Windows 95), rendered it, and imported it into E.  The animated flames are simple pixel art created in the same editor, as are the various projectiles and the enemy ship.  Note how the flames move closer together when the ship tilts as the player moves sideways, to remain aligned with the ship's engines.


Bob Screen Saver (1997)

For some reason, in our last year of high school, some of my friends and I took to calling each other "Bob".  In light of this fact, I made a simple screen saver on a rainy afternoon.  The word "Bob" bounces around ad infinitum, with a slow palette rotation that takes about 8 minutes to complete a revolution through all the hues.


Funktris (1997)

I think it is likely that the majority of people who grow up to be game programmers write at least one Tetris clone in their lives.  This was my first.

There is not much to say about it, other than it was loosely based on Microsoft Tetris, the variant I played at the time.  There is a cute block-dropping effect when lines are completed.


Fire and Text (1997)

I've always had a soft spot for 2D space-type shoot-em-ups, and I always wanted to write one.  I loved Raptor: Call of the Shadows and sort of wanted to emulate it (minus the strange difficulty/upgrade path curve).

I wanted to display my shoot-em-up's intro text using a fire effect, with the flames themselves forming the letters of the text.  I tried a few different approaches and wound up settling on a relatively simple scheme, where the frame buffer essentially holds the "temperature" of the fire at each pixel.  The palette is set up using a simple sequence of linear ramps of colours, from yellow to red, then to black, dark grey, and black again.  This determines the colour of a fire "element" as its temperature decreases.

Every frame, the frame buffer is scrolled up by a pixel, and the color value of each pixel is decremented; this simulates the temperature decrease as the fire rises from the bottom of the screen.  A random series of 4x4-pixel clusters on the screen is selected, and the color value of each pixel in those clusters is incremented.  A second random series of 5x5 clusters is selected, and the color value of each pixel in those clusters is decremented.  Lastly, as text scrolls through the bottom-most 16 lines of the fire, the text is magnified and the frame buffer is incremented wherever a pixel is non-transparent in the source font.
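
Here is a compressed sketch of that update loop, written against a plain byte buffer rather than actual mode 13h VGA memory; the cluster counts and the increment/decrement amounts are placeholder values, not the originals.

#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

constexpr int W = 320, H = 200;                       // mode 13h resolution

// One frame of the fire update.  'fb' holds one palette index per pixel, which
// doubles as the fire "temperature"; the palette ramps yellow -> red -> black ->
// dark grey -> black, so cooler pixels fade out.
void updateFire(std::vector<uint8_t>& fb) {
    // Scroll everything up one pixel, cooling each pixel by one step as it rises.
    for (int y = 0; y < H - 1; ++y)
        for (int x = 0; x < W; ++x) {
            uint8_t below = fb[(y + 1) * W + x];
            fb[y * W + x] = below > 0 ? below - 1 : 0;
        }

    // Heat a random set of 4x4 clusters, then cool a random set of 5x5 clusters.
    auto stir = [&](int size, int delta, int count) {
        for (int i = 0; i < count; ++i) {
            int cx = std::rand() % (W - size), cy = std::rand() % (H - size);
            for (int y = 0; y < size; ++y)
                for (int x = 0; x < size; ++x) {
                    int v = fb[(cy + y) * W + (cx + x)] + delta;
                    fb[(cy + y) * W + (cx + x)] = static_cast<uint8_t>(std::clamp(v, 0, 255));
                }
        }
    };
    stir(4, +8, 200);                                  // heat clusters (placeholder amounts)
    stir(5, -8, 200);                                  // cool clusters

    // Text seeding (not shown): over the bottom 16 lines, increment the buffer
    // wherever the magnified source font has a non-transparent pixel.
}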


Monday, August 27, 2012

Fireworks (1996)

For some reason, I have always loved fireworks.  I enjoy them tremendously, and as a kid I often wished I could somehow be involved in putting together a show.  (This was before I realized how accident-prone I was and how likely that idea was to result in missing fingers.)  The next-best thing for me was to write a firework simulator, which I did in the summer of my quasi-sweet 15.

There are four versions of the simulator in the video below, with varying levels of detail, simulation fidelity, pyrotechnic bomb types, and performance.  Most feature a simple palette trick that "lights up" the smoke trails when bombs go off.  The simulator follows a script of bomb launches and timings, as demonstrated by the bombs which display text.  The last cut shows an assembler-only version of the simulator.

I wasn't so well-versed in dynamics back then and did not model drag, an important aspect of the look of airborne pyrotechnics.  As a result, the trajectories look a bit off, but this did not hamper my enjoyment.


Tuesday, August 21, 2012

E v2.0 (1997)

In 1997, about a year after writing E, I went on a 72-hour coding rampage during the summer and wrote the vast majority of a second version of my editor.  In terms of functionality, the biggest difference was mouse support pretty much everywhere, as well as a region-filling tool.  The interface was also jazzed up a bit.  Code-wise, the new version was much cleaner, but organization took a hit when I busted the 64k segment size limit and had to break it up into units using overlays.

Looking back I'm somewhat taken aback at the amount of functionality I crammed into the UI and mouse support: drag and drop of color swatches, windows with checkboxes, radio buttons, command buttons, file browsing dialogs, scrollbars, the works.  Here's a partial functionality list:
  • mouse support everywhere
  • individual pixel drawing, lines, rectangles, ovals and fills
  • many methods to pick colors:
    • a full-screen palette
    • a mini-palette (full green-blue spectrum for a given red value)
    • red-green-blue level pickers
    • smooth shading gradient swatches: the user set the color at each end of a gradient and the editor computed the intermediate colors (see the sketch after this list)
  • a console to enter commands
  • rectangular area selection
  • many effects like blurring, sharpening, adding noise, tinting, black and white, etc.
  • a basic but amusing-looking windowing system
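
The intermediate-color computation mentioned above is just a per-channel linear interpolation between the two endpoint swatches.  A minimal sketch (the Color type and 0-255 component range are assumptions, not E's internals):

#include <vector>

struct Color { int r, g, b; };                        // assumed 0-255 components

// Linearly interpolate between the two endpoint swatches; 'steps' is the number
// of swatches in the gradient, including both ends.
std::vector<Color> makeGradient(Color from, Color to, int steps) {
    std::vector<Color> swatches;
    for (int i = 0; i < steps; ++i) {
        float t = steps > 1 ? static_cast<float>(i) / (steps - 1) : 0.0f;
        swatches.push_back({
            static_cast<int>(from.r + (to.r - from.r) * t + 0.5f),
            static_cast<int>(from.g + (to.g - from.g) * t + 0.5f),
            static_cast<int>(from.b + (to.b - from.b) * t + 0.5f),
        });
    }
    return swatches;
}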


Sunday, August 19, 2012

Vectorised Digit Morphing (1997)

In 1997, as a prelude to a few other game-writing projects I had in mind, I wanted to try out a simple way of rendering digits using line segments.  All digits would be rendered using the same number of segments, overlapping segments when required; when changing a digit from one to another (e.g. to display a score or something), the segment endpoints would be interpolated.  This actually gave a somewhat cool effect, which I've recorded for posterity here.
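
The morphing itself is nothing more than linearly interpolating each segment's endpoints.  A minimal sketch, where the point/segment types and the segment count are placeholders:

#include <array>

struct Point   { float x, y; };
struct Segment { Point a, b; };

constexpr int kSegmentsPerDigit = 7;   // placeholder; every digit uses the same count
using Digit = std::array<Segment, kSegmentsPerDigit>;

static Point lerp(Point p, Point q, float t) {
    return { p.x + (q.x - p.x) * t, p.y + (q.y - p.y) * t };
}

// Blend two digits; t goes from 0 (entirely 'from') to 1 (entirely 'to').
Digit morphDigit(const Digit& from, const Digit& to, float t) {
    Digit out;
    for (int i = 0; i < kSegmentsPerDigit; ++i)
        out[i] = { lerp(from[i].a, to[i].a, t), lerp(from[i].b, to[i].b, t) };
    return out;
}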

Skywars (1997)

With a bit of time to spare in 1997, I started writing a small battleship-style game.  I never quite finished it (you can't place your own ships yet, and the computer's ships are always statically placed in a corner), but it was starting to take shape.  You can actually start the game and either win or lose, victory screen and all.


Bookshelf (2008-2010)

In September 2008 I started designing a big bookshelf for my girlfriend; about a year later, in September 2009, I started building it; and about a year later, in 2010, I finished it.


I know very little about woodworking and approached the build as a fun challenge, not really knowing what I was getting into.  It was a great experience, and I'm quite happy with the results, even though I used some (ahem) "unorthodox" joinery methods.  On the whole, though, it was a bit challenging, because I essentially built the whole thing in our living room.  If I had to undertake a similar project again, it would have to be in a dedicated work area.  Imagine living in this for a year:



My dad lent me some of his power tools, and gave me counsel when I was unsure about things.  I went through well over 20 iterations of the design, from high-level re-imaginings to fiddling over tiny details.  Most of the design work was done in Sketchup, and everything was modelled down to the millimeter.

Overview; at that point in time we were still thinking of adding crown moldings.  Some detail missing.
Back view.
Detail on inside fixtures, on the ends of the bookshelf.
X-ray view of one of the versions of the adjustable footing.
Final levelling assembly.
There was a fair bit of lamination involved:

9 clamps per shelf.
24 shelves to laminate.
 I made some jigs and drilled lots of holes:



I made a big adjustable routing jig, which I had to position with the help of some spreadsheet calculations: the jig nuts didn't sit flush with the wood, which tilted the jig so that on certain edges it didn't meet the router base at a predictable 3" from the bit center; my spreadsheet partly compensated for that:

Slot through entire width of wood.
My nice Rubbermaid sawhorse.
Gunning for a rounded rectangle depression.
Victory!
I used a joinery technique inspired by my years of Lego construction:

2x8 plate on some base plates.
Looks OK.
Trusty sawhorse! 
Getting ready to screw another bracket in place.
One of the bookshelf ends before it was trimmed to fit the base boards.
Close-up of my sort-of-handiwork.
I had modelled the moldings in Sketchup, and cut out the profile from the base of the bookshelf:





And eventually I test-assembled a unit to make sure things fit properly:



Everything was ready to paint, assemble and level:





I eventually spent an epic day painting, and another assembling everything.  I used my bench power supply (which I'm eventually going to write up) to rig up power for my camera, connected the camera to my PC, and shot a large series of timelapse pictures, which I ultimately made into a movie using Avisynth.  Here's what that looks like: