Sunday, November 4, 2012

Progress on Shi....sigh

Wow, no posts for over 3 months.  Part of that is the month-long vacation I took at the beginning of August, and the other, bigger part is just keeping up with my job.  Perhaps I'll talk about my vacation in another post, as it was, in a sense, a life-changing event for me.

But alas, work has kept me far too busy to do anything with Shi.  Working 55+ hrs a week pretty much makes "hobby time" non-existent.  Despite that, I've been working on some posts detailing how to debug a linux kernel.  Part of the reason I like to put up posts here is so that I can refer back to them when I wonder "how in the hell did I do that before??".

For now though, Shi has been put on hold.  I still have ideas for it, and for a little while, I was getting better at C++ so that I could use the llvm libraries to implement Shi.  To console myself, I am starting to watch the videos for a programming language design class at Brown University.  I've only watched the first two classes, but it's definitely been interesting so far.  All the more so because the professor will be using Racket to implement a subset of python :)

http://www.cs.brown.edu/courses/cs173/2012/OnLine/

Tuesday, July 10, 2012

Getting back into C(++)

It's been a crazy couple of weeks.  I started my new position as a linux driver developer on June 18th, followed shortly by the Colorado Springs wildfire.  But, once again, I get to dive deep into native programming and get a better feel for exactly how the linux kernel works.  Unfortunately, my C(++) skills have gotten rusty over the last 4 years or so.

It's not like I have written zero C(++) programs, but they have been few and far between.  So for the last few days while I've been at home, I've been re-reading Bruce Eckel's free book on C++ (Thinking in C++).  The reason I am reading this, as opposed to my old copy of C: The Complete Reference, is that A) I need to get better at C++ and B) Bruce Eckel's book basically compares and contrasts many of the differences between C and C++.

With regard to the second point, I like that this book can essentially help me kill two birds with one stone.  It highlights many of the untyped features/pitfalls of the C language, and what C++ does to overcome them.  By covering how C would do something, and then contrasting it with the C++ way, it's kind of like getting a refresher on C while learning C++.  Because of this, it is recommended that you know the fundamentals of C before reading Bruce Eckel's book.  I still highly recommend it because it's one of the few books that actually talks about header files, what inclusion guards are, and at least briefly, how object files are linked and what linking is for.  Bruce also wrote a 2nd volume for C++, which includes a more thorough examination of some advanced topics (eg, templates and the STL).  I also bought some notes from Scott Meyers on C++11, since I figure if I am going to write C++, it may as well be the newest version with some of the included goodies.

As to the first point listed above, I need to get better at C++ because LLVM is written in C++.  I have been somewhat concerned to learn that the lead programmer (and probably others) for the LLVM project is paid by Apple, and thus the Mac gets all the love (for example, LLDB only works on Mac OS X, and libc++ is likewise OS X only).  I have no love for Apple (flame me all you want, but their tyrannical control is about the worst of any big company I know of; as the article I linked to indicates, I am all for freedom, even when that freedom comes with a learning curve).  I find it ironic that in a country that supposedly loves freedom so much, we are willing to give up so much of it to corporations who dictate how things will be, or who make it "just work" even when making it just work takes away my freedoms.  If you are going to argue that the "free market" handles this by letting consumers choose the company they want, tell that to all the litigation-happy companies that use absurd patents to enforce their way of doing things.


But, I will stick with LLVM for Shi, since it is still an open source project.  Yes, I am still working on it, albeit very slowly.  I'm alternating between 3 books now: a book on compiler design, the SICP book, and a book on comparative programming languages.  Not to mention getting back up to speed on the linux kernel, and familiarizing myself with Gambit Scheme.  Right now though, I am focusing on lexical analysis, or the ability to discern tokens from a text stream.  I decided to go full on with C++ for the lexer, so I've also been looking at using Boost's Regex library to help me do this.  The little tutorial that the LLVM project gives is just way too trivial, so I'm just going to plow through the Basics of Compiler Design book.
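
Just to make the idea concrete (and in clojure, since that's what most of the code on this blog is in), here's a minimal regex-based tokenizer sketch.  The token classes here are my own toy approximation, not Shi's actual grammar:

(def token-re
  ;; parens, string literals, integers, or symbol-ish runs of characters
  #"\(|\)|\"[^\"]*\"|\d+|[^\s()\"]+")

(defn tokenize [s]
  (re-seq token-re s))

;; (tokenize "(define (sq x) (* x x))")
;; => ("(" "define" "(" "sq" "x" ")" "(" "*" "x" "x" ")" ")")

A regex like this is fine for a throwaway, but it silently discards the token types; a real lexer needs to tag each token with its class (and its position, for error reporting).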

Fortunately, the r7rs draft already has a pseudo context-free grammar, so that will help me figure out what to tokenize.  Of course, Shi isn't going to be r7rs compliant...I just want to use that as a starting point.  I intend Shi to be more grammatically similar to clojure actually, as I find that syntax easier to read than scheme's.  I also like clojure's type hints (annotations) better.  But of course, the biggest thing for me is going to be the ability to generate native code on the fly via LLVM/clang, so that I can call libraries dynamically without having to do any weird data marshalling (my thought is the ability to essentially #include header files...which is one of the reasons I am looking at Gambit Scheme right now, to see how it compiles scheme code to C code).

Friday, May 11, 2012

EBNF and lexer time

So, what's the first step in making a language?  Honestly, I don't know :)  I'm kind of winging this as I go.  But the starting point seems to be:

1) Decide on your features
2) Decide on your grammar
3) Make an EBNF to describe the language
4) Create a lexer
5) Create a syntax parser
6) Create a compiler/interpreter/VM

I have a rough feature set in place, so I think the next logical step will be to create the CFG and/or EBNF that describes the grammar.  What's a CFG and an EBNF?  A CFG is a context-free grammar.  I can't go into too much detail, but if you pick up a book on formal languages, it should describe what these are.  EBNF is Extended Backus-Naur Form, a recursive description of the structure of a language.  Given an EBNF, there are parser generators like ANTLR or Bison which can spit out a parser for you.

But let me step back a moment.  What is the difference between a lexer and a parser?  And what is a lexer anyway?  One of the first things that a compiler or interpreter must do is recognize the tokens in a string.  In fact, "lexer" and "tokenizer" are basically synonymous.  A token is a discrete lexical unit (see why the two are the same?).  Take for example this sentence.  The words "Take", "for", "example", "this", and "sentence" would all be tokens.  Does white space always delineate tokens?  Not necessarily.  And often, things other than white space can delineate tokens.  Take this example:

int i = 2+3;

What are the tokens? [ int, i, =, 2, +, 3, ; ].  But notice that no white space separates 2+3, and yet those are 3 separate tokens.  This is why you need a lexer, and ideally a lexer generator like lex or flex (YACC and Bison are their parser-generating counterparts).  But how do you make a lexer?  If you have used regular expressions before, this kind of looks like a job for a regex, doesn't it?  But how do regexes work?  Fundamentally, a regular expression defines a "regular" language (a language is regular if a regular expression can be written that recognizes all the strings producible by that language).  In turn, regular expressions can be converted into NFAs (non-deterministic finite automata), which in turn can be converted into DFAs (deterministic finite automata).

If you are curious about regular languages, regular expressions, NFAs, DFAs, pushdown automata, and context-free grammars, then I recommend you pick up a good book on Automata Theory.  People familiar with finite state machines and graph theory really won't have any problem picking up automata theory.  To extremely oversimplify things, an NFA has states, one of which is the starting state, and one or more of which (possibly including the start state) are accepting states.  States are transitioned to by "consuming" an element of the string, or possibly by consuming nothing at all (the epsilon transition).  The "consumption" of the characters in the string leads you from state to state, and once the string is fully consumed, either you are in an accepting state or you are not (or possibly a character in the string has no transition, in which case the string is unrecognized).  They are called non-deterministic because a state can have more than one possible transition for the same input (including epsilon transitions).  A DFA, on the other hand, can have only one possible transition per state for any given input symbol.  So from a graph point of view, a DFA is a directed graph with at most one outgoing edge per vertex for each input symbol (but possibly many incoming edges).
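
To make that concrete, here's a toy DFA in clojure (my own illustration, nothing to do with Shi yet) that accepts unsigned integer literals.  The transition table maps a state and a character class to exactly one next state, which is what makes it deterministic:

(def digit? (set "0123456789"))

(def transitions
  {:start  {:digit :digits}    ; the first digit moves us to :digits
   :digits {:digit :digits}})  ; every further digit loops back

(def accepting #{:digits})

(defn run-dfa [s]
  (loop [state :start, cs (seq s)]
    (if (nil? cs)
      (contains? accepting state)   ; string fully consumed: accepting or not?
      (let [cls (if (digit? (first cs)) :digit :other)]
        (if-let [nxt (get-in transitions [state cls])]
          (recur nxt (next cs))
          false)))))                ; no transition for this character: reject

;; (run-dfa "12345") => true
;; (run-dfa "12a45") => false
;; (run-dfa "")      => false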

Ok, so that's all well and good...but how do you actually PROGRAM one?  Well, that's what I intend to do :)  In the next few posts, I'll put up some of the code that actually performs the lexing for a scheme-like grammar.  Originally, I had thought about writing this in scheme, but since LLVM is written in C++, I might do it in C++ instead.  But I'll probably eventually do this in scheme anyway, just to get some practice with it.  I'll also intersperse this with the beginnings of an EBNF for Shi.  It seems to me that the EBNF is really the first thing I should be doing, as it will define the valid tokens that the lexer has to recognize.

I'm a developer again

It's official: in the next few weeks, I'll be doing linux driver development at my company.  I hope I can bring some of my testing experience into this position, and having had this experience, I can definitely say that all developers should have a testing background, and all testers should have a development background.

While reading some scheme paper (I can't recall which one), I came across a quote from Richard Feynman: "What I cannot create, I do not understand".  I think this really has to be taken to heart.  It is in fact why I studied Computer Science and not Electrical Engineering.  A long time ago I read a saying describing, in a nutshell, the difference between scientists and engineers: "Scientists build in order to learn.  Engineers learn in order to build".  By that criterion, I am most definitely a scientist.  I want to build things so that I understand them.  My end goal is not actually whatever I built, but what I learned.

So getting back into driver development will help me understand better how operating systems work.  Creating my own language will help me better understand the theory of computation.  Implementation is, in my eyes, a necessary evil; a means to an end, but not the end itself.  Without rolling up your sleeves and getting your hands dirty, you won't really understand something.

This is also the key to Buddhism.  People are often surprised when I tell them Buddhism is not a religion or a philosophy.  To the western mind, this doesn't seem possible.  So people ask me: if Buddhism eschews beliefs (and is thus not a religion) and distrusts concepts (and is thus not a philosophy), what could Buddhism possibly be?  When I say that a true Buddhist is a mystic, most are truly confused.  What is a mystic, you ask?  A mystic trusts only his experience and awareness.

A Buddhist doesn't ruminate or contemplate.  The only way to "know" is to be aware.  It is the act of "being", of simply being conscious of this moment.  The only way to "know" life is to be 100% fully in it.  One doesn't "get" life by simply regurgitating what prior masters said.  The only way to know life is to roll up your sleeves and live it.  It is not to be gained through mental fortitude, nor steadfast belief.  That would be no different from claiming to "know" math by having read a book on it.

So although I am nervous about dealing with customers again (that's definitely one nice thing about being in Test, we don't deal with clients directly), this is really something I needed to do.  I am looking forward to digging deeper, and being able to say, "what I have built, I understand".

Saturday, May 5, 2012

How do you make a language? Good question...

So here I am, trying to figure out how to make my own scheme-like language, but I am not really sure where to start.  I actually never took compiler theory in school, but Automata Theory was a required class.  But even though I did take automata theory, that was many moons ago, and I have since forgotten a lot about it.  I mean, what DOES it take to create a language anyway?

How much do I need to know about lexers and tokenizers?  Does my language have to be understood by an LALR parser?  An LL parser?  And how do I make a parser anyway?  Do I have to use something like YACC, Bison, or maybe ANTLR?  What about my EBNF forms; how do I know they are complete?  Does the language have to be a context-free grammar?  What does context-free mean anyway?  Is a context-free language different from a regular language?

And these are really more just grammar production and syntax questions.  What about creating control forms, concurrency support, tail-call optimization, etc etc that are features of the language?  Where does one even begin when trying to design a language?

That's why I've decided to look at the R6RS scheme reference as a starting point.  This language (which I am thinking of calling Shi, the Chinese transliteration of the Pali word Vijnana, which very loosely translates to "mind") won't actually be a scheme per se, but it will be scheme-like, just as clojure isn't exactly a lisp or scheme (however, the more I learn about scheme, the more clojure seems like a scheme, since it is a lisp-1 and more functional in nature).

However, that will only get me so far.  For example, what other features in the language need to be implemented?  What exactly is the goal of this language?  So I decided to list down some of the things I wanted to implement:


  1. Persistent data structures
  2. Lazy evaluation by default
  3. Dynamically typed by default, but with type hints
  4. Tail call optimized
  5. JIT'ed
  6. Support for continuation passing style
  7. Support for C FFI
  8. Some kind of concurrency support (debating between STM and message passing)
  9. Garbage collected
  10. lisp-1 style: lexically scoped with one namespace
  11. Hygienic macros only

Some features I'd eventually like to implement (but probably in libraries)

And that's for starters.  The design decisions above will impact the implementation.  From what I've read about LLVM so far, it looks like the LLVM IR will give me support for #5, #7 and #9 above.  It will also provide #4, but only on x86(_64) and PowerPC (but not the ARM...dammit).  #3 will be interesting, I'll have to think about how to do this (I'll probably peek at the Clojure source code, and see how they do this, as I think it's a pretty cool feature).  The immutable data structures will have to be provided at a somewhat higher level.  Although LLVM provides primitives for immutability, this is different from actually implementing persistent data structures.  Often, red-black trees are used to create associative arrays for example, but I still have to figure out how to make the structure persistent.
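
To illustrate what "persistent" means in practice (using clojure, since it already ships these structures), an "update" never touches the old version; the two versions share structure under the hood:

(def v1 {:lang "shi", :typed :dynamic})
(def v2 (assoc v1 :tco true))   ; "updating" returns a brand new map

;; v1 => {:lang "shi", :typed :dynamic}             ; the old version is untouched
;; v2 => {:lang "shi", :typed :dynamic, :tco true}

That structural sharing is exactly the part I'll have to figure out how to build on top of LLVM's primitives.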

Even just figuring out what goes into making a programming language has been pretty fascinating so far, and I have only scratched the surface.  Ultimately, what fascinates me is the theory of computation itself, and I hope that creating my own language based on scheme will give me a greater insight into the lambda calculus and computation itself.

How does scheme do tail call optimization?

Last night, I was curious how to implement tail-call optimization for Shi (the language I am going to work on).  I was curious how current scheme implementations did this.  Since many schemes are implemented in C, how do you do tail call optimization if C itself doesn't guarantee tail call optimization?

But first, what does TCO really do anyway?  And why do so many C(++) programmers lambast functional-style recursion?  I have to laugh when FW engineers at my company pooh-pooh recursion.  Unfortunately, engineers who are not familiar with other languages aren't even aware that it's not recursion that is at fault, but the lack of sophistication of the C(++) compiler.

The unenlightened think that all languages suffer stack overflows from deep recursion.  This is not true, however.  In brief, when a function call is made, a stack frame is allocated on the call stack, and the call stack has a limited amount of memory.  One of the duties of a stack frame is to hold a return address so that as one function call completes, the stack frame is popped off, and the program can return to where execution left off.  So in recursion without TCO, a new stack frame is allocated for every function call, and this is why you can "blow the stack".  This is one reason why C(++) programmers claim recursion is so bad.  In truth, I think most imperative-style programmers simply don't want to wrap their brains around recursion (or, if you think lazily, induction).  The other mythical reason imperative-style programmers claim recursion is bad is that procedure calls are expensive (because you have to push a new frame onto the call stack).  This too is erroneous.
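
Here's the stack-frame issue in miniature, in clojure (which makes the tail call explicit with recur rather than optimizing automatically):

(defn sum-to [n]
  ;; naive recursion: the + is still pending after the recursive call,
  ;; so every call holds a stack frame open
  (if (zero? n)
    0
    (+ n (sum-to (dec n)))))

(defn sum-to-tco [n]
  ;; an accumulator puts the call in tail position, and recur reuses the frame
  (loop [n n, acc 0]
    (if (zero? n)
      acc
      (recur (dec n) (+ acc n)))))

;; (sum-to 10000000)     => StackOverflowError
;; (sum-to-tco 10000000) => 50000005000000

So the stack concern is real for non-tail calls; the myth is that recursion per se is to blame.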

These are myths, even though they hold from a C(++) point of view.  But again, don't make the mistake of thinking that recursion or function calls themselves are bad; that belief may dissuade open-minded programmers from attempting to learn functional-style programming if they only listen to their unenlightened peers.  The D programming language, for example, DOES do tail call optimization.  But how did these myths come about in the first place?

In one of the important "Lambda Papers" by the legendary Guy Steele, the somewhat verbosely titled "Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO", Steele explains why function calls are not as expensive as believed.  As the title somewhat indicates, it gives a historical account of how function calls got a bad rap in the first place, and an interesting perspective on the attitudes towards GOTO (from way back in the day).  I recommend people read it.  But germane to this discussion, Steele debunks why function calls are considered "expensive".  Steele basically points out three things:

1) GOTO statements are "universal" control flow statements
2) GOTO's are cheap, because in machine code they are just a branch or a jump (as opposed to, say, a switch or case statement, which becomes many machine code ops)
3) Procedure calls are in essence GOTOs that can pass in arguments

Given the above three, function calls are therefore "cheap" and also become control flow in their own right.  Interestingly, the paper mentions that stack space does not have to be consumed when using this "GOTO" method for tail calls under lexical scoping (as opposed to the dynamic scoping of old-school lisp).  I presume this is because when variables are lexically scoped, the stack frame itself carries the reference(s) to the variables, as opposed to having variables passed around dynamically.  I could be mistaken on this point though.

So, TCO makes it possible to not consume a stack frame on every function call.  And although the paper hints at how to do this from an assembly point of view, that still didn't explain how schemes that use C as the Intermediate Representation perform TCO, since C itself doesn't guarantee TCO.  At first, I thought maybe they used setjmp/longjmp to save off the stack frame, and then on the recursive call use longjmp to go back.  The problem is the unwinding of the stack frames (which may point to no-longer-valid frames).  Still, it seems at least possible to do it this way.

I then came across something called a "trampoline", which I recognized from Clojure.  A trampoline in clojure can be used for mutual recursion; the trampoline described here is a function which "jumps" to other functions.  Also after reading this, I came across an abstract discussing how to use the heap to perform tail calls.
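
Here's what that looks like with clojure's built-in trampoline, on the classic mutually recursive even/odd pair (my own toy example):

(declare my-odd?)

(defn my-even? [n]
  ;; return a thunk instead of calling my-odd? directly
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

;; trampoline keeps invoking the returned thunks in a flat loop, so no
;; stack is consumed no matter how deep the "recursion" goes:
;; (trampoline my-even? 1000000) => true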

This is all pretty fascinating, and I wish I had read the lambda papers before.  I am even starting to read the legendary SICP book so that I can better understand Scheme.  I guess Clojure was kind of like the gateway drug into functional programming and lisps :)  It's even made me look a little at haskell....but first, I want to get Shi rolling.

Monday, April 30, 2012

Making a Scheme...the grand plan

Man I hope I don't jinx myself, but there's a pretty good chance I might be working in a new position soon and I won't be an SDET anymore.  Lest Mr. Murphy come and visit me, I won't say what I might be doing until all the i's are dotted, and all the t's crossed.  But I will say that I will be doing a lot more low-level coding again.

So, it looks like once again, I'll be switching focus on what I'll be doing for my hobby time.  I'll be getting knee deep into linux internals again and need to brush up on my C.  I'm also going to spend a little more time looking at the Minix source code.  But especially, I'll be looking at LLVM and Scheme, specifically PLT Racket.  Why all of this?  And isn't Scheme a higher level language?

Let me start with why I am looking at Scheme now.  Although Clojure is a pretty cool language (especially its Software Transactional Memory, which I haven't seen an equivalent of in any other Lisp), it's still in Virtual Machine land.  And unfortunately, Java isn't all that great at interfacing with low-level C shared libraries or OS APIs.  That's where Scheme comes in.  There are a couple of flavors of Scheme out there that have the ability to translate into C code (for example Gambit Scheme and Chicken Scheme).  And although both of those Schemes look kind of cool, I am currently looking at PLT Racket (formerly known as PLT Scheme).

I'm even reading a bit of the R6RS standard, in order to help me wrap my head around Scheme a little better (Clojure, though a relative of lisp, seems to me to be neither truly a Common Lisp nor a Scheme derivative...in essence, it seems to be its own branch on the lisp family tree, along with languages like Shen).  Although python is nice, and it is pretty easy, I want to learn a new higher-level language that will let me interface with C more easily.  Scheme fits this bill, but it does have the drawback I mentioned above....no easy concurrency support (and yes, I have read that continuations can be used for a kind of parallelism, but I am not sure they can be used for concurrency).

So this is where LLVM fits in.  LLVM is a set of libraries that provide the pieces of a programming language implementation: a front end (converting source code to an AST), an optimizer, and a code generator (to generate the actual binary machine code for a specific architecture).  LLVM is an interesting project, as it aims to help people write compilers, interpreters, or even JITs/VMs.  It does so by providing a "universal" Intermediate Representation called LLVM IR (I like to think of it as a universal assembly).  One could even create a whole new language with it.  And LLVM makes it trivial to call into the C ABI.

Do you see where I am heading with this?

R6RS Scheme standard
LLVM to create a JIT'ed language that can easily interface to C

There's one last piece of the puzzle...concurrency support.  Right now, the rage seems to be Erlang-style message passing (Actors) for concurrency, but STM is gaining a lot of traction (Scala is implementing not one but three STM libraries, haskell has 2 STM implementations, and pypy is experimenting with STM support).  I found a pretty interesting article by the C++ guru Bartosz Milewski on STM, including a link to an academic paper on Transactional Locking II.

Previously, I had toyed with the idea of implementing Clojure style syntax in D.  But the more I think about it, I realized it failed to satisfy several goals:

1) The stable dmd compiler only supports x86.  I want to work with ARM processors
2) The back end to dmd2 is not open source
3) The LLVM-based compiler ldc is not stable for D and is lagging behind
4) While it would make adding support for D modules easy, my real goal is support for C libraries
5) I would have to master both D syntax and lisp/scheme

So I decided to do something even harder:
1) Learn LLVM, including garbage collection and JIT byte generation
2) Learn how to make a lisp reader (it will be a LLVM based app )
3) Figure out how to implement STM in all of this
4) Slowly add in R6RS requirements (but not all)


Yeah yeah...I only have so much hobby time.  But creating a language is really something I've wanted to do for a LOOONG time.  I am getting close to paying off all my debts, which means I will be able to go and start on my Master's degree fairly soon.  And I really want to be able to design my own language.  Yup, I know, it's a dream of a lot of people, but I still think it would be cool.  But I also want it to be practical.

Some of the things that appealed to me about Clojure were:
1) The syntax seemed easier for me to read than other lisps and schemes (more than just the parens)
2) The appeal of built-in concurrent programming via STM
3) A more scheme-like functional approach (immutability as the default, for example)
4) Support for both JIT bytecode generation and an AOT compiled mode

So I definitely want to keep these in the toy language I will create.  But of course, there's one big hole in Clojure...good C/C++ library support.  Where Clojure can easily interface with Java, I want this language to easily interface with C (and ideally in both directions...but that might be too hard to support for now).  As a consequence, I want the language to support both a JIT'ed and AOT compiled (native) mode.

This is obviously a monumental task.  But I think it will force me to become a better Computer Scientist.  I will have to get better at many of the fundamentals of CS.  And yes, you DO need the things you learn about in school at work (choosing the right data structures, understanding complexity analysis to find poor algorithms, the ability to prove your solution is correct, etc etc).  Moreover, I am a firm believer that having learned several languages, and more importantly, different styles of programming has made me a better programmer.  When I was interviewing for my new position (at my own company), one of the interviewers noticed that I had Clojure down on my resume.  He seemed impressed and curious at the same time (he had heard of Clojure, and did a tiny bit of elisp, but thought lisp was too hard).

My feeling is that all the naysayers of lambdas in the upcoming Java 8 will eventually see their usefulness, instead of just decrying them as a "me too" feature for Java to catch up with C# on the bullet-point list.  But, much to my surprise, many engineers are loath to change.  I guess that's just one reason I consider myself a scientist rather than an engineer (engineers, after all, want stability, but scientists, in their quest for truth, must be willing to give up the old in order to learn the new).

Sunday, April 1, 2012

A Testing Manifesto for Hardware companies Part 1

I was looking through some of my posts and was looking at some of the "drafts" that I never published.  I thought I had published this one earlier, but apparently not.  I wrote this draft about a year ago, but I thought it should see the light of day :)

...

I've been an SDET at my company now for about 3.5 years, and I've either seen other companies or divisions and their test strategies, or have talked to other SDETs and Test Engineers from other companies and gotten an idea of what their companies' test strategies were like.  I have since come to several conclusions regarding how testing is done, and how it should be done.  What I will write here is primarily of interest to managers of Testing departments in hardware-oriented companies, as well as Test Architects, but engineers in the Test department should also find some use in what I have to say.

First off, let me begin with what testing should be:

  1. Even Hardware-centric organizations require enterprise techniques
  2. Hardware-centric organizations need to use fundamental tenets of good software engineering
  3. Use new but mature technologies suited for the task at hand
  4. Test Engineers are "true" engineers and should be treated as such
  5. Managers (Test and Development) need to understand what testing requires
  6. Don't mix up white box, black box, and acceptance testing
  7. Test Engineers and Developers have to work hand in hand
  8. Unit tests should be written by the developers (no "dev test")
  9. Requirements gathering should be an ongoing process
  10. Continuous Integration and Deployment is a must


Is your department not exhibiting some or all of these?  Perhaps you don't understand some of what I am talking about?  Or maybe (gasp), you think even if your department isn't exhibiting one or more of these traits, that it isn't important?  So, let me go into a little more detail into each of these issues, and explain why not following the above is harmful to your organization.  Then I will discuss just a few ideas on how to make sure your group is following the above.

Hardware oriented organizations don't understand enterprise level computing

Ok, I know, "enterprise" computing itself doesn't have a definition that's exactly set in stone.  But if your SCM or Process group doesn't understand "Software as a Service", "distributed computing", or web service technologies (or doesn't even understand remote services or remote procedure calls), then I submit that your organization doesn't understand enterprise-level computing.  When I say enterprise, I don't necessarily mean high-volume, high-transaction computing environments, but I do mean remote, distributed computing, with at least some level of persistence and data tracking/mining/relationships.

Even if you understand what enterprise computing is about, how would it benefit the test department?  Think about what a test group does.  It creates tests which are designed to expose defects in the hardware (or the software that controls the hardware).  There are many hidden assumptions in this seemingly simple responsibility.  You should immediately think of the following aspects:

  1. How are you reporting the results? (are you able to do data mining on results of tests?)
  2. How are users finding the test tools they need? (given a test case, is there a test tool for it?)
  3. How are users installing the test tool? (if they found the test tool, how do they install it?)
  4. How is a user supposed to know how to run the tool? ( Do you maintain elaborate documentation?  What arguments are supposed to be passed in for Test Case A versus Test Case B?)
  5. How are you discovering systems that can run tests? (Can you find systems not in use programmatically, so that you are executing 24/7?)
If your organization isn't linking test cases to test tools, then I submit that you are in chaos.  If your organization isn't automatically storing the results of test runs somewhere (hopefully a database of some sort), then you are missing great opportunities.  Being able to know which features in the hardware or software are associated with which test tool (and any other metadata required) is absolutely essential.  Think about what happens if you don't have this linkage.

Tester- "Hey Sean, is there a script for this test case I got assigned?"
Test Engineer- "What test case is that?"
Tester- "Ummm, let me see, it's ID 00716459"
Test Engineer- "Oh, that's the one to make sure the ioctl in the driver doesn't time out right?"
Tester- "yeah, but is there a program or tool for that?"
Test Engineer-"Yeah there is, let me go find the script on the common share drive"
Tester-"I already kind of looked there..."
Test Engineer-"Did you look under the Sean folder?"
Tester-"Yeah, but there were a couple of scripts that had similar names"
Test Engineer-"Oh yeah...you have to use the one with -version.1.3.5 in it"
Tester-"Oh ok."
Test Engineer-"And did you make sure you installed all the prerequisites on your test machine?"
Tester-"such as?"
Test Engineer-"Well, first you have to install..."


Clojure and OSGi/modularity?

Now that I am going full on with OSGi (and to a lesser degree, boning up on Java Servlets and maybe JavaServer Faces), I've been wondering about how Clojure will handle modularity.  Looking at the dev site for Clojure, there is a mention that Clojure should work with OSGi, and indeed, the Counterclockwise plugin for Eclipse apparently already does to some degree, but I noticed a few odd tidbits.

For example, if you look at the clojure dev site where it mentions modularity, it has an outdated reference to Project Jigsaw's scope (see here for more recent coverage of Jigsaw's scope).  According to several sources, Project Jigsaw is not just an OpenJDK endeavor, but will be in the official Oracle release as well.  And some people are saying that the modularity aspect (combined with the long-awaited closures) will make Java 8 the most revolutionary release of Java to date.

Since Clojure's documentation is out of date, I wonder if the clojuremeisters are planning for how this new Java 8 modularity might impact Clojure itself.  The good news is that if Clojure can support OSGi (and it kind of does, thanks to clojure.osgi), then it should work with Java 8, since Jigsaw is supposed to be backwards compatible with OSGi.

What I would love to do is make several bundles in Clojure, and then let the framework take care of them.  I suppose I could just do an AOT compile of Clojure into Java bytecode, but the really killer app to me would be if I could somehow embed a Clojure REPL as a bundle.  I might have to take a peek at Counterclockwise to see how they do it.  Hmmmm, a clojure nREPL, talking over websockets from a webpage....

All fancy ideas aside, all of this learning always comes at the expense of my "hobby time".  I suppose I should be thankful I have hobby time, but still, I'd prefer learning some other things.  I haven't done any clojure programming in a while, and I don't want it to get rusty.  Heck, I never even got to practice a couple of things in Clojure (like multimethods or macros).  With regular POJOs, I can just take a java jar and play with it in clojure.  But with OSGi bundles, I'm not quite sure how I can do that, since the bundle has to run inside the framework.
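
For plain jars, the clojure REPL workflow really is that easy.  For example, with any class already on the classpath:

(import 'java.util.zip.CRC32)

(let [c (CRC32.)]
  (.update c (.getBytes "hello"))  ; feed the checksum some bytes
  (.getValue c))
;; => 907060870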

Anyway, something to think about.  I definitely want a "scripting" language for the stuff I'm doing (man I hate that word, and all the connotations it brings).  So if time permits, I'll see if I can eventually put up a blog post about writing an OSGi bundle in clojure.

Saturday, March 31, 2012

Going to learn OSGi...see ya later clojure and D

So after yet another disappointing realization that I don't have the skills that most employers want, I have decided to forego brushing up on Clojure, or learning D...or OpenGL.  Sigh.

Unfortunately, I work in a kind of half-way world where my skills don't seem to be valued.  I am not a firmware developer or device driver developer (anymore), and I don't have enterprise skills that most other businesses want (my SQL knowledge is very basic and I know very little about web development either back end or front end).

So for the last 3 weeks or so, I have been learning OSGi.  I have made an Eclipse plugin before, but that was mostly just trial and error because the documentation was so bad (even the SWT documentation was pretty bad, but I somehow managed to make a JFace based plugin).  So this is the first time that I've really dived into OSGi, and I dived into it because of something I've been seeing at my workplace: non-modularity.

Spaghetti code has, of course, a bad connotation.  But I think some of its original meaning has been lost.  Spaghetti code is code in which codepaths and/or dependencies get so interwoven that it is no longer feasible or possible to change one thing without that change cascading into a lot of other code.  When you have non-modular code, you almost by definition have spaghetti code.  I have seen first hand now what happens when you only need some classes for a project, but because those classes have dependencies on other classes, and those classes on other classes (ad nauseam), you get into a monolithic all-or-nothing scenario.

I decided to learn OSGi precisely to combat this problem (despite the offending non-modular platform not being in Java).  What I am discovering is that it is harder to design applications than it is to code them.  Admittedly, I'm getting a bit stuck on Services at the moment, but to my mind, the hardest part is just structuring your application into modules in the first place.  I am also hoping that when Java 8 comes out along with project Jigsaw, it won't be too much of a transition for me to use a more modular approach to programming.  Also, I can finally (hopefully) figure out what Inversion of Control and Dependency Injection are.  I've already been looking at iPOJO in felix, and it seems interesting, though a bit confusing.  Component based frameworks have a lot in common with modularity, and they seem to work well together.

If there's one thing I've discovered being an SDET for the last 3.5 years, it's that there is definitely a need for engineers who can bridge the hardware/software/enterprise gap.  It astounds me that the SCM team doesn't understand what "Software as a Service" is.  Or what Service Oriented Architecture is.  Hell, even just trying to show what distributed computing is was a challenge.  I have had an opportunity now to see several organizations' "Test Automation Frameworks", and to say that they were....to put it politely...behind the times is an understatement.  State of the art for many test groups is just a pile of scripts written in a ton of languages, with no means to tie together Requirements -> TestCases -> Scripts -> automated installation -> automatic argument passing.  They just pass on by tribal knowledge that Script A is for TestCase B, and that you have to install a whole bunch of software on the system under test.  Oh, and the tester will have to manually fill in all those annoying little things, like when they started the test, when it ended, and if it passed or failed (not to mention manually uploading log files or other useful debugging info).

Being an SDET in a hardware company is a bit challenging, because I have to provide not just an automation framework, but an ad-hoc one too.  While the automation aspect mostly deals with the enterprise side of computing, the ad-hoc tool creation requires understanding hardware, firmware, and drivers.  Talk to many Computer Science grads today about interrupts, stack frames, logic analyzers, IOCTLs, SERDES, or the finer points of memory allocation...and they will probably just gaze at you with blank eyes.  The majority of CS majors I have met only took cursory C or C++ classes, and took neither Logic Design nor Microcontrollers.  Conversely, Electrical Engineers often don't understand the principles of good software engineering, and have to be dragged into using revision control, given an explanation of why writing 1000-line functions is not a good idea (inline them if you are worried about performance), told that creating unit tests is helpful, and told that copying and pasting code is usually a really bad idea.

I definitely think there is a place for people like me.  I'm not a device driver guru, but I have done it in the past.  Ditto for embedded firmware (I even used to write a little assembly).  I'm not a SQL master, but I can create simple tables with constraints.  I don't understand load balancing for web servers, but I can write a simple servlet for Jetty to enable web services.  But I think the most important skill I have is figuring out the technology that should be used.  Maybe it's because I'm not a master at any one thing, but because I know a broader swath of technologies than many engineers with 20 years of specialized experience, I know many ways to tackle a problem.

Perhaps one day, an employer will discover that it's not so much about the skills you have now, it's about how quickly you can adapt and learn, and how well an engineer can integrate all of his skills and knowledge together.

Sunday, February 19, 2012

Laziness in clojure. Or "Why didn't I see it was just induction?"

Ever have one of those "A-ha!" moments of clarity?  One of those, "ohh, so that's what Eureka means" events of enlightenment?

So people who have been doing functional programming for awhile, and those more astute than I will probably go, "well, yeah...duh", but I just had a great realization that creating lazy sequences is really just the inverse of recursion.  And what's the inverse of recursion?  Induction.

So, I'm not going to go into what induction is exactly (whether it be mathematically speaking from a weak or strong perspective).

I decided to take a function I've written recursively several times that is of (kinda) practical benefit and turn it into a lazy sequence instead.  Basically this is an interest calculator: given a starting principal, an amount saved per year, and an interest rate, it computes the balance.  For my recursive solution, I also pass in a number of years and count down from that year toward zero.  But for a lazy sequence, this isn't necessary.  This is because, like induction, you don't try to hit a base case.

First, let's look at the recursive solution, then compare it to the lazy one.

(defn savings
  [ base saving interest years ]
  (loop [ b base
          y years ]
    (if (= y 0)
      b
      (do
        (println "year =" y "base = " b)
        (recur (* (+ b saving) interest) (dec y))))))

I added a little print here so you can see how the balance accumulates.  But this is your basic tail-call recursive solution to a problem like this.  You would call it like this:

(savings 0 4000 1.07 10)


Which would calculate how much you would save in 10 years with a starting principal of 0 dollars, saving 4000 a year, at 7% interest (the 1.07 multiplier; yeah, pretty good I admit) for a total of 10 years.

But what about a lazy sequence solution?  And why would you want a lazy solution anyway?  Let's look at the lazy solution first.  In some ways, it is easier.


(defn lazy-savings
  [ base savings interest ]
  (let [acc (* (+ savings base) interest)
        b (+ base acc savings) ]
    (lazy-seq
     (cons b (lazy-savings b savings interest)))))

Notice the call to lazy-seq near the end.  This is what replaces the recursive call (in this case implicit recursion...notice I didn't use recur here) with a lazy one.  Without the recursion being wrapped in lazy-seq, there would be eager evaluation, meaning that lazy-savings would be called again immediately.  And since there is no base case, as there is in the savings function above, it would eventually blow the call stack.

But why is this solution nicer?  Because it allows you to do more interesting things with it.  Whereas the function savings only returns a single value, lazy-savings returns a seq of values.  Let's call it with the following:


user=> (take 10 (lazy-savings 0 4000 0.07))

You should get a seq of the yearly balances, beginning approximately like this:

(4280.0 8859.6 13759.772 ...)

(notice the interest rate is only .07 rather than 1.07)

See how you get all the values?  If you wanted the final value, you could have called it with last.

(last (take 10 (lazy-savings 0 4000 0.07)))

Or if you wanted the 5th year, you could have either used (last (take 5 (lazy-savings 0 4000 0.07))) or you could have done (nth (lazy-savings 0 4000 0.07) 4) (nth is zero-based).  In other words, having a lazily generated sequence is more powerful than the recursive version.
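
And because it's just a seq, it composes with everything else in the core library.  For example, to ask how many years it takes before the balance first tops $50,000:

(count (take-while #(< % 50000) (lazy-savings 0 4000 0.07)))

With the recursive version, you would have to rewrite the function to answer that question; here it's a one-liner.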


So the takeaway is two parts:
1) prefer laziness when possible
2) wrap the recursive part in lazy-seq, and don't worry about ending at the base case

Perhaps the last part bears a little more discussion.  Recursion and induction are really two ways of looking at the same problem.  Both involve a base case...a known condition.  Generally, in induction, you start with the base case and move forward, showing that if F(n-1) is true, then F(n) is also true (for the math purists: that's weak induction; strong induction assumes F(k) for all k < n).  With recursion, you normally "walk backwards" to the base case.  If, for example, you know that F(0) = 1 and you are solving for F(10), then you apply F(n-1) until you hit F(0), combining all the intermediate results.
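
In fact, clojure lets you write the inductive view down directly: start at the base case and step forward forever, letting the consumer decide where to stop.  Factorials, for instance:

(def factorials (reductions * 1 (iterate inc 1)))

;; (take 6 factorials) => (1 1 2 6 24 120)

Here F(0) = 1 is the base case, and each element is built from the one before it, which is exactly the inductive direction.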

Saturday, February 4, 2012

Implementing clojure...in D?

Well, if you read my last few posts, you know I've been looking at a systems programming language called D.  It's kind of a jack-of-all-trades programming language, but what I find interesting about D are the many features that you don't see in other systems programming languages in the C family (C/C++/Objective-C).  For example:

*lambdas
closures
nested functions
**const and immutable  (this is actually stricter than Java's final)
tail call optimization (Java might get this in Java 8)
***concurrency support (though no MVCC STM)
garbage collection
lazy evaluation of function args
true float, complex and imaginary numbers (ok, this is in other C family languages)

* C++11 does have support for lambdas, not sure about closures
**C++'s const is a huge confusing pain. D's seems a little simplified
***C++11 concurrency support appears (on my cursory examination) to consist of old-school locks and mutexes, albeit in a portable, language-native fashion

There's also work on an LLVM compiler for D.  This got me to thinking that it might be feasible to implement Clojure in D.  Having LLVM support would enable a JIT compiler for D code, just as clojure emits bytecode on the fly for the JVM.  Having true TCO, a more bare-metal approach, imaginary number support, real floating-point support, and even safer immutability might give it a leg up on Clojure itself...just as PyPy is even faster than canonical CPython.  Implementing Clojure in C or C++ would be much harder, I think, due to those languages' lack of certain features.

Now first off, I will be the first to admit that I don't have the brain power to begin such a project, though if someone else took up the mantle, I would gladly help.  I simply don't know enough about compilers, automata theory, grammars, AST, lexers, parsers, scanners, etc to go about creating my own language.  It's always been a dream of mine...but I just don't have enough knowledge on the subject to go about doing it.  When I finally pay off all my original school loans and finally try to get my Master's degree, I'll think about this as a project.  But for now, it's just a nice fantasy.

You might be asking why I don't think clojure, as-is, is good enough.  While high-level languages are great for building applications that essentially just get, manipulate, and update data in one form or another, when you have to get close to the metal and talk to the OS, they really are not all that hot.  When you need to get at drivers or system information, high-level languages like Java or C# (or even python or ruby) will leave you feeling frustrated.  Since I work as an SDET for a company that builds SAS controllers, I routinely have to deal with low-level issues at the driver level (or even the firmware level...of course at that level, you're pretty much stuck with C/C++ or assembly).

While tools like SWIG or JNAerator help, they leave a lot to be desired.  I would LOVE to have a language with the expressive power and flexibility of Clojure, but with the ability to make low-level calls into our many C/C++ libraries.  Yes, I am aware of JNA, BridJ, and SWIG.  I've even played with HawtJNI a little.  While they are nice, dealing with callbacks or going the opposite direction (from C calling Java) is problematic.  That's why hand-rolling JNI code, despite its difficulty, is in some ways still the best option.

Now admittedly, D doesn't natively understand header files, so it won't be a drop-in replacement.  But since D understands the same data types, it doesn't look like too much of a stretch to convert header files to D (though admittedly, it's tedious).  For example, Java's lack of unsigned data types kills me when I do JNI (not to mention how much of a pain JNI itself is).  Python's ctypes is probably the easiest way among the high-level languages to muck around with C shared libraries, but it is of course slow (though PyPy is helping in that area enormously).


This idea really has my brain itching, and I wish I knew more about how to get started (not to mention have the time to do it).  Not only would I have to learn all the aforementioned things about automata and compiler theory, I would have to basically become a guru in D and Clojure.  I've only scratched the surface of Clojure (I still haven't played with protocols or multimethods, and I've only made one toy macro).  And I am just now starting to learn D, and I can't wait for the book by Alexandrescu to arrive.


UPDATE: A thought just occurred to me.  I could start writing some of the persistent data structures in D, like clojure's persistent data structures.  This is something I could probably do now, and it would help solidify my understanding of data structures again.  So I am going to go about creating D versions of persistent maps, lists, etc.  I'll have to think about the sequence interface, and how I would implement that in D.  A wrinkle is that Andrei wrote about the disadvantages of only doing forward iterative algorithms (like clojure's seqs).  But I did see some examples in D of creating lazy containers, so at least I know it's possible to implement.

For reference, I will be looking at the clojure source code, and these:

Videocast of Rich Hickey on data structures
MIT's OpenCourseWare class that has a section on persistent structures
Andrei's article on using ranges instead of iterators

Friday, February 3, 2012

Using emacs and leiningen on Windows for clojure pt. 3: Getting SLIME'd

In the last post, I made a booboo when I wrote the dependencies for our project.  The logback groupId is actually ch.qos.logback, not just qos.logback.  So make sure you change your project.clj file accordingly.


(defproject MyFirstCljProject "1.0.0-SNAPSHOT"
  :description "FIXME: write description"
  :dependencies [[org.clojure/clojure "1.3.0"]
                 [org.jboss.netty/netty "3.2.6.Final"]
                 [ch.qos.logback/logback-core "0.9.30"]
                 [ch.qos.logback/logback-classic "0.9.30"]
                 [org.slf4j/slf4j-api "1.6.3"]])


Let's add our first source file.  Notice that in your MyFirstCljProject, there's a src directory, and under that, there is another MyFirstCljProject directory.  Leiningen is kind of like Maven in that the src directory is the root folder for your packages.  Leiningen, by default through the new command, created the top-level MyFirstCljProject directory.  This becomes the first element of your clojure namespace.  Still confused?
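
To picture it, here's the layout leiningen created (with the file we're about to add):

MyFirstCljProject/
    project.clj
    src/
        MyFirstCljProject/
            jinspector.clj    <-- the file we're about to create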

Let's say you wanted to create a namespace like:

MyFirstCljProject.jinspector

That means you would find a ../src/MyFirstCljProject/jinspector.clj file.  As another example, imagine you wanted a namespace of tools.networking.netty-client.  When you have a dash (-) in the namespace, you MUST have an underscore in the actual name of the file.  So you would have a file path of:

../src/tools/networking/netty_client.clj.

So now that we've got some namespacing under our belt, let's actually write that file.  Let's create our own version of a Java class inspector.  There is a clojure-contrib.repl-utils which does what I'll do here, but this is just for illustrative purposes.  Open a file in emacs with C-x C-f, giving it the path C:/users/your_name/MyFirstCljProject/src/MyFirstCljProject/jinspector.clj.


(ns MyFirstCljProject.jinspector)


(defn inspect
  [ x ]
  ;; Accept either a Class or an instance of one.
  (let [cl (if (class? x) x (.getClass x))
        fields  (.getFields cl)     ; public fields, including inherited ones
        methods (.getMethods cl)]   ; public methods, including inherited ones
    (doseq [member (concat fields methods)]
      (println member))))

;; Try it from the REPL: (inspect "hello") or (inspect java.util.Date)

Now that we have our function, let's actually try to use it...from SLIME!  The first thing we have to do is actually start our project.  Now, I've had some trouble using the swank-clojure command 'clojure-jack-in'.  This command is supposed to let you automatically connect to a leiningen project with a slime interface.  Unfortunately, I haven't been able to get it to work.

However, you can start it manually.  While your jinspector.clj buffer is the active one, do an M-x elein-run-task, and when it asks you for the task, enter 'swank'.  Once you do that, a new swank buffer should open.


Notice how the new buffer says "Connection opened on localhost port 4005"?  That's our cue that we can use slime to connect to the swank-clojure plugin.  You can do that with the M-x slime-connect command.  It will ask you for an IP address (use the default 127.0.0.1...that's your local machine), and also hit enter again to use the default port of 4005.  Once you do that, you should be greeted with a slime REPL prompt.


Now we can begin playing with the clojure REPL!!


Looking at the D language again.

Seriously, I need to stop reading so much on the web.  I've got the attention span of a 5-year-old in a candy store when it comes to programming languages.  Somehow, I don't know how, I stumbled upon GLFW, which is yet another library aiming to help with OpenGL (like libSDL, freeglut, or SFML).  It turns out it natively supports D, and that got me thinking.  One thing led to another, and I came across an interesting article on concurrency support in D by the renowned author and C++ guru Andrei Alexandrescu.

Another interesting aspect I discovered is the immutable support in D, and how it seems to mirror Clojure's default immutability (although in D, things aren't immutable by default...kind of like Scala, and that lack of default immutability has led people to say Scala isn't truly functional).  However, while Clojure uses Software Transactional Memory as its way of avoiding ugly locks around memory access, D uses a different approach in its memory model.  For example, by default, threads do not share data; sharing has to be made explicit.  It seems that D is using the more tried-and-true message-passing approach made most popular by erlang.  It also appears, at least to the PyPy developers, that STM may be able to remove the GIL in Python, so I wonder if the message-passing model is the best to use, given that clojure is using STM, and even Scala is adding an STM approach to concurrency above and beyond its actor model.

So now I am all the more interested in D.  It's basically a more modern alternative to C/C++...even C++11.  While C++11 does natively support threads, it still seems to use the traditional "lock memory access with mutexes" approach.  Templates seem easier, and of course, there's built-in garbage collection.

Since I am just starting, I am seriously considering using D and GLFW instead of C++ and SDL.  SDL does look more advanced than GLFW, however.  Plus, as I mentioned before, I don't want my C++ skills to rust away.  But D just looks a LOT nicer than C++ now.

Tuesday, January 31, 2012

C++11 kinda neat but...

So I have been futzing around with porting some of the OpenGL SuperBible examples to use SDL instead of GLUT.  It turns out the new SDL 1.3 is using the zlib license, which is much more permissive than the LGPL of SDL 1.2, and it can also create OpenGL 3.x+ GL contexts.

So I started writing some C++ wrappers to help instantiate SDL and create a GL context for me.  I forgot how atrocious C++ is.  I haven't seriously written any "real" C++ code in about 4 years (I wrote some C++ which basically looked like C, but I didn't use templates, namespaces, or classes).  I even forgot how to pass arguments to the parent class constructor.  And I am already dreading the whole copy constructor vs. operator= overloading business, using smart pointers, etc etc.

But the thing that really confuses me?  Makefiles.  I hate them, so I've been looking for an alternative.  I found waf a long time ago, and I think I'll use it, but it looks like it will have a pretty steep learning curve too.  What I like about it is that it is more generic: it isn't JUST a C/C++ build/configuration tool.  It's kind of interesting how the waf author handles dependencies.  Obviously he had to use a DAG to traverse them, but the python code he uses to automatically determine the dependencies is pretty cool, and I wish I had thought of that for the code I did at work.

So I've already begun the tedious process of learning waf, in addition to re-learning C++, the C++11 additions, libSDL, and of course OpenGL.  Oh, did I mention that the game logic is in Clojure and will do low-level network communication via Netty (on the clojure side) to a boost.asio asynchronous network layer (on the C++ side)?  To say that I have a lot to learn is a huge understatement.

Speaking of C++11, some of the features seem interesting, but some of them I also just don't understand.  I'm still trying to wrap my head around rvalue references and what it means to bind to an rvalue.  I am looking forward to auto and decltype, which may help me overcome my fear of template programming.  But still, I had to wonder...was there a better way?

So I started looking into Google's Go, and the D programming language.  Go seems to be quite a departure from C/C++, eliminating some strongly-typed aspects of the language.  The syntax is also a little funkier, but nothing too bad I think.  But I've already heard some people say that it's not exactly true that Go is a systems programming language, but rather a network programming language.  It's also still at a very early stage of development.  D, on the other hand, seems to be a more complete language.  It also seems to do everything C++11 does, including a few things that C++11 doesn't.  In fact, the more I looked at D, the more I liked it.  For example, it does automatic garbage collection (Go does too, btw), it has an easier-to-read template system, it has lambdas and closures, it has built-in complex numbers, built-in unit testing, and a few other nice features.

But...it has one major drawback (Go suffers the same fate).  D does not have a preprocessor and thus does not understand .h header files.  Sure, it can natively call C API libraries...but you have to define the types yourself.  So basically, you have to convert header files into a D module.  uggghh.  I've been through this before with python ctypes, JNA, and BridJ.  It's not fun.  And SWIG isn't much better.  With SWIG, you still have to make a .i file, and even if you use a "raw" header, the stuff it outputs is barely human-legible.

I'd love to use D, but I'd have to convert libSDL to modules (already done in the Derelict project...but it's using the 1.2 version of SDL, which doesn't do OpenGL 3+ contexts, and I don't know what version of OpenGL it supports).  Sadly, if a systems programming language isn't compatible with C (and I don't just mean being able to call functions in a library, I mean understanding all the types in a header file too), then it's just not going to see a lot of uptake.

Tuesday, January 24, 2012

Quick update

Wow, long time no post.  I've been busy over the holidays, so I haven't had a chance to do much coding.  It doesn't help that I bought a ton of old Twilight 2000 PDFs from rpgnow.com to keep me busy reading and reminiscing.  Hopefully this week, I'll put up the third post in the Emacs + Clojure + Leiningen series.

I have however lately been thinking about where to spend my free programming time.  I still want to do some clojure programming, and I will, but the pragmatic side of me also wants to get back into some C/C++ programming.  I've slowly been reading the OpenGL SuperBible book (almost done with Chapter 4), and my original idea was that I would try to port the C++ code to clojure (via the lwjgl library).  But I realized a couple of things...

1)  The graphics side would be slow
2)  I don't want my C/C++ skills to rust away
3)  Might be an opportunity to pick up some C++11
4)  I won't have to do any mental conversion of the code

The disadvantage of course is that I don't want all of my game to use C++.  I want the graphics side to be in C++ and possibly some OpenCL, but the actual game logic to be in Clojure.  So how am I going to accomplish that?  I could write a whole bunch of JNI so that the C++ code could call Java code.  Another option would be to do some IPC, and have the C++ code shuttle information and make method calls via some kind of RPC mechanism.

I already have a little bit of experience with Netty, and I even started writing my own messaging protocol to be communicated over the socket.  Although Netty was relatively painless, writing TCP socket programming in C/C++ is not fun...not to mention asynchronous or multi-threaded socket programming.  I could use boost's ASIO library, but that in itself looks like a pretty big learning curve.

So I'm still debating what to do.  JNI is tedious and error prone (usually requiring passing NIO byte buffers all over the place), and socket programming isn't fun.  I think I'll go the IPC socket route...but I am kind of dreading it.

But my decision to use C++ will be a time eater.  Not only will I have to refresh my skills (and pick up some of the new C++11 features), there's a ton of new libraries I will have to figure out, like libSDL, boost.asio, boost.threads, and of course OpenGL itself.  I also want to learn a new build system called waf.  Makefiles are horrible, and I don't want to have specific IDE project files for each OS.  In a nutshell, waf is a replacement for autotools-like functionality, but it is a bit complex.  That being said, I think its approach seems more reasonable than other tools like cmake or rake.

But, a thousand-mile journey begins with the first step.  I've already cloned the latest libSDL 1.3, built it, and made a simple OpenGL context window with it.  This will allow me to use SDL instead of the GLUT that the OpenGL SuperBible book uses.  A small step, but a step nonetheless.