Sunday, May 22, 2011

Why Test Engineers need to be good developers too

It would seem that in many organizations, "automation" has become something of a buzzword.  It's a term that makes upper-level managers happy, but which often confounds many others, including Architects and lower-level managers.

In order to achieve automation, especially on products which are GUI based (either as a thick client or some web-based mechanism), there is a tendency to want to use "capture/replay" tools.  In essence, these are tools designed to capture mouse and keyboard events, save them, and replay them later.  While this sounds great, and I am not against such tools as part of the overall test strategy, I am strongly against them being the SOLE method of testing.

I have read many sites that discuss the dangers of capture/replay tools, but they usually just harp on the difficulty of maintaining the "scripts" (the files that hold all the captured events): if the GUI interface changes, the script becomes useless.  While this is a valid point, it is not, in my opinion, the most serious shortcoming.  The most serious shortcoming is that this style of testing relies on no knowledge of how the product actually works.

Remember that I said Test Engineer, and not Quality Assurance Engineer, in the title of this post.  A QAE could rightfully say that he only does black-box testing and should see the system as a customer would.  Test Engineers, on the other hand, should do some white-box testing, and should be familiar with the inner workings of the system being tested.  Why is this beneficial?

If you don't know how it's supposed to work, how are you supposed to know whether you have a correct answer?  I have often given the analogy that Test Engineers are teachers, and Developers are students.  Test Engineers must grade what the Developers do.  If the Test Engineer himself doesn't know the answer, that is akin to a teacher asking the student what the answer should be before grading the test.  Letting developers have this level of control over the validity of a product is no different than letting the fox guard the hen house ("oh, that's 'working as designed', we MEANT for it to behave that way").

The second big reason you want Test Engineers to be developers and know how the system works is first-level triage.  If your Test Engineers have no clue how the underlying system works, how can you expect them to debug it?  Is the problem in the driver?  The firmware?  Maybe it's somewhere higher up in the stack, in a framework or application they are using.  Or what if it's a low-level physical problem (jitter, noise, etc.)?  If all your Test Engineers do is execute tests from a Test Plan, without understanding the interactions of the various components at least to some level, they will be ineffective at first-level debugging, and that burden will fall to a developer.  The danger here is that an incorrect defect assignment wastes a developer's time (for example, if the Test Engineer assumed the defect was a firmware problem, but it turned out to be a driver problem).


And finally, how is a Test Engineer supposed to come up with new Test Cases or ad-hoc tests if he doesn't understand the system from a low-level perspective?  At best, he can read a spec and determine what should be tested, but he won't necessarily know how to stress a feature, nor will he necessarily know how to come up with invalid inputs (negative testing).  Being able to "see" the weak points in a product requires knowing how that system works.  Many famous martial arts masters of the past were also great healers of their time, because by understanding how the body worked, they were much better at knowing its weaknesses.  The same logic applies to testing.

To give you a real-world example, suppose you want to use sg_utils to send generic SCSI commands.  This is all well and good, but suppose something doesn't go right.  Where is the problem?  If you don't know what a CDB (Command Descriptor Block) is, or how to read a SAS trace, what good will it do you?  At best, you can throw the problem over the wall to a developer to fix.  That is ok for a QAE, but not for a Test Engineer.

And as I mentioned in my previous post, part of the reason Test Engineers get little respect is that developers see them as little more than testers.  Although Test Driven Development has gained some traction, notice that it focuses on developers writing better unit tests.  The concept of Development Driven Testing has unfortunately not caught on.

Monday, May 16, 2011

Test Engineers don't get any respect

Had I known the stigma of belonging to the Test group of a company, I probably never would have become a Test Engineer.  Now technically, what I do isn't what most Test Engineers do; although that is my official title, I am closer to what would be called a Software Development Engineer in Test.


What amazes me is the attitude people have regarding the skill set of Test Engineers.  While I am not a guru in any of these languages, I have written production code in C, C++, Java, C#, Perl, Python, SQL, and LabVIEW.  I have used (but again, am no master of) a broad array of technologies, including SOAP (using Apache CXF), XML-RPC (using Apache ws-xmlrpc), Swing, SWT, and some small Eclipse plugins, just to name a few.  So although my knowledge isn't necessarily deep, it is broad.


When I first got to my new job, I was talking with one of the developers.  I mentioned that at my previous job, I was a developer.  He looked at me quizzically and said, "so you demoted yourself to a Test Engineer?".  I had a puzzled look as well, because I had never been a Test Engineer, nor had I worked for a Test department before. Over time however, I grew to understand why such denigrating attitudes exist, and I will explore them in later blogs.

Nevertheless, I essentially had to prove that I could indeed program and even design relatively large-scale projects by myself.  Developers seem aghast when, during code reviews, I point out things that surprise them: for example, code paths where they forgot to free a malloc'ed pointer, or why you shouldn't write 1000-line functions (and god forbid they cut-and-paste a function from somewhere else).  They also seem aghast when I counter their feeble argument that breaking a function into too many sub-functions would cause overhead by telling them to A) profile it and B) inline their functions.

Still, there is some truth to the notion that many Test Engineers are not all that technically savvy, and so I don't blame developers for thinking that at least some Test Engineers are technically deficient.  And this realization has made me determined to NOT be a Test Engineer again.


Just to relate one last story: I knew another Test Engineer who had to call a company to purchase a product that would be used by his test framework.  After some discussion, the sales rep basically asked to speak to the developers in charge, or to someone who knew what this framework really did.  The great irony was that the test framework had been designed and programmed by the very Test Engineer the sales rep was talking to.


Test Engineers... we get no respect.