Thursday, May 30, 2013

The way of test

Here's another gem I found sitting in my drafts.  Not sure why I never published this one before, as it was basically complete.



Here are some of my own personal beliefs about testing and the way of test.  I have learned, from talking with others about how things were done at their former companies, that everyone has different ideas about how to test, what to test, and what constitutes "automation".


1.  Ad hoc testing is superior to automation in finding bugs
2.  Don't let developers tell you how to come up with tests (fox guarding hen house?)
3.  Don't get lost in testing types (unit, integration, functional, etc.).  The end goal is to find problems
4.  Test to fail, don't test to pass
5.  Development Driven Testing is the flip-side to Test Driven Development and just as important
6.  Test programs must follow a protocol
7.  Test programs must be versioned
8.  Test programs must be code reviewed
9.  Test Engineers shouldn't throw failures over the wall to developers without a first triage
10.  Test Cases must be repeatable
11.  Don't get hung up on writing tests before code
12.  Don't treat Test Engineers like testers

1.  Ad hoc testing is superior to automation in finding bugs
Unfortunately, test automation has become a buzzword, and managers think it will be a panacea for all their testing problems.  But the reality is that automation applies to only a fraction of a test plan, and automating really only helps you find regressions.  It does save time, however, and those time savings should be used to do more ad hoc testing.  Trying to automate everything is often a waste of time.  Instead, it is often more advantageous to create a framework that allows for rapid exploratory and ad hoc testing in which what was done can be recorded (and thus repeated).

2.  Don't let developers tell you how to come up with tests (fox guarding hen house?)
Sadly, many Test Engineers don't truly understand how the software (firmware or hardware) is supposed to work, and they rely too heavily on technical information from the developer.  What should happen is that there is a specification that both the developers and the test engineers read.  The spec is your bible.  By knowing the spec, you know the inputs and outputs, states and transitions of the system.  From this, you don't need a developer to tell you how to test something.  When a test engineer simply parrots back what a developer said his code is doing, that's not testing.
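To make this concrete, here is a minimal sketch in Python (the states, events, and transition table are hypothetical stand-ins for whatever your spec defines): every transition the spec documents becomes a positive test case, and every combination the spec omits becomes a negative one.

import itertools

# Hypothetical state/transition table copied straight out of a spec document.
# Keys are (current_state, event); values are the state the spec says we end in.
SPEC_TRANSITIONS = {
    ("IDLE",    "start"):  "RUNNING",
    ("RUNNING", "pause"):  "PAUSED",
    ("PAUSED",  "resume"): "RUNNING",
    ("RUNNING", "stop"):   "IDLE",
    ("PAUSED",  "stop"):   "IDLE",
}

ALL_STATES = {"IDLE", "RUNNING", "PAUSED"}
ALL_EVENTS = {"start", "pause", "resume", "stop"}

def generate_test_cases():
    """Yield (state, event, expected) for every combination: documented
    transitions must land in the spec'd state; undocumented ones must be rejected."""
    for state, event in itertools.product(sorted(ALL_STATES), sorted(ALL_EVENTS)):
        yield state, event, SPEC_TRANSITIONS.get((state, event), "REJECTED")

if __name__ == "__main__":
    for state, event, expected in generate_test_cases():
        print(f"Given {state}, when '{event}' arrives, expect {expected}")

Notice that none of this required asking a developer anything; the spec alone determined the test matrix.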

3.  Don't get lost in testing types (unit, integration, functional, etc.).  The end goal is to find problems

I see higher-level types (managers or architects) fall into this trap because they see testing in the abstract rather than dealing with it "in the trenches".  Generally speaking, everybody should unit test.  Anyone who writes code needs to unit test their stuff (including Test Engineers, SDETs, and Automation Engineers).  While some QAEs do black box integration tests and developers should work on acceptance tests with the customers, Test Engineers tend to be grey or white box functional/integration testers.  But the bottom line is that a test organization tries to produce higher-quality, more robust, and more efficient systems by minimizing the number of defects.

4.  Test to fail, don't test to pass
Unfortunately, there is often pressure within a Test department to make sure that a test passes.  Many organizations have rules where only so many defects of a certain severity may exist before launching, or before moving to a new phase.  And rather than risk the wrath of management by introducing another bug, little issues are swept under the rug by not testing certain things.  This can also manifest itself through simple laziness.  When a test passes...the tester or Test Engineer is done.  If the test fails, however, he will be given many dev drops to try out, which takes time.  It is far easier to just "pass" a test than to fail it.  The solution, of course, is to reward finding bugs and defects.

5.  Development Driven Testing is the flip-side to Test Driven Development and just as important
This is perhaps a new concept and might need explaining.  In TDD (Test Driven Development), the tests are written before the actual feature is implemented.  In Development Driven Testing, the code (or at least its behavior) should be understood before the test is written.  While validating against a spec is important, if that is all you go by, you will not catch any exceptions.  Specs are not code, and are thus, by nature, informal.  Only code is "written in stone", so to speak, and thus provable or testable.  You can't "test" a spec (at least not directly).  Without understanding the code, you won't be able to find its weak points, like invalid inputs or invalid access to globals or shared data structures between threads.

Also, if you don't understand how it works, how can you verify it?  Some tests are simple, but some can be very complex.  The validity of a result might rely upon more than a single return value: a C/C++ function might have a return value, but it might also store data through some pointer or reference which must also be examined.  A program that updates a database might insert one correct record, but it could also insert two.  Checking for this requires an in-depth knowledge that simply looking at a return value cannot provide.
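As an illustration, here is a minimal sketch in Python using an in-memory SQLite database (the add_record function and its table are hypothetical): the test checks the return value, and then independently queries the database to confirm that exactly one row was inserted.

import sqlite3
import unittest

def add_record(conn, name):
    """Hypothetical function under test: inserts a record, returns True on success."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return True

class AddRecordTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT)")

    def test_inserts_exactly_one_row(self):
        ok = add_record(self.conn, "alice")
        self.assertTrue(ok)  # the return value alone proves very little...
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)  # ...so also verify exactly one row landed

if __name__ == "__main__":
    unittest.main()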

6.  Test programs must follow a protocol
What I mean by this is that if your Test Engineers, Automation Engineers, and SDETs all write test programs in wildly different ways, the company will pay for it.  For example, does your test program simply use command line arguments?  Is it a GUI, and thus not easily automated?  Does your test program read a configuration file, and if so, is it XML, YAML, or a plain old INI-style file?  Test programs must likewise report success and failure in a standardized way.  Without standardizing on a way to find, install, and run a script, your Test organization will be in pure chaos.
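Here is one possible skeleton, a sketch rather than a prescription (the flag names, INI format, and exit-code meanings are simply one convention a team could agree on):

#!/usr/bin/env python3
"""Skeleton for a test program following a shared convention: command-line
flags, an INI config file, and standardized exit codes."""
import argparse
import configparser
import sys

# Agreed-upon meanings within the team; not an industry standard.
EXIT_PASS, EXIT_FAIL, EXIT_ERROR = 0, 1, 2

def run_test(cfg):
    """Replace with the real test body; return True for pass, False for fail."""
    target = cfg.get("test", "target", fallback="localhost")
    print(f"PASS: exercised {target}")
    return True

def main():
    parser = argparse.ArgumentParser(description="Example standardized test program")
    parser.add_argument("--config", default="test.ini", help="path to INI config file")
    args = parser.parse_args()

    cfg = configparser.ConfigParser()
    cfg.read(args.config)

    try:
        passed = run_test(cfg)
    except Exception as exc:  # infrastructure problem, not a product failure
        print(f"ERROR: {exc}")
        return EXIT_ERROR
    return EXIT_PASS if passed else EXIT_FAIL

if __name__ == "__main__":
    sys.exit(main())

Because every test program looks like this, a harness can discover, run, and interpret all of them the same way.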


7.  Test programs must be versioned
If your test programs are not versioned, how will you ever be able to do regression tests?  Or what if an OEM or customer wants you to reproduce an issue they are seeing with older software?  Test programs are software, and fall under the same software engineering principles as the actual code.  Furthermore, versioning should not be an afterthought.  Many headaches are caused by a lack of forethought about how to version a product.  How will you deal with "forks" of the code?  What about customer-specific versions?  How do you distinguish release versions from debug versions?
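One small sketch of what this can look like in practice (the version strings below are made up): have every test program carry its own version and stamp it, along with the build under test, into every result, so a run can be reproduced later against exactly the same pair.

# Sketch: the test program's own version, kept under source control and bumped
# on every change, is written into each result the program produces.
__version__ = "2.3.1"  # hypothetical test-program version

def report_header(product_build):
    """Header written into every result log so the exact test-program and
    product versions can be checked out again to reproduce an old run."""
    return f"test_prog={__version__} product_build={product_build}"

if __name__ == "__main__":
    print(report_header("fw-7.0.2-debug"))  # hypothetical firmware build string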

8.  Test programs must be code reviewed
Most enlightened companies understand the benefit of code reviews.  Code reviews catch bugs early, and the earlier you catch them, the better off you are.  They also help familiarize all the developers with everyone else's work.  And finally, I have noticed that they tend to help evolve a "group style".  All programmers have their own style, but sometimes styles are so widely divergent that it makes reading code harder (which is why having Coding Guidelines is helpful, though I don't consider them mandatory).

9.  Test Engineers shouldn't throw failures over the wall to developers without a first triage
Unfortunately, there seems to be no common standard for the difference between a QA Engineer, a Test Engineer, and a Software Engineer in Test.  However, most companies tend to have a black box test group and a white box test group.  The pure black box group is, in effect, an internal customer.  The internal test group, however, should do both black box integration/functional testing AND white box dev testing.  When a defect occurs, at the very least, a Test Engineer should ensure that it wasn't caused by a low-level issue (bad hard drive, intermittent network, invalid configuration or environment variables, etc.) or by their own script.  Better yet, they should dig deeper into the code.  This helps route the defect to the correct development team (for example, at my work, it might be a driver, controller firmware, or expander firmware problem).

Unfortunately, there are people who believe that all the Test department has to do is check test cases off of a test plan.  Debugging an issue takes time and thus, in their opinion, is not the Test Engineer's problem.  For white box (internal) test groups, this is a waste.  I posit that you can't truly know how to test something unless you know how it is supposed to work.  Black box testing alone is not sufficient, because black box testers will never see bad or wasteful code paths, nor will they have deeper insight into how to test what was coded.
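A minimal sketch of what a first-pass triage could look like, assuming a Python test environment (the specific checks, host, and environment variable are hypothetical): rule out the usual low-level suspects before filing a product defect.

#!/usr/bin/env python3
"""Pre-triage sketch: rule out environment problems before blaming the product."""
import os
import shutil
import socket

def triage_environment(required_env=("TEST_TARGET",), host="localhost", port=22):
    problems = []

    # Invalid configuration or missing environment variables
    for var in required_env:
        if var not in os.environ:
            problems.append(f"missing environment variable {var}")

    # Dead or intermittent network path to the device under test
    try:
        with socket.create_connection((host, port), timeout=5):
            pass
    except OSError as exc:
        problems.append(f"cannot reach {host}:{port}: {exc}")

    # Full (or failing) local disk
    if shutil.disk_usage("/").free < 1 * 1024**3:
        problems.append("less than 1 GiB free on /")

    return problems

if __name__ == "__main__":
    issues = triage_environment()
    if issues:
        print("Hold off on filing a product defect; fix the test environment first:")
        for issue in issues:
            print(" -", issue)
    else:
        print("Environment looks sane; continue triaging the failure itself.")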


10.  Test Cases must be repeatable
Normally, this is a requirement for automation, but it should apply to ad hoc testing too.  Doing an ad hoc test where you can't remember the steps or parameters you passed into a program or function is kind of useless.  Ideally, ad hoc tests become regular Test Cases, and a program should be written to cover them.  If you design your testing framework with repeatability in mind, you are halfway there.  An even better solution is a kind of macro recorder.  This is where using a language with a shell (REPL) is awesome.  If you can record the commands issued from the shell, even ad hoc testing becomes automatable.
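For instance, if the ad hoc shell happens to be Python's own REPL, the standard readline module can act as a crude macro recorder: everything typed in the session is saved and can be reconstructed later.  A sketch, assuming it is loaded via PYTHONSTARTUP (the history file path is just an example):

# Crude macro recorder for an interactive Python test shell: save everything
# typed during the session so the ad hoc test can be reconstructed and replayed.
import atexit
import os
import readline

HISTORY = os.path.expanduser("~/adhoc_test_session.history")  # hypothetical path

try:
    readline.read_history_file(HISTORY)  # pick up where the last session left off
except FileNotFoundError:
    pass

# On shell exit, write out every command issued during the session.
atexit.register(readline.write_history_file, HISTORY)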

11.  Don't get hung up on writing tests before code
Just as many Test Engineers don't truly understand software engineering, many software engineers don't understand testing.  One area that I still struggle with is writing tests before code.  I understand the reasoning:  if you write your unit tests first, you have effectively created formal requirements and a specification; plus, how else do you know whether what you built works?  But this presupposes that you know exactly what it is you are designing and building.  In my experience, software is an exploratory affair.  How can you test an experiment?  In science, you perform an experiment and then try to validate it with a theory.  I don't see some types of software design as being too different.

That being said, when it's applicable, by all means do TDD.  It can help you better design your interface, because until you write a test you may not even realize you need to expose something to validate the result.  It also guarantees that you will have unit tests, at least to some degree.  This is better than the "I'll just write unit tests when I have time" approach, because that time usually never becomes available.
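A small hypothetical example of the interface point: writing the test first is what forces the class below to expose a count that the test (and any later consumer) can use to validate the result.

import unittest

class BatchProcessor:
    """Hypothetical class under test."""
    def __init__(self):
        self.processed_count = 0  # exposed because the test needed a way to verify

    def process(self, items):
        for _ in items:
            self.processed_count += 1

class BatchProcessorTest(unittest.TestCase):
    # Written first: it only passes once the processor exposes processed_count.
    def test_processes_every_item(self):
        proc = BatchProcessor()
        proc.process(["a", "b", "c"])
        self.assertEqual(proc.processed_count, 3)

if __name__ == "__main__":
    unittest.main()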


12.  Don't treat Test Engineers like testers
Not to denigrate testers, but if you use your Test Engineers simply to execute Test Cases and don't give them the time to debug and investigate issues, or the time to write scripts to automate their test cases, you may as well have hired a tester for half the salary or less.  Some Test Engineers do like that, though, and you need to discover which ones just like to manually execute tests and which ones prefer to automate test cases, write test tools, or dig into the code to figure out what's going on.
