I have some detractors… It’s my fault; I drew them out with my own detraction of a blog post I saw elsewhere, and they came to repay the favor, I suppose. The detractors come in varying degrees of anti-automation philosophy.

Rather than talk theory or throw pseudo-data around, I wanted to give a real-life case study: how automation and BDD, done right, can save the day for QA.

Some back story:

I started out as a front-end dev many years ago at Warner Bros., and while there I switched to become a QA tester.  After that, I went into QA engineering at Yahoo and elsewhere.  Yahoo was a very technical company, as was eHarmony.  At eHarmony I learned a lot about service architecture, code, deployments, automation, NoSQL solutions, and a variety of other things.

I started non-technical, and ended up writing my own code, building deployment strategies, creating automation frameworks, etc.  It’s been an interesting journey and I am not afraid to “roll up my sleeves and do the dirty work of manual testing.”

I know there are these characters who run around calling themselves automation QA and refusing to do any manual testing.  That’s not cool.  But at the same time, QA needs to have the focus to write code and to test code.

After I left eHarmony, I got a job at a company that had no QA team at all.  I took on the role of QA Lead.  During the interview, I was asked, “How would you approach a problem, where there is no QA?”

I answered that with, “I would treat it as an automation problem.  First I would quickly build out an automation framework, and then get as much of the code base captured into it as I could, so I could handle quick turnarounds on regression.”

That’s my honest answer, and it has greatly benefited the company as well as myself.

Automation Goals

I had the automation framework up and running by the end of Day 1. By the end of the first week, I had a local install of Jenkins running and working with the automation tests.  By the end of week 2, I had the entire sprint coverage automated.
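For context on what a day-one framework can look like: the post doesn’t spell out the stack beyond Cucumber and Jenkins, so this layout is an assumption based on a typical Ruby/Cucumber project, not the actual setup.

```
features/
  login.feature          # Gherkin scenarios (Given/When/Then)
  step_definitions/
    login_steps.rb       # steps stitched to UI element IDs
  support/
    env.rb               # browser driver configuration
Gemfile                  # cucumber, capybara, selenium-webdriver
```

Once a skeleton like this runs a single trivial scenario end to end, adding sprint coverage is mostly a matter of writing more feature files and step definitions.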

Automation detractors tend to say that a focus on automation takes away from manual testing.  But it doesn’t have to. If done right, it should only enhance the manual testing and exploratory testing.  In fact, manual and exploratory testing should be done within the automation process itself.

Example:

When I started my most recent job, I looked at their QA situation.  Knowing little of their application, I started with this process:

  1. I got Cucumber up and running
  2. I went through the previously written tests from the Business Unit and met with them to get an idea of the application workflow.
  3. I translated their current sprint’s tests into Given / When / Thens that I would later put into Cucumber.  They had a classic step-by-step test plan (1. do this, 2. now do this, 3. do this… 4. you get this result).  I converted all of that into BDD.
  4. Back in Cucumber, I pasted the Given / When / Then scenarios into the feature files.
  5. Then I looked at the UI I would be testing.  For each step of the G/W/T, I would go through it in the UI.  I would manually test it (manually running the test plan itself), and then get ideas for new tests (exploratory testing).  As I got new ideas, I added more G/W/Ts.
  6. Finally, I would stitch the Gherkin language elements (Given/When/Then) to the actual element IDs in the UI.
  7. I wrote out sign-off strategies and best practices
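As a concrete illustration of steps 3–6: the application and fields below are invented, not the actual product. A classic “1. do this, 2. do this, 3. you get this result” plan becomes a scenario like:

```gherkin
Feature: User login
  Scenario: Registered user signs in
    Given I am on the login page
    When I enter a valid username and password
    And I click the "Sign In" button
    Then I should see the dashboard
```

Stitching those steps to element IDs (step 6) then happens in the step definitions. A sketch in Ruby with Capybara, where every element ID is an assumption:

```ruby
When('I enter a valid username and password') do
  fill_in 'username', with: 'qa_user'    # assumed element id
  fill_in 'password', with: 'secret123'  # assumed element id
end

When('I click the {string} button') do |label|
  click_button label
end

Then('I should see the dashboard') do
  expect(page).to have_css('#dashboard') # assumed element id
end
```

The payoff of this split is that when the UI changes, only the step definitions need updating; the business-readable feature files stay stable.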

By the second week, I had:

  1. Built out the Automation Framework
  2. Captured all the previous sprint work in automated tests
  3. Configured the tests to run via Jenkins
  4. Triggered Jenkins to run parallel tests in multiple browsers and began looking into future Grid solutions.
  5. Filed bugs/defects into their process and gave input on improving the processes they had in place.

This gave me the flexibility to kick off an ad-hoc regression in all browsers.

Does that mean I’ll rely only on this automation in the future? Certainly not!  I continue to manually run through the site… a good automation engineer has to, in order to automate the stories.   The team comes up with new stories every two weeks. That’s more work: a) write Given/When/Then test plans, b) manually test, c) automate to cover future regression.  It can seem daunting if you think of these as separate processes, but the way I do it, it’s all one process.  It’s all being done at the same time!

We have a lot of future goals, like moving Jenkins to a server and integrating the test runs with each dev commit.  But for now, the QA side greatly helps me, being the only QA representative in the company. 

If I were just doing manual testing, sure, I could breeze through their sprints, doing the testing in multiple browsers and spending my remaining time on exploratory testing…   But where would that leave us later on?  What happens when it’s crunch time and I really need help, when I need to regress all our past sprint work and also cover a ton of new tests turned over to QA late in the life cycle?

Regression is the bane of manual QA.  It becomes a chore, and it wears down the QA resources.  I’ve seen it create what I call “test blindness” in manual testers.  At my previous job, I saw testers hit the same test they’d seen a dozen times, having to run each test in five browsers or more… and they would either cut corners or simply become blind to an obvious error.

By adding automated UI regression, we greatly increase the quality of the deployments, just as adding unit tests greatly increases code quality.

Approaching the Automation

Automation should be approached with the same QA mindset as manual testing.  You have a new feature (say, a web form that captures data).  You think, “OK, this should work by inputting data and hitting save…”  Sure, but then you think, “What happens if I pass in French, special characters, symbols, Portuguese, or Korean? How does it handle whitespace?”  These same exploratory questions are asked and tested during automation test creation.
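Those exploratory input questions map naturally onto a Cucumber Scenario Outline, where each question becomes a row in the Examples table. A sketch, with an invented form and invented field names:

```gherkin
Feature: Data capture form
  Scenario Outline: Form handles international and edge-case input
    Given I am on the data capture form
    When I enter "<value>" into the name field
    And I hit save
    Then the record should be saved containing "<value>"

    Examples:
      | value             |
      | Éloïse Lefèvre    |
      | José Magalhães    |
      | 김민준             |
      | !@#$%^&*()        |
      |   leading spaces  |
```

Each new exploratory idea is one more table row, so the exploratory pass and the regression suite grow together instead of competing for time.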

As these are captured, the tests can be run and re-run, freeing the QA person up to look into more tests for the Sprint, other compatibility issues, other exploratory tests, etc.

What Works

To do this effectively, a company needs to hire a QA lead who knows how to set up automation as well as give QA the time and resources necessary to accomplish this.

At one previous job, I was called onto emergencies almost every day.  It was so insane that I couldn’t do my day job, let alone find any time for automation.  The heads of the company would say “automation is our priority” until it wasn’t (which was every other week), and they would have me doing manual testing of a P0 bug fix or an urgent requirement change.

QA needs to have focus.  If you have to have a separate team for automation (in a highly political organization), then that’s a solution.  But where I’m at now, they give me respect and let me lead this process.  That’s what has really worked for me.

To Sum It All Up

You can’t rely on Automation to do everything.
You can’t rely on Manual Testers to catch it all.

For me, I found a bridged solution, where one process creates both automation and manual testing.
