
Lessons Learned with Weld and Cucumber

I’ve recently started converting JUnit-style Selenium tests to Cucumber. The goal is not currently BDD, but writing example-driven tests that focus on the significant behaviors in the system. I’ll save my thoughts on the test authoring style for another day. Today I want to talk about my experience converting the existing framework from being heavily singleton-based to using Dependency Injection with Weld.

The Pragmatic Programmers recently released The Cucumber for Java Book, which I have been following since the beta book was announced. It’s a very useful book if you are a tester/programmer in the Java world and are interested in using Cucumber. As an aside, I do in principle agree with Adam in his recent post that you don’t have to write your tests in the same language as your developers, but in my case, I have a large number of Page Objects that I can reuse, and we use QueryDSL to generate database objects for interacting with the database of the system under test. So if you are like me, and you feel Java is the best choice for your tests, I highly recommend this book.

I’ve been aware of Dependency Injection (DI) for quite some time, but I had not really used any frameworks for it in my previous test automation. I was intrigued by the chapter in the book that covers using various DI frameworks in Cucumber, and by the fact that Cucumber is essentially DI aware. Having no personal opinion on the available frameworks, I chose Weld, as that is what our developers use. Again, it is not required to use the same framework as your developers if you are not directly using their code. In my case, I chose Weld because they use it and are very willing to help with questions. Given the choice between learning on my own or leveraging their knowledge, I chose to leverage.

Challenge Number One: Becoming POJOs

Weld requires that the objects you are using be POJOs, or Plain Old Java Objects. This means each class should have a no-argument constructor, or a constructor whose arguments are themselves injectable objects. This meant architectural changes to the WebDriver wrapper, the logger, the database objects and the page objects. All of these needed to be changed to make Weld happy with them.

In principle this wasn’t too hard to do, but it took some discovery to find the various changes I needed. And even then, it required some additional trial and error (and help from my developer) to identify classes that were created using public fields rather than setters and getters. Weld is rather opinionated about class design, and that’s OK. I just had to get used to identifying where our code didn’t match what it wanted.

Typical changes I made included:

  • Moving code out of constructors and into @PostConstruct-annotated methods. Weld injects objects after the constructor is called, so you wind up with NullPointerExceptions if you use injected fields in the constructor.
  • Converting public fields to private ones with setters and getters
  • Changing the page objects to skip validation steps that used to happen in the constructors, as the page objects are injected now. This means that the pages might not be active/visible when the constructor (and @PostConstruct methods) are called.
  • Converting some utility classes that were written with static methods into injectable POJOs, as they needed other classes injected into them (and figuring out a way around the circular dependency I created at one point).

These changes turned out to be reasonable; they just took a little education and effort to think through. I’m liking the result. I haven’t had much trouble with the code since I worked out the places where it didn’t match what Weld expected.

Challenge Number Two: Dealing With a Little Classpath Hell

Back in the old days of Windows development, we all had to deal with the special joy known as DLL Hell. Back then you had C and C++ based libraries that might have the same name, but have different entry points and software would crash when you called a version of a method that was different or changed in the version of the DLL found by the application.

Well, the Java version is Classpath Hell, where different versions of jars are pulled in by the various jars you depend on. Java is better about this than the DLLs of old, but you can still get caught. I was initially using Weld-SE 1.1.0 in my pom file. But when I tried to run, I would get a NoSuchMethodError, because the version of Google Collections I was getting didn’t have a method that an updated version of Selenium wanted. I went through the entire dependency tree and tried exclusions of Guava from other jars with no success. Eventually my developer (who was extremely helpful every time I needed him) dug all the way into the Weld jar, and we discovered that it had actually incorporated the Google Collections code without renaming the packages. The result was that Weld’s copy of the code was always being found. The fix was a newer version of Weld, but it was a bunch of headaches to get there.
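For the curious, the kind of exclusion I was attempting looks like this in the pom (the Selenium version shown is illustrative). It couldn’t work here, because Weld 1.1.0 had copied the Google Collections classes into its own jar rather than declaring them as a dependency, and exclusions only remove declared dependencies:

```xml
<dependency>
  <groupId>org.seleniumhq.selenium</groupId>
  <artifactId>selenium-java</artifactId>
  <!-- illustrative version, not necessarily the one I was using -->
  <version>2.42.0</version>
  <exclusions>
    <exclusion>
      <!-- no effect on classes bundled inside another jar -->
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```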

Challenge Number Three: Learning the Quirks of Cucumber Lifecycle

Cucumber is very well thought out. I really appreciate the separation of concerns between Feature doc, Step Definitions, Helper Code and the WebDriver. It makes for a great structure. It also enforces some rules and behaviors that were different than I expected. And some of these quirks had a relationship to how Cucumber worked with Dependency Injection. That isn’t bad, but it still requires learning and then adapting.

A couple of key things I learned:

  • @ApplicationScoped classes are potentially recreated for each test. You may still need a singleton to ensure that a single browser driver is created for the whole test run.
  • On a related note, I created a SharedTestData class to carry data between step definition files. This worked well enough until I tried to use it in @Before and @After hooks. The SharedTestData was reset when the test ran, so values stored in the @Before didn’t make it to the @After. I created a separate SharedHooksData class to carry data between hooks.

Still Learning

There are still challenges waiting for me out there. I’ve just really started working through converting all of the page objects and creating the new tests. I’ve got a good start going and I feel like I’ve broken through the initial barriers. But I know that there will still be some behaviors I don’t expect in Cucumber and with Weld. I’m better able to handle them now than when I started. It doesn’t mean there won’t be some cursing and some frustration, but I’m up for the challenge. I’ll come back and document the other challenges later.


The Expert

At One Point, We’ve All Been the “Expert”

There’s a video making the rounds recently: a humorous take on the experience an engineer faces being the “Expert” in a sales meeting.

It’s a familiar scene, and one that many of us have experienced firsthand: the customer or client asks for something, and the technical person in the room is designated the “expert.” Joining the expert is a salesperson, account rep, project manager or technical manager who has complete confidence in the expert and assures the customer the expert can do it. Ultimately, the “expert” is left trying to figure out how the heck to explain to the customer that transparent ink doesn’t exist or that the customer can’t violate geometry.

My own version of this involved me sitting in a conference room with a director, a vice president, the author of the original framework, and my recently appointed boss. I was the lead on an effort to port our WinRunner test framework to TestComplete, and the “expert” in the room. It was a successful port (though someday I need to share why it wasn’t a good framework). We were meeting with these executives to present our progress and plans for the future.

The author of the original framework and I explained to the executives how the new framework was ready and that we could start converting tests to run with TestComplete. Then we suggested that, in two years, we would have the whole team humming along and writing tests like nobody’s business. The reaction was immediate: “That’s unacceptable; it all has to be done in six months.” I am pretty sure my mouth hung open. My boss, who had zero background in test automation or testing, explained, “Dan doesn’t mean two years. Of course we can do it in six months.”

So, much like the woman asking for transparent ink, these executives were asking for something I knew was impossible. The framework, while a port, was still going to require a manual process to convert the tests, because the new tool was going to have problems we just hadn’t run into yet. I also knew we didn’t have a team of automators, but of manual testers who could be taught to automate. Finally, I was certain what they meant by “all” was that all of our manual tests needed to be automated in that time. My boss was agreeing with them without understanding it wasn’t going to happen. He later explained that I had put my foot in my mouth. I was mortified.

I’m now older and more experienced than I was then, though I’m still not the expert I was billed as, or billed myself as, back then. So in retrospect, I realize their reaction should have been expected and understood. While there are exceptions, executives don’t realize how complicated test automation is, and it’s not their job to understand it. They know they have only so much time, energy and money to throw at any problem. To hear that it’s going to take two years until the team is writing tests on a regular basis and keeping up with the changes in the system? Well, that’s not what they pay “experts” for.

I look back on my time in the room as “the expert” and understand now that I misjudged the role of the executives. I wasn’t prepared for their objections. What I would do now is talk about what is possible in the short term. I would suggest that in six months we could have a reasonable number of tests converted (backed by a certain amount of math and realism). It very well might not have been enough, but it would have been easier to defend.

My two-year prediction essentially proved accurate, but two years is an eternity in business. Neither of the executives was in the same role two years later; both had moved up into new or expanded roles. They probably don’t even remember the meeting, let alone me. I left the company within a few months of the incident, taking on a slightly different “expert” role for a consulting firm. That meeting, along with a few other events, made it clear to me that the place was not going in a direction I was confident in.

It’s hard to be in the room with someone who is paying your salary, or about to sign a big deal with your company, and to tell them no. Saying no can be a career-affecting thing to do. Someone is probably waiting in the wings to say “yes” and get the job or the deal. And “no” isn’t necessarily the right answer. But when you are “the expert,” you had best be prepared to expertly identify and defuse unrealistic expectations when they come. “No” is not what executives want to hear, but there can be options available that are not “yes.” You need to learn to communicate your concerns in a way that allows the novices in the room to be open to change.

Parting Thoughts

A final comment about the salesperson and/or manager in the room: their role in this situation is critical. The executives in my case, and the customers in the video, didn’t seem to know that what they were asking for couldn’t be done. The manager who presents you as the “expert” had best be prepared to support you that way. Blindly agreeing with the customer to win the deal or protect their own job, while ignoring the counsel of the person they claim actually knows how to draw lines or write automated tests, makes them the weakest link in the chain. If they really think you are the expert, then they should do their best to understand your concerns or reservations before agreeing to anything. And if they can’t or won’t do that, you may want to be someone else’s “expert”.


Modem Checker Script

A little while back I described how my internet connection problems led me to figure out how to track when my cable modem rebooted. That post was pretty light on real details; this post exists to flesh them out. The code you will see here is not the flashiest, but it does the basic job of recording the modem logs in a database and alerting me when a reboot happens.

First Objective: Parse the Log Page

Motorola SB6141 cable modems include a web server that serves diagnostic pages. The list includes a page for checking overall modem status. Another page shows signal strength. Yet another lists the open source software that goes into the device. And then there is the log page.

The log page is an interesting thing. Tech support at Time Warner wanted nothing to do with it, and neither did Motorola. Apparently, the people that would use the page don’t talk to customers. Below is the log page from my modem:

Modem Log Page

Each row contains the time, severity, code and diagnostic message for events that the modem detects. When I looked at this page on my modem, it was filling up every day with a variety of diagnostic events, including the reboot events.

All in all, it is a simple HTML table. Using open-uri and nokogiri, I wrote some code to fetch the page and extract the rows from the table.

require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open(url))
doc.xpath("//table/tbody/tr[not(th)]").reverse_each do |row|
  # process each log row, oldest entry first
end

I know XPath is not always the nicest way to do things, but with the lack of styling in the table and the simplicity of the page, it seemed a reliable choice.

Logging the data

My script logs each row from the table to a SQLite3 database. Note that I iterate over the rows in reverse order, as I want to pull them in from the bottom to the top. This lets me log the entries to the database in order. I can compare the time on each row to the last logged time in the database to make sure I don’t double-log.

I assume the modem gets the time from the internet: every time it reboots, you see the date reset to the start of the Unix epoch (1970), and then the modem updates to the current date and time. The modem fixes its internal clock before it is officially running again, so you can trust the first row with a current date.

if newer?(get_highest_timestamp(db), timestamp)
  db.execute("insert into log (timestamp, level, code, message) values (?,?,?,?);",
             timestamp, level, code, message)
  # push happens here
else
  # do nothing
end

Every entry, except those stamped 1970, gets logged to the database. That information supplies the data for the Sinatra application I wrote before I added push notifications.[1]

The Push

When I first implemented the logger, I could check the Sinatra app to see if the network had a problem when I wasn’t around to see it. That was cool and is still useful for analyzing trends, but we live in the iPhone age. Push notifications provide near instant updates wherever you are.

I found Pushover after hearing it mentioned on the Systematic podcast. It’s a push notification service with an API you can use in your own apps or scripts.

Fortunately, it is not necessary to key off the date change; the modem logs a specific code for a reboot event, and the script just looks for that. The following code triggers a push notification when the script detects a “Z00.0” code (the reboot code) while saving the log page table contents to the SQLite3 database.

if code == 'Z00.0'
  pusher.push_notification("The modem rebooted at #{timestamp}.", "Modem reboot alert")
end

The Pusher

So, what is the pusher you see there? I’m using the Rushover gem to communicate with Pushover. I’m sure I could have written my own code to call the API directly, but Rushover makes the process just a little simpler.

However, while Rushover does make things easier, I prefer to wrap it in my own code. This lets me make changes later (for example, if the gem changes in a way I don’t like, or if I decide to change push providers). I created a PushNotifier class to encapsulate the interactions with Rushover. The following code demonstrates how I send the push notification:

def push_notification_impl(message, title, current_time)
  unless is_during_quiet_period?(current_time)
    resp = @rushover_client.notify(@user_key, message, :priority => @priority,
                                   :title => title, :sound => @sound)
    resp.ok? # => true on success
  end
end

You get the user key from your Pushover account. You also need to create an application key for each application or script that you want to send distinct notifications from. I’ve hidden the keys to my applications in the following example of the Pushover configuration page:

Pushover Home Page

Running the Script

I’ve got a spare Mac Mini set up in the basement to run the script. I bought the Mini off a friend a number of years ago. It’s not too fast, but it just keeps ticking; making it the perfect box for the job.

A little research and experimentation let me set up a launchd job to run the script every two minutes. Initially, I wrote the script with an infinite loop, but when I thought about it, I realized there were far more things that could go wrong that way. launchd also ensures that the script will run when the computer reboots, something my original version wouldn’t do, as I was running it in a terminal.
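The launchd job itself is just a property list. This is a minimal sketch; the label and paths are hypothetical stand-ins for my real ones, and StartInterval is in seconds:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label and script path -->
  <key>Label</key>
  <string>com.example.modemchecker</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/ruby</string>
    <string>/Users/me/modem_checker/check_modem.rb</string>
  </array>
  <!-- Run every two minutes -->
  <key>StartInterval</key>
  <integer>120</integer>
  <!-- Also run once when the box comes back up -->
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```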

And What Did I Learn?

I learned a lot.

First off, I learned how to do push notifications, a useful trick to have. It enabled me to create a second push notification for work, where I’ve been working the incoming bug queue. It isn’t the biggest pipe, so issues come in a couple of times a day; now I get an alert whenever a new one arrives.

Second, I learned how to create a LaunchDaemon on OS X. While not something I need to do a lot, it is one more skill that I didn’t have a month ago.

Finally, I learned that sometimes the journey is more important than the destination. Soon after I got the notifications working, I switched out the modem with a replacement from Motorola, and since then it has not had a single random self-reboot. But we don’t always have the time or the bandwidth in our day jobs to explore things that interest us, and sometimes the things we learn on a side project can help us later on the job.

I made the entire script available in a gist. Feel free to make it your own.

  1. In a future post, I will talk more about my love of Sinatra.


A Script Rises From The Reboot

I love to hack around at things. And while I’m not one to do too much hardware hacking, I will hack around a little bit of Ruby here and there.

I’ve been working from home for over a year now. It’s a great experience for me. I don’t mind being by myself during the day, and I have plenty of camaraderie with my coworkers through chat and online meetings.

But working from home requires a stable Internet connection. Stable is key. Nothing is worse than having your Internet connection drop out just before you give your report in the virtual stand-up. My connection was dropping out regularly, and it was looking like it was going to be a big problem.

So I did the first thing you can do: I called the cable company. Three times they sent folks out. They adjusted cables. They replaced a splitter. And they told me I had a great connection. In other words, they were no help.

Along the way, I figured out how to check the logs on my Motorola SB6141 modem. I had purchased my own, as the Time Warner-provided unit was a pain. So I started watching the logs to see what was happening, and found that the modem was routinely rebooting with T3 or T4 timeouts. I was convinced these issues were my provider’s fault, and I decided I wanted to track them, maybe so I could request a refund or something.

So I opened up Sublime Text and started writing a script. I started out creating a simple scraper to check the log page for updates. Using the nokogiri gem, I was able to write a very simple script that could parse the table. I then added some code to log that information to a SQLite3 database and dump the number of modem reboots to the command line. I set this script up to run in a continuous loop on an old Mac Mini I bought off my previous boss.

I was getting closer to understanding how bad the problem was: I could now watch for the number going up. I also got a hint whenever it did, because the database and the matching code were in my Dropbox account, and my main computer notified me whenever the file changed and a new version was downloaded.

Still not enough, I turned to another bit of Ruby magic to make it easier to see the reboot data I was gathering. I created a new script using Sinatra to serve a simple web app that reads data out of my SQLite3 database and shows me the count. Still not enough, I found the GoogleVisualr gem and used it to generate a chart showing the reboots per day. Now I could watch for trends, though unfortunately I didn’t see anything I could correlate to the outside world.

Google Chart of the reboot history

With my little script running all the time, I could tell when the network went down, even when I wasn’t home. To my dismay, I looked at the chart and saw times where the modem had rebooted a lot. The high was 31 times in a single day. Thankfully that was not a work day.
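The chart’s per-day counts boil down to grouping reboot timestamps by date. Here is a stdlib-only sketch of that aggregation; the method name is mine, and the real script pulls the counts from the SQLite log table, but the grouping idea is the same:

```ruby
require 'time'

# Illustrative stand-in: count reboots per calendar day,
# the same shape of data the chart is built from.
def reboots_per_day(timestamps)
  counts = Hash.new(0)
  timestamps.each { |t| counts[t.strftime('%Y-%m-%d')] += 1 }
  counts
end
```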

Still not quite satisfied with my setup, I decided to take it to a new level. I had heard about Pushover somewhere and thought it sounded interesting. It’s a push notification app and web service with an API you can use. I also wanted to do some push notifications for work, so I bought the app and began experimenting. I soon found Rushover, a Ruby gem for using Pushover, and then I was in business. I built my own little wrapper around Rushover so I could manage how I used it, and added it to my script. Along the way, I changed my script from an endless loop to a LaunchDaemon service that runs every two minutes (I picked up some great LaunchDaemon tips along the way). Now I had a reboot-friendly system for logging all reboots and notifying me within two minutes of one happening.

So now I had a logging and push notification system in place to keep me aware of how bad my connection was. The problem was, the modem was still rebooting multiple times a day.

So I finally decided to call Motorola, since my modem was still less than a year old. After a frustrating discussion with a tech (who got an earful from me at one point), a new modem was on its way to me, even though he assured me he did not think it was the modem. And after going through the completely inconvenient shipping options (with me shipping first and going without a modem for two weeks), I agreed to pay $5 for an advance replacement.

A week later, after the modem arrived and I had sufficient time to swap out the units and call Time Warner to get the new modem authorized, I was up and running again. Along the way, I took the splitter out of the equation, since we had dropped the TV part of our cable service. I haven’t had a reboot since.

So all along it appears to have been the modem, or maybe the splitter. Either way, the core problem has been resolved. My connection is back to the normal quality and not dropping out all of the time.

Along the way, though, I turned the desire to get my Internet connection working into a practical excuse to improve my scripting skills by:

  • Learning nokogiri
  • Writing my first SQLite3 database code
  • Implementing push notifications that are under my control
  • Learning how to use launchd to run my modem checker script on a regular schedule
  • Learning how to generate Google Charts in my Sinatra app

I had a lot of fun doing this. It increased my skills and eventually I solved the problem. In the coming days I will post a version of the scripts I’m using to my github account. They are fairly utilitarian, but maybe someone will get something out of them.


My Year as a Shut-In Tester

This week I celebrated my one year anniversary at my job. Last December I said goodbye to going into a traditional office, and hello to a small team of professionals that I only see face to face once a year. That’s right, I joined the world of the home-worker, the remote worker, the telecommuter. And thus I have spent a year as a shut-in.

OK, I’m kidding. I’m not a shut-in. It’s true that I don’t leave the house to go to work, but I do leave the house. I just don’t have to do it every day.

Seriously though, home-working is a challenging and rewarding thing to do. It helps that I work with a great bunch of folks. It helps that the company has a strong concept of life-work balance. It helps that I consider myself an introvert and I don’t mind being by myself a lot.

I’m not an expert on working from home, but I’d like to share a few thoughts about my experiences:

  • Working from home means not having to commute. Not that my old commute was long, but I still don’t miss it.
  • Working from home does not mean working without pants. I get up, get the kids going, exercise, shower, shave and get dressed; just like if I was going to go somewhere. Of course the dress code at my house forces me to wear a lot of Woot Shirts.
  • It gives me the flexibility to get the kids on the bus and be there when they get home.
  • I can run out for groceries at lunch sometimes (see I leave the house) and either make dinner or have stuff in the house for my wife to cook.
  • Working from home requires discipline. No one is watching you to remind you to be on track. You have to be responsible.
  • I can listen to any music I want to, without headphones. Well, if the kids are home I try to keep it clean. I have very sensitive ears and have never found headphones I really liked.
  • Reliable internet is essential. It is very frustrating that my cable modem reboots 1–2 times a day. I have written a script to log it. I even have a chart (and it’s not pretty).
  • Working from home is not enough by itself. Your employer has to support it well and your co-workers need to understand it. Where I work, we’re all remote, so we all know what it is like.
  • Work/life balance is hard in today’s always-connected world. Don’t let the fact that you work from home let work take over. I am fortunate to have a boss (one of the company owners) who snarked at me when he caught me reading email on a Sunday night, and my direct boss is very concerned that folks take their comp time.
  • It’s a little scary taking a job with someone that you don’t meet face to face first. I think if I hadn’t known my boss through collaborating on FitNesse, I would have been too freaked out.

If you find the right opportunity, and you can deal with lots of time by yourself, there are a lot of good things about working from home. Do your research. Check out the company and be sure of what you are getting into.

Also learn more about home-working. There is a great podcast called Homework that has a lot of great advice for telecommuters and freelancers. When I decided to take this job, I started listening to their back episodes and have been a listener ever since. It really helped me prepare for some challenges and is full of great tips.

It’s been a great year. I haven’t even discussed the fact that I’ve transitioned from a technical, non-testing role that I enjoyed a lot, back into a tester in the trenches. I’ve enjoyed that a lot, as that is a part of my career that I hadn’t focused enough on in some time. I’ll talk more about that in another post, which I hope to get to soon.

And finally, a big “thank you” to my wife. When this opportunity came along, she fully supported me making the jump to a new way of working.