
A Tester Should Know Better

So, I have cancer. It's a rare blood cancer called Multiple Myeloma. I was diagnosed back in December and quickly started induction chemo. Before I get too far into this, things are going well. The treatments have beaten down the cancer and I'm feeling OK.

Throughout the treatment process I've had lots of tests done: bone surveys, MRIs, PET scans, bone marrow biopsies, blood work, and more. MRIs, X-rays, and PET scans are all interpretive in nature. There are numbers and measurements (such as the size of a tumor), but the results are visual, harder to get a handle on, and may even require special software to view.

So along the way, I made a couple of mistakes that I shouldn't have made as a tester.

Mistake Number One: I Didn't Confirm What Was Important to the Stakeholders

Blood work is all about the numbers. I can't speak to the process for any other cancer, but treating and managing Multiple Myeloma (MM) is a lot about numbers. Lots of numbers. And going into things, I had no real idea which numbers mattered. Well, all the numbers matter, but only certain numbers matter when it comes to the cancer itself. Others are secondary, but related. And still more are there because they help identify other problems that might exist in the system (also known as my body).

Now, being a tester and a former QA/metrics guy, I started to explore the numbers to figure out what they meant. And when I started, I had no idea which numbers were meaningful. In a typical week, I had enough blood drawn to generate a page full of results, and some weeks there was a bunch more. Early on I was tempted to chart how many vials of blood they took (the high in one sitting was 12), but I decided not to.

Unfortunately, I didn't press the doctors (SMEs) from the start on what all the numbers meant. I got clues early on about some of them, but I didn't seek clarification. Some of them I knew from a layperson's perspective (white blood cell count, red cell count, and platelets were all names I recognized). After a while I picked up some more, such as the doctors' interest in my Absolute Neutrophils (which I learned are essential to fighting off bacterial and fungal infections). It turned out that this was a really important number to know about, as I was Neutropenic in late December when I suffered from Tumor Lysis and needed to be extra careful.

Over time I learned more. From MM podcasts and MM forums came the all-important "M-Spike". I realized I didn't know what mine was or where to find it. When I was in the office, it was mentioned as important for getting Myeloma under control (it should be zero), but we didn't confirm where to find it in my labs. Eventually some internet research paid off: once I knew what the M-Spike was (an abnormal concentration of monoclonal proteins from a single source), I was able to examine my various test results and find the value. The process of identifying the M-Spike involves interpretation by a pathologist, so it isn't automatically collected and inserted into a numerical field. Instead, it is part of a longer text field that explains the data. This also means that MyChart by Epic Systems can't treat it like numeric data and generate graphs of it changing (more on that later).
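As a small illustration of the problem (and one workaround for personal tracking), a script can fish the number out of the free text. The comment wording, units, and method name below are invented for illustration; they won't match real MyChart output:

```ruby
# Hypothetical pathology comment; real lab text will differ.
COMMENT = "Abnormal band present. M-spike quantitated at 0.35 g/dL in the gamma region."

# Pull the numeric M-spike value out of a free-text interpretation.
def extract_m_spike(text)
  m = text.match(/M-spike[^0-9]*([0-9]+(?:\.[0-9]+)?)\s*g\/dL/i)
  m && m[1].to_f
end
```

Once the number is extracted, it can go into a spreadsheet and be trended like any other lab value.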

On a completely unrelated note, you can become a Non-Secretor. This means that you have Myeloma and the cancer is growing inside you, but the version you have has decided to stop creating the monoclonal proteins that make it noticeable. Then blood work isn't as useful and you need more PET scans.

There are other numbers I found out were directly related to Myeloma, and others that are indirectly significant. Directly related are the Free Light Chains and the Free Light Chain Ratio. These are building blocks of antibodies that get out of whack due to the bad plasma cells created by Myeloma. There is a range these numbers should be in, and there is a ratio between the two that is normal. You pretty much want your M-Spike gone AND your kappa and lambda light chains in balance. If I were going to play with analogies, the M-Spike and Light Chain Ratio are a combination of direct bug reports and related issues in your system: you want them at zero, or at least under control.

Those indirectly related numbers? Well, those are your normal blood cells, neutrophils, uric acid, LDH, albumin, calcium, and other blood chemicals and cells. The normal blood cells get suppressed by the overgrowth of Myeloma cells, which leads to anemia and immune system weakness. Other blood chemicals can get out of balance due to cells being killed by the treatment. The doctors monitor all of these numbers so they can respond with corrective treatments when things get out of alignment. My weak analogy here is that the indirect numbers are a bit like the performance metrics (memory, CPU, disk access) for the system running your software.

Now I'm not the tester in this situation, but I'm a key stakeholder along with my treatment team. How often does a key stakeholder not really know what is going to deliver value? More often than you would guess. Thinking like a tester, though, what I should have done is take more time to ask what the numbers meant and what we wanted to see in the long run.

Mistake Number Two: Taking a Single Sample as an Absolute Truth

As I learned more about the numbers, I started asking about more of the tests. Eventually, I learned more about the results of my Bone Marrow Biopsy. The initial one found a lot of Myeloma. Additionally, they found that Myeloma cells were free floating in the blood, which isn't normal. Apparently they were pushed out into the blood from the very full marrow.

At the conclusion of induction chemo (treatment to knock down the Myeloma and prepare me for a stem cell transplant), we did another biopsy. The second biopsy was done in the same hip as the first. A few days later, I got the call from my nurse practitioner: they had found no sign of Myeloma in my bone marrow. Well, I got excited, I told my family about it, and everyone was very happy for me.

Well, as a tester, I should have remembered that this was one data point. Before you freak out, it's not bad news, just not as good as what I initially latched on to. A couple of weeks later I had to go in for more tests, and that is when I realized that my M-Spike was not zero but 0.35. I started at 4.06, so that is still a huge drop. In Myeloma terminology it's called a Very Good Partial Response, and it's the best that some patients get.

Later, a second PET scan showed no active clusters of Myeloma in my bones. This was another very positive result. Just like the clean bone marrow biopsy and the low M-Spike, it points towards a system that has been brought under control. Taken by itself, it would still be insufficient, but in combination with the blood work, skeletal survey, and biopsy, it does indicate a strong success. So I've still got some of those bad cells out there, just a lot fewer, and not clustered in big groups. But that's OK.

But lesson learned. As a tester, I should know better than to take one good result as gospel. Think about it: a single sample in one location can miss an important find somewhere else. This is why we don't rely on a single test or a single approach in software testing. We know there could be bugs hiding next to the spot that just worked for us. And we also know that even with our best efforts, we are not going to find every bug in the software. We need enough testing and enough data to establish the risk of releasing.

Not Mistakes, But Applications of the Tester Mindset When You Have a Medical Condition

So here are a few thoughts about what a tester mindset can do for you when you have a medical condition.

Don't Be Afraid to Question Things

Only one person has a single-minded focus on your case, and that is you. The doctors and nurses are treating lots of people. They are professionals and should be doing all of the right things. Sometimes, though, things happen. If someone is going to give you a treatment that doesn't sound right, ask about it. Be assertive. I was never about to be given the wrong drug, but people on the forums have been.

Remember, as a tester, it is your job to notify people of inconsistencies and risks. The stakes just aren't usually so high or so personal.

Communicate With Your Doctor

Just as a tester needs to communicate with their team, a patient needs to communicate with their doctor. The blood work, scans and X-Rays can only tell them so much. You as the patient need to communicate how you feel and how the treatments are affecting you. Quality of life is important and sometimes changes are needed.

Leverage the Tools Available To You

I have never been given a testing tool that did what I wanted out of the box, and the MyChart system from Epic, while pretty decent, is no exception. I've created my own spreadsheets to track the numbers, including the all-important M-Spike. I've also hooked up a crazy combination: a symptoms recorder built in Workflow, triggered by Launch Center Pro, that sends the data to Evernote and to a Google spreadsheet via IFTTT. So I've built my own dashboard and metrics tools to track things the way they make sense to me.

The point here is that whether you are testing software or tracking your health, leverage the tools you like and are comfortable with. You don't have to take the stock options and limit yourself to them.

In Conclusion

Learning you have cancer and quickly starting cancer treatment is not the same thing as testing insurance software or the next big app. In those initial days, I was focused on the prognosis, which thankfully was a lot better than the median survival rates that are published (they are based on older data). Then came absorbing the fact that I was starting chemo in a week, and then learning to adjust to the ups and downs of chemotherapy. Unless you are in the medical field, or have had to help a friend or a loved one through this, it's probably all new to you. It certainly was new to me. And it's not like we didn't ask a lot of questions. We asked a ton, but how we were going to measure and track the treatment wasn't one of them.

So it took a little bit, but now I have a better understanding of what is going on. I understand more about the myriad of numbers that come from the blood draws. As this is a new domain to me, I still have tons to learn. I've got a good team working on things, and I expect to have plenty of time to learn more.

And some remaining thoughts. First off, I don't intend to make this into a cancer journey blog. I deeply respect the people who have public blogs for that purpose, but that is not for me and this is not the place. We might see some more thoughts about the intersection of testing and my experiences, but I doubt there will be many. Second, I am doing well. It hasn't been easy, but it could be a lot worse. Finally, I am immensely grateful for the support my family, friends, coworkers, neighbors, and employers have given me and continue to give me. I'm a lucky man.

I have turned off comments on this post. I've had some cross-posting spam in the past.


Lessons Learned with Weld and Cucumber

I’ve recently started converting JUnit-style Selenium tests to Cucumber. The goal is not currently BDD, but writing example-driven tests that focus on the significant behaviors in the system. I’ll save my thoughts on the test authoring style for another day. Today I want to talk about my experiences converting the existing framework from being heavily singleton-based to using Dependency Injection with Weld.

The Pragmatic Programmers recently released The Cucumber for Java Book, which I have been following since the beta book was announced. It’s a very useful book if you are a tester/programmer in the Java world and are interested in using Cucumber. As an aside, I agree in principle with Adam’s recent post that you don’t have to write your tests in the same language as your developers; but in my case, I have a large number of Page Objects that I can reuse, and we use QueryDSL to generate database objects for interacting with the database of the system under test. So if you are like me, and you feel Java is the best choice for your tests, I highly recommend this book.

I’ve been aware of Dependency Injection (DI) for quite some time, but I have not really used any frameworks for it in my previous test automation. I was intrigued by the chapter in the book that covered using various DI frameworks with Cucumber, and by the fact that Cucumber is essentially DI-aware. Having no personal opinion on the available frameworks, I chose Weld, as that is what our developers use. Again, it is not required to use the same framework as your developers if you are not directly using their code. In my case, I chose Weld because they use it and are very willing to help with questions. Given the choice between learning on my own or leveraging their knowledge, I chose to leverage.

Challenge Number One: Becoming POJOs

Weld requires that the objects you use be POJOs, or Plain Old Java Objects. This means that each class should have a constructor with no arguments, or a constructor that takes a single argument where that argument is another injectable object. This meant architectural changes to the WebDriver wrapper, the logger, the database objects, and the page objects. All of these needed to change to make Weld happy.

In principle this wasn’t too hard to do, but it took some discovery to find the various changes I needed. And even then, it required some additional trial and error (and help from my developer) to identify classes that used public fields rather than setters and getters. Weld is rather opinionated about class design, and that’s OK. I just had to get used to identifying where our code didn’t match what it wanted.

Typical changes I made included:

  • Making sure that, instead of executing code in constructors, we used @PostConstruct-annotated methods. Weld injects objects after the constructor is called, so you wind up with NullPointerExceptions if you use injected fields in the constructor.
  • Converting public fields to private ones with setters and getters
  • Changing the page objects to skip validation steps that used to happen in the constructors, as the page objects are injected now. This means that the pages might not be active/visible when the constructor (and @PostConstruct methods) are called.
  • Converting some utility classes that were written using static methods into injectable POJOs instead, as they needed other classes injected into them (and figuring out a way around the circular dependency I created at one point).

These changes turned out to be reasonable; it just took a little education and effort to think them through. I’m liking the result. I haven’t had much trouble with the code since I figured my way around the places where it didn’t match what Weld expected.

Challenge Number Two: Dealing With a Little Classpath Hell

Back in the old days of Windows development, we all had to deal with the special joy known as DLL Hell. Back then you had C and C++ based libraries that might have the same name, but have different entry points and software would crash when you called a version of a method that was different or changed in the version of the DLL found by the application.

Well, the Java version is Classpath Hell, where different versions of JARs are used by the various JARs you depend on. Java is better about this than the DLLs of old, but you can still get caught. I was initially using Weld-SE 1.1.0 as the version of Weld in my POM file. But when I tried to run, I would get a NoSuchMethodError because the version of Google Collections I was getting didn’t have a method that an updated version of Selenium wanted. I went through the whole dependency tree and tried excluding Guava from other JARs with no success. Eventually my developer (who was extremely helpful every time I needed him) dug all the way into the Weld JAR, and we discovered that they had actually incorporated the Google Collections code into their code without renaming the packages. The result was that their version of the code was always found first. The fix was a newer version of Weld, but it was a bunch of headaches to get there.

Challenge Number Three: Learning the Quirks of Cucumber Lifecycle

Cucumber is very well thought out. I really appreciate the separation of concerns between the Feature doc, Step Definitions, Helper Code and the WebDriver. It makes for a great structure. It also enforces some rules and behaviors that were different from what I expected, and some of these quirks relate to how Cucumber works with Dependency Injection. That isn’t bad, but it still requires learning and then adapting.

A couple of key things I learned:

  • @ApplicationScoped classes are potentially recreated for each test. You may still need a singleton to ensure that a single browserDriver is created for the test run.
  • On a related note, I created a SharedTestData class to carry data between step definition files. This worked well enough, until I tried to use it in @Before and @After hooks. The SharedTestData was reset when the test ran, so values stored in the @Before didn’t make it to the @After. I created a separate SharedHooksData class to carry data between hooks.

Still Learning

There are still challenges waiting for me out there. I’ve just really started working through converting all of the page objects and creating the new tests. I’ve got a good start going and I feel like I’ve broken through the initial barriers. But I know that there will still be some behaviors I don’t expect in Cucumber and with Weld. I’m better able to handle them now than when I started. It doesn’t mean there won’t be some cursing and some frustration, but I’m up for the challenge. I’ll come back and document the other challenges later.


The Expert

At One Point, We’ve All Been the “Expert”

There’s a video making the rounds recently: a humorous take on the experience an engineer faces being the “Expert” in a sales meeting.

It’s a familiar scene, and one that many of us have experienced firsthand: the customer/client asks for something, and the technical person in the room is designated the “expert.” Joining the expert is a salesperson, account rep, project manager, or technical manager, who has complete confidence in the expert and assures the customer the expert can do it. Ultimately, the “expert” is sitting there trying to figure out how the heck to explain to the customer that transparent ink doesn’t exist or that the customer can’t violate geometry.

My own version of this involved me sitting in a conference room with a director, a vice president, the author of the original framework, and my recently appointed boss. I was the lead on an effort to port our WinRunner test framework to TestComplete, and the “expert” in the room. It was a successful port (though someday I need to share why it wasn’t a good framework). We were meeting with these executives to present our progress and plans for the future.

The author of the original framework and I explained to the executives how the new framework was ready and that we could start converting tests to run with TestComplete. Then we suggested that, in two years, we would have the whole team humming along and writing tests like nobody’s business. The reaction was immediate: “That’s unacceptable; it all has to be done in 6 months.” I am pretty sure my mouth hung open. My boss, who had zero background in test automation or testing, explained, “Dan doesn’t mean two years. Of course we can do it in six months.”

So, much like the woman asking for transparent ink, these executives were asking for something I knew was impossible. The framework, while a port, was still going to require a manual process to convert the tests, because the new tool was going to have problems we just hadn’t run into yet. I also knew we didn’t have a team of automators, but of manual testers who could be taught to automate. Finally, I was certain what they meant by “all” was that all of our manual tests needed to be automated in that time. My boss was agreeing with them without understanding it wasn’t going to happen. He later explained that I had put my foot in my mouth. I was mortified.

I’m now older and more experienced than I was then. I’m still not the expert I was billed to be or I billed myself as back then. So in retrospect, I realize their reaction should have been expected and understood. While there are exceptions, executives don’t realize how complicated test automation is. It’s not their job to understand test automation. They know they have only so much time, energy and money to throw at any problem. To hear it’s going to take two years until the team is writing tests on a regular basis and keeping up with the changes in the system, well that’s not what they pay “experts” for.

I look back on my time in the room as “the expert” and understand now that I didn’t focus enough on the role of the executives. I wasn’t prepared for their objections. What I would do now is talk about what is possible in the short term. I would suggest that in six months we could have a reasonable number of tests converted (backed with a certain amount of math and realism). It very well may not have been enough, but it would have been easier to defend.

My two-year prediction essentially proved accurate, but two years is an eternity in business. Neither of the executives was in the same role two years later; both had moved up into new or expanded roles. They probably don’t even remember the meeting, let alone me. I left the company within a few months of the incident, taking on a slightly different “expert” role for a consulting firm. That meeting, along with a few other events, made it clear to me that the place was not going in a direction I was confident in.

It’s hard to be in the room with someone who is paying your salary, or about to sign a big deal with your company, and tell them no. Saying no can be a career-affecting thing to do. Someone is probably waiting in the wings to say “yes” and get the job or the deal. And “no” isn’t necessarily the right answer. But when you are “the expert,” you had best be prepared to expertly identify and defuse unrealistic expectations when they come. “No” is not what executives want to hear, but there can be options available that are not “yes.” You need to learn to communicate your concerns in a way that allows the novices in the room to be open to change.

Parting Thoughts

A final comment about the salesperson and/or manager in the room: their role in this situation is critical. The executives in my case, and the customers in the video, didn’t seem to know that what they were asking for couldn’t be done. The manager who presents you as the “expert” had best be prepared to support you that way. Blindly agreeing with the customer to land the deal or protect their job, while ignoring the counsel of the person they claim actually knows how to draw lines or write automated tests, makes them the weakest link in the chain. If they really think you are the expert, then they should do their best to understand your concerns or reservations before agreeing to anything. And if they can’t or won’t do that, you may want to be someone else’s “expert.”


Modem Checker Script

A little while back I described how my internet connection problems led me to figure out how to track when my cable modem rebooted. That post was pretty light on real details, so this post exists to flesh them out. The code you will see here is not the flashiest, but it does the basic job of recording the modem logs in a database and alerting me when a reboot happens.

First Objective: Parse the Log Page

Motorola SB6141 cable modems include a web server that serves diagnostic pages. The list includes a page for checking overall modem status, another that shows signal strength, and yet another that lists the open source software that goes into the device. And then there is the log page.

The log page is an interesting thing. Tech support at Time Warner wanted nothing to do with it, and neither did Motorola. Apparently, the people that would use the page don’t talk to customers. Below is the log page from my modem:

Modem Log Page

Each row contains the time, severity, code, and diagnostic message for events that the modem detects. When I looked at this page on my modem, it was filling up every day with a variety of diagnostic events, including the reboot events.

All in all, it is a simple HTML table. Using open-uri and nokogiri, I wrote some code to fetch the page and extract the rows from the table:

require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open(url))
doc.xpath("//table/tbody/tr[not(th)]").reverse_each do |row|
  # extract the time, level, code, and message cells from each row
end

I know XPath is not always the nicest way to do things, but given the lack of styling in the table and the simplicity of the page, it seemed a reliable choice.

Logging the Data

My script logs each row from the table to a SQLite3 database. Note that I iterate over the rows in reverse order, as I want to pull them in from the bottom to the top. This lets me log the entries to the database in order, and I can compare the time on each row to the last logged time in the database to make sure I don’t double-log.

I assume the modem gets the time from the internet: every time it reboots, you see the date set to the start of the Unix epoch (1970), and then the modem updates to the current date and time. The modem fixes its internal clock before it is officially running again, so you can trust the first row with a current date.

if newer?(get_highest_timestamp(db), timestamp)
  db.execute("insert into log (timestamp, level, code, message) values (?,?,?,?);",
             timestamp, level, code, message)
  # push notification happens here
else
  # do nothing; the row was already logged
end

Every entry, except those stamped 1970, gets logged to the database. That information supplies the data for the Sinatra application I wrote before I added push notifications.[1]
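The `newer?` helper referenced in the snippet above isn't shown here; the following is a minimal sketch of one way to write it, assuming timestamps are stored as strings `Time` can parse. The helper name matches the snippet, but the logic is my assumption, not the script's actual implementation:

```ruby
require 'time'

# Hypothetical version of the newer? check used by the logger.
# A row is "newer" only if it parses to a real (post-sync) time that
# comes after the last timestamp already stored in the database.
def newer?(last_logged, candidate)
  t = Time.parse(candidate)
  return false if t.year == 1970 # modem clock not yet synced
  last_logged.nil? || t > Time.parse(last_logged)
end
```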

The Push

When I first implemented the logger, I could check the Sinatra app to see if the network had a problem when I wasn’t around to see it. That was cool and is still useful for analyzing trends, but we live in the iPhone age. Push notifications provide near instant updates wherever you are.

I found Pushover after hearing it mentioned on the Systematic podcast. It’s a push notification service with an API you can use in your own apps or scripts.

Fortunately, it is not necessary to key off the date change; there is a specific code for a reboot event, and the script just looks for that. The following code triggers a push notification when the script detects a “Z00.0” code (the reboot code) while saving the log page table contents to the SQLite3 database.

if code == 'Z00.0'
  pusher.push_notification("The modem rebooted at #{timestamp}.", "Modem reboot alert")
end

The Pusher

So, what is the pusher you see there? I’m using the Rushover gem to communicate with Pushover. I’m sure I could have written my own code to call the API directly, but Rushover makes the process just a little simpler.

However, while Rushover does make things easier, I prefer to wrap it in my own code. This allows me to make changes later (for example, if it changes in a way I don’t like, or if I decide to change push providers). I created a PushNotifier class to encapsulate the interactions with Rushover. The following code demonstrates how I send the push notification:

def push_notification_impl(message, title, current_time)
  unless is_during_quiet_period?(current_time)
    resp = @rushover_client.notify(@user_key, message,
                                   :priority => @priority, :title => title, :sound => @sound)
    resp.ok? # => true on success
  end
end
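The `is_during_quiet_period?` guard isn't shown in the post. Here is a minimal sketch of what such a check might look like; the overnight window (11 PM to 7 AM) is an assumption for illustration, not the actual configuration:

```ruby
# Hypothetical quiet period: suppress pushes between 11 PM and 7 AM.
QUIET_START_HOUR = 23
QUIET_END_HOUR   = 7

def is_during_quiet_period?(current_time)
  hour = current_time.hour
  hour >= QUIET_START_HOUR || hour < QUIET_END_HOUR
end
```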

You get your user_key from the Pushover website. You also need to create an application key for each application or script that you want to send distinct notifications from. I’ve hidden my applications’ keys in the following example of the Pushover configuration page:

Pushover Home Page

Running the Script

I’ve got a spare Mac Mini set up in the basement to run the script. I bought the Mini off a friend a number of years ago. It’s not too fast, but it just keeps ticking, making it the perfect box for the job.

A little research and experimentation enabled me to set up a launchd job to run the script every two minutes. Initially, I wrote the script with an infinite loop, but when I thought about it, I realized there were far more things that could go wrong that way. launchd also ensures that the script will run when the computer reboots, something my original version wouldn’t do, as I was running it in a terminal.
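For reference, a launchd job like this boils down to a small property list. Here is a minimal sketch; the label and paths are hypothetical, not the ones from my setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.modemchecker</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/ruby</string>
    <string>/Users/me/scripts/modem_checker.rb</string>
  </array>
  <key>StartInterval</key>
  <integer>120</integer>
</dict>
</plist>
```

Saved under /Library/LaunchDaemons (or ~/Library/LaunchAgents) and loaded with launchctl load, the StartInterval key runs the script every 120 seconds.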

And What Did I Learn?

I learned a lot.

First off, I learned how to do push notifications, a useful trick to have. It enabled me to create a second push notification setup for work, where I’ve been working the incoming bug queue. It isn’t the biggest pipe, so issues only come in a couple of times a day. And of course, I now get an alert whenever the modem reboots itself.

Second, I learned how to create a LaunchDaemon on OS X. While not something I need to do a lot, it is one more skill that I didn’t have a month ago.

Finally, I learned that sometimes the journey is more important than the destination. Soon after I got the notifications working, I switched out the modem with a replacement from Motorola. Since that time, the modem has not had a random self-reboot. Besides, sometimes we don’t have the time or the bandwidth in our day jobs to explore things that interest us. And sometimes, the things we learn can help us later on the job.

I made the entire script available in a gist. Feel free to make it your own.

  1. In a future post, I will talk more about my love of Sinatra.  ↩


A Script Rises From The Reboot

I love to hack around at things. And while I’m not one to do too much hardware hacking, I will hack around a little bit of Ruby here and there.

I’ve been working from home for over a year now. It’s a great experience for me. I don’t mind being by myself during the day, and I have plenty of camaraderie with my coworkers through chat and online meetings.

But working from home requires a stable Internet connection. Stable is key. Nothing is worse than having your Internet connection drop out just before you give your report in the virtual stand-up. My connection was dropping out regularly, and it was looking like it was going to be a big problem.

So I did the first thing you can do: I called the cable company. Three times they sent folks out. They adjusted cables. They replaced a splitter. And they told me I had a great connection. In other words, they were no help.

Along the way, I figured out how to check the logs on my Motorola SB6141 modem. I had purchased my own, as the Time Warner provided unit was a pain. So I started watching the logs to see what was happening, and I found that the modem was routinely rebooting with T3 or T4 timeouts. I was convinced that these issues were my provider’s fault, and I decided I wanted to track them, maybe so I could request a refund or something.

So, I opened up Sublime Text and started writing a script. I started out creating a simple scraper script to check the log page for updates. Using the nokogiri gem, I was able to write a very simple script that could parse the table. I then added some code to log that information to a SQLite3 database and dump the number of modem reboots to the command line. I set this script up to run in a continuous loop on an old Mac Mini I bought off my previous boss.

I was getting closer to understanding how bad the problem was; I could now watch for the number going up. I also got a little extra signal, since the database and the matching code were in my Dropbox account, and I got a notification on my main computer whenever the file changed and a new version was downloaded.

Still not satisfied, I turned to another bit of Ruby magic to make it easier to see the reboot data I was gathering. I created a new script using Sinatra to build a simple web app that would read data out of my SQLite3 database and show me the count. Then I found the GoogleVisualr gem and used it to generate a chart showing the reboots per day. Now I could watch for trends, though unfortunately I didn’t see anything I could correlate to the outside world.
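The aggregation behind a reboots-per-day chart can be sketched in pure Ruby (database and GoogleVisualr parts left out). The row layout below matches the log table described in the companion post, but treat it as an assumption rather than the script's actual code:

```ruby
# Count reboot events ('Z00.0') per day from logged rows of
# [timestamp, level, code, message], ready to feed a daily chart.
def reboots_per_day(rows)
  rows.select { |_time, _level, code, _msg| code == 'Z00.0' }
      .group_by { |timestamp, *| timestamp[0, 10] } # leading "YYYY-MM-DD"
      .transform_values(&:size)
end
```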

Google Chart of the reboot history

With my little script running all the time, I could tell when the network went down, even when I wasn’t home. To my dismay, I looked at the chart and saw times where the modem had rebooted a lot. The high was 31 times in a single day. Thankfully that was not a work day.

Still not quite satisfied with my setup, I decided to take it to a new level. I had heard about Pushover from some site and thought it sounded interesting. It’s a push notification app and web service with an API you can use. I also wanted to do some push notifications for work, so I bought the app and began experimenting. I soon found Rushover, a Ruby gem for using Pushover, and then I was in business. I built my own little wrapper around Rushover so I could manage how I used it, and added it to my script. Along the way, I changed my script from an endless loop to a LaunchDaemon service that runs every two minutes (some great tips on LaunchDaemons helped me here). Now I had a reboot-friendly system for logging all reboots and notifying me within two minutes of one happening.

So now I had a logging and push notification system in place to keep me aware of how bad my connection was. The problem was, the modem was still rebooting multiple times a day.

So I finally decided to call Motorola, since my modem was still less than a year old. After a frustrating discussion with a tech (who got an earful from me at one point), a new modem was going to be sent to me. He assured me that he did not think it was the modem. And after going through the completely inconvenient shipping options (with me shipping first and going without a modem for two weeks), I agreed to pay $5 for an advance replacement.

A week later, after the modem arrived and I had sufficient time to swap out the units and call Time Warner to get the new modem authorized, I was up and running again. Along the way, I took the splitter out of the equation, since we had stopped the TV part of our cable service. The modem hasn’t rebooted since.

So all along it appears to have been the modem, or maybe the splitter. Either way, the core problem has been resolved. My connection is back to its normal quality and not dropping out all the time.

Along the way, though, I turned the desire to get my Internet connection working into a practical excuse to improve my scripting skills by:

  • Learning nokogiri
  • Writing my first sqlite3 database code
  • Implementing push notifications that are under my control
  • Learning how to use launchd to run my modem checker script on a regular schedule
  • Learning how to generate Google charts in my Sinatra app

I had a lot of fun doing this. It increased my skills and eventually I solved the problem. In the coming days I will post a version of the scripts I’m using to my github account. They are fairly utilitarian, but maybe someone will get something out of them.