Making the Case for the Cutting Edge at LambdaConf 2015

Yesterday, May 23rd, I gave my first-ever conference talk at LambdaConf 2015. This was a surreal experience, to be honest, and I'm immensely thankful to John and everyone else on the LambdaConf team for giving me the opportunity to talk about our experience at Elemica with choosing Scala and Clojure as our primary tech stack.

In the interest of transparency, I'll note that I honestly left the talk feeling that it didn't go so well. I want to share this because I know a lot of people who have recently gotten into speaking too. Maybe your first experiences were awesome, but if you felt like it was a bit rougher than you expected, you should know you're not alone. Even with the level of preparation I put into it - which is more than I put into any of the talks I'd given at user groups before - I think my nerves got me a little bit. After the talk a few people came up and gave me some encouraging feedback. One of them was Mike Kelland of BoldRadius, to whom I owe a huge public thanks. I was a lot less bummed after our conversation.

But, all said, I don't think I flubbed anything. I said what I came there to say, and I was heard. And that's a big deal, especially for someone who, given the choice, prefers to communicate in writing. A few parting thoughts from this experience:

  • In the future, new talks will debut at user groups. Something I'm learning as I start to speak publicly is that I don't get a real sense of how a talk is going to play until I give it to a live audience, mostly because how I deliver something and how the audience responds becomes a cascading feedback loop. Rehearsing to myself doesn't have that quality and just doesn't give me the same amount of information with which to revise the talk. So, I'm going to make a point of giving a talk at a local user group before delivering it at a conference.
     
  • Soft talks can be oddballs at programming conferences. LambdaConf was a very concrete conference - in the sense that most of the talks were about concrete ideas or concepts that you could sink your teeth into at an engineering level. My talk, in contrast, focused more on soft concepts - the human side of the choice to pursue Scala and Clojure. In the future I'm going to try to be more conscious of that as I send in proposals.
     
  • I, once again, have a new respect for speaking. The conference context kind of upped the ante on my respect for people who get up in front of an audience and deliver a talk. I'm going to try and be more actively encouraging to people who do that in the future, and you should too. It takes a lot out of you. Not joking. I crashed at 9pm last night. My body tried to crash at 6:30. Oof.

It was an incredibly challenging, but worthwhile, experience. This won't be the last conference talk I give. Like everything else it's an iterative process and I'm my biggest critic. And if you're interested in what I was talking about — click here to check out my slides. They're also available on the Speaking page of this site.

Finally, I enjoyed attending and meeting a bunch of other folks who are also passionate about functional programming. I got to hear a lot of great talks and meet some top-notch people. I got exposed to the idea of a virtual filesystem, learned a bit of Haskell and Erlang, and got to experience beautiful - if rainy - Boulder. Looking forward to coming back next year - maybe as a speaker again, if I have something to speak on and John and the team will have me. ;)

Onward and upward!

Excluding Components from an Aggregate SBT Build

While working on upgrading Databinder Dispatch's liftjson module to use Lift 2.6 I ran into an interesting challenge that Nathan, the project's maintainer, hadn't found a good solution to yet. Specifically, here's the situation: Dispatch is built for a bunch of Scala versions. The liftjson module won't build on Scala 2.9.3 because we (the Lift Committers) never issued a build of Lift for 2.9.3.

First, a bit of background. Dispatch is actually a collection of a few different projects and uses sbt's aggregate feature to group them all together. Any command you issue to the aggregate project (named "dispatch-all" in this case) is executed on all the projects that make up the aggregate, so you can do things like type "+publish" and have every version of every module published. This works great - until you're in the very situation I was in, where one component of the aggregate won't build for certain Scala versions.
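For anyone who hasn't worked with the aggregate feature, here's a rough sketch of what such a build looks like, written with sbt 0.13-style project definitions. The core and liftjson module names are just illustrative - Dispatch's real build defines more modules than this - but the root project really is called "dispatch-all":

```scala
// Two hypothetical modules that make up the larger project.
lazy val core = project.in(file("core"))

lazy val liftjson = project
  .in(file("liftjson"))
  .dependsOn(core)

// The aggregate project: any command issued here (e.g. "+publish")
// also runs on every project listed in aggregate(...).
lazy val `dispatch-all` = project
  .in(file("."))
  .aggregate(core, liftjson)
```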

So, I started wondering what sbt might have to offer if I needed to disable building one component of a larger aggregate only for specific versions of Scala. Lo and behold, sbt does have a skip setting that allows you to bail on the compile phase. As with most things involving sbt, this took a bit of beating my head against the problem before I came up with this Magical Incantation of Building™ that allows us to skip the compile phase only for Scala 2.9.3!
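In spirit, the incantation boils down to keying the skip setting off of the Scala version. A minimal sketch, assuming sbt 0.13-style syntax (the exact lines in Dispatch's build may be phrased differently):

```scala
// Skip the compile task for this module when cross-building against 2.9.3.
skip in compile := scalaVersion.value == "2.9.3"
```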

Overjoyed, I started tinkering around and realized pretty quickly that something was wrong.

It seems that while trying to build the other modules for 2.9.3, the liftjson project was not being exempted from the update phase where sbt tries to resolve all of the dependencies of the project. This was, of course, an issue since there was no 2.9.3 build to find. And, as I found out the hard way, you can't skip the update phase or sbt just bails on the entire task with an error. So what are we to do?

Hm.

After some finagling, I settled on this:
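What follows is a sketch of the shape of that fix rather than the exact lines from Dispatch's build - the lift-json coordinates and version number are assumptions on my part, but the case statement is the important part:

```scala
// Pick the lift-json dependency based on the Scala version being built.
libraryDependencies += {
  scalaVersion.value match {
    // There is no lift-json artifact for 2.9.3, so lean on the 2.9.1 one.
    // This exists purely to keep sbt's update task happy; the artifact
    // will never be used at runtime.
    case "2.9.3" => "net.liftweb" % "lift-json_2.9.1" % "2.6"
    case _       => "net.liftweb" %% "lift-json" % "2.6"
  }
}
```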

The above tells sbt that when it's trying to find a lift-json build for 2.9.3, it should just use the build for 2.9.1. Now, 2.9.3 and 2.9.1 are not binary compatible. You should never do this if you're actually going to use the library in question. But as you can see in the comment above the case statement, the entire purpose of this is just to make sbt happy. It will never actually get used in a runtime environment, so we can completely ignore the fact that this probably wouldn't work.

And verily, I had a version of Dispatch's Lift-backed JSON module compiling and passing its tests. (There was some other work around converting the tests to Mockito that happened, too, but that's for a different blog post.) Belatedly, I decided to do what every sbt victory lap involves: running publishLocal to see it in action.

Then tears followed.

Despite my instructions not to compile the module for 2.9.3, sbt was still publishing the module for 2.9.3. Insert rage flipping of tables here. That said, this fix proved to be much easier to come by than the first. We just need to disable the publishArtifact setting when the Scala version is set to 2.9.3. So, we do that like so:
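Again, this is a sketch of the shape of the setting rather than Dispatch's exact line:

```scala
// Only publish an artifact for this module when we're not on 2.9.3.
publishArtifact := scalaVersion.value != "2.9.3"
```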

And with that, the module is no longer published for 2.9.3, but it continues to publish as expected for all the other Scala versions it supports. So, all together now, here's what the combination of instructions to skip building something for a particular version (in this case 2.9.3) looks like:
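Pulling the pieces together, a sketch of the combined settings for the module - same caveats as above about sbt 0.13-style syntax and the assumed lift-json coordinates:

```scala
// Skip compiling this module on 2.9.3...
skip in compile := scalaVersion.value == "2.9.3"

// ...don't publish an artifact for it on 2.9.3...
publishArtifact := scalaVersion.value != "2.9.3"

// ...and give sbt's update task something it can resolve on 2.9.3,
// even though that artifact will never be used at runtime.
libraryDependencies += {
  scalaVersion.value match {
    case "2.9.3" => "net.liftweb" % "lift-json_2.9.1" % "2.6"
    case _       => "net.liftweb" %% "lift-json" % "2.6"
  }
}
```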

Hopefully this will save someone a lot of time in the future. :-) If you're interested you can view the entire PR on Dispatch where this took place!

Until next time folks.

Thinking About Tests

I've been thinking a lot about tests lately. The automated, software development kind, not the school kind. I've been thinking a lot about tests because I've realized that my thinking about them has been broken. Specifically, around how to test Lift web applications. Something that I've had to come to terms with is that my schooling and experience up until a few years ago did a horrible job of educating me regarding when tests are needed and why.

I remember being taught a lot of textbook mantras when I was in school regarding development. Things such as, "testing will prove your software works" and "testing will help you know if you broke something else." For a long time, I regarded the former mantra with a bit of incredulity ("I test it when I'm writing it, why am I going to the effort of writing more code to do the thing I've already done?") and the latter with an air of hubris ("If I build my code correctly, the scope of any change will be limited and I'll catch problems myself.")

For what it's worth, I think the Test Driven Development evangelists got it wrong too. At least some of them did. Testing as a matter of doctrine, without understanding the whys and wherefores, tends toward the software engineer's version of hoarding. Tests keep adding up and adding up, some may not be written properly, and changing any minor detail in the application causes an increasing number of tests to fail over time - at least some of which are failures caused by how a particular test is run rather than by what is being tested. I understand now that there are lots of good tools to help avoid that, but there were (and are) enough examples of people not using those tools that my general reaction to the practice was summed up by "Why?"

Though I haven't sipped of the TDD kool-aid, I do take testing a lot more seriously than I once did. So, what changed?

Well, at work we're engineering at a much larger scale than anything I've ever been a part of before. It's literally impossible for one person to have the entire scope of the system in their head at any given time. It doesn't all fit. As a result of this, it's very easy for someone on our team to make a change without knowing the downstream implications before they make the change. Work in that kind of environment without sufficient testing, intentionally or otherwise, and you're in for a world of hurt as a software engineer. All of a sudden your change to a line of XML is causing buttons to go missing in the running application and you have no idea why.

So, maybe the answer is to just write as many tests as you can and by doing so you'll win, right? If only it were so.

I've also learned, first with Anchor Tab and again later on, that if you choose to test incorrectly, then you're also in for a world of hurt. Take Anchor Tab as an example. I decided to use Selenium to test everything when I started adding tests to the codebase. I thought I was incredibly clever, because, after all, what I cared about was that the user could do what they wanted to do.

What I've found to be true as a result of that experience is that Selenium isn't really a great tool for doing the job of a unit test. The inherent contract it imposes of acting like a user is simultaneously its biggest strength and the blade you will cut yourself on if you try to bend that contract to your will in unintended ways. Selenium and technologies like it are fantastic and certainly have a role in a larger testing strategy, but they aren't the larger testing strategy - even for a web application. They don't replace unit or integration testing, as I once thought; they supplement it in different ways.

These realizations and ponderings led me to see that my Selenium misadventures stemmed from what College Student Matt learned about testing. I would teach College Student Matt differently today.

Instead of teaching things like "tests prove your app works" and so on, I would do my best to drill into his head the idea that tests are tools for answering a question. They are not something that magically makes your code harder, better, faster, or stronger. A test is merely a piece of code that answers a question about your code, and you should probably know what that question is before writing your test. The question itself also affects the kind of test you need to write, and choosing the right type of test is important for minimizing future work on it while maximizing its usefulness. At the end of the day, you're answering this question for someone else, and making that person's life as easy as possible should be a priority.

So, for example:

  • If you are writing a method that parses an <a> tag in HTML into a data structure, one question you'll want to ask is "Will my method behave sensibly for all kinds of input it could see?" and you'll want to write a unit test on that method, where the method itself is completely isolated from any other component of the system. (There's a small sketch of this after the list.)
     
  • If you've finished writing and wiring up a series of components that all work together, the question you'll want to ask is "Is the entire series of components behaving sensibly together?" and you'll want to write an integration test on that series of components where everything is As Real As Possible™.
     
  • If you've finished building out a new series of screens in your web application, the question you'll want to ask is "Can a user successfully accomplish their goal in the web application?" and you'll want to write an acceptance test using a tool like Selenium or PhantomJS.
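To make the first bullet a little more concrete, here's a minimal sketch of the kind of unit test I mean. The AnchorParser and Anchor names are hypothetical and I'm assuming specs2 as the test framework, but any framework works - the point is that the method under test is exercised in complete isolation:

```scala
import org.specs2.mutable.Specification

// A hypothetical data structure and parser for <a> tags.
case class Anchor(href: String, text: String)

object AnchorParser {
  // Parse a single <a> tag into an Anchor, or None if the input isn't one.
  def parse(html: String): Option[Anchor] =
    try {
      val node = scala.xml.XML.loadString(html)
      if (node.label == "a") Some(Anchor((node \ "@href").text, node.text))
      else None
    } catch {
      case _: Exception => None
    }
}

class AnchorParserSpec extends Specification {
  "AnchorParser.parse" should {
    "extract the href and text of a well-formed anchor" in {
      AnchorParser.parse("""<a href="/about">About</a>""") must
        beSome(Anchor("/about", "About"))
    }

    "behave sensibly for input that isn't an anchor at all" in {
      AnchorParser.parse("<p>not a link</p>") must beNone
      AnchorParser.parse("not even markup") must beNone
    }
  }
}
```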

The unit tests and integration tests should try to think of all reasonable scenarios and ensure that things function correctly, the former focusing on an individual method and the latter on a series of components. (Do note I wrote all reasonable scenarios, not all possible scenarios.) Meanwhile, acceptance tests should focus on determining whether or not a user interacting with an application can accomplish a set of goals. What's the difference, you ask?

Well, I hold the opinion that acceptance tests should not overly concern themselves with a lot of error conditions. I came to that conclusion after writing a lot of Selenium tests that do test for a lot of error conditions. What I ended up with was a series of tests that took a long time to run, tended to be a bear to update when things changed, and gained a reputation on our team for being unreliable due to a lot of obscure interactions that I believe were the result of my distorting the intended Selenium contract.

Verily, things will slip through even if you have an excellent net of unit tests, integration tests, and acceptance tests. But here's the deal: even with 100% code coverage, things will slip through. Sorry, but it's true. That one obnoxious customer is going to enter that one Unicode character you didn't test for, and your whole app is probably going to blow up because your particular OS-specific version of your database recognizes it as an escape sequence due to an obscure bug in character processing. Welcome to software!

The question for you is: how tightly can you close that net without making you and everyone else on your team want to pull their hair out? For my part, the new guidelines I'm going to suggest our team at work adopt will probably look something like this:

  1. Lift snippets, actors, etc. get unit tested unless they're basic snippets, where "basic" means an ID has been pulled from a URL, something has been retrieved from the database, and its contents rendered on a page. That exempts about 99% of the admin screens we have that just do things like list a bunch of objects with links for edit and delete.
     
  2. JavaScript gets tested if it contains any business logic. If any JS is looking at data that originated from our database, evaluating what it "means," and acting on that, it needs unit testing at least - and perhaps integration testing in addition to, or in place of, the unit tests, depending on the situation.
     
  3. The app has acceptance tests that represent goals from the point of view of a user. Ideally, these tests run in some sort of headless mode locally and on every pull request for speed, and against our entire browser matrix before release. We've gotten into the habit of making them more unit-test-like than any of us enjoy working with, and today we have to run a browser to run them, which is a bear for different reasons. (There's a rough sketch of what I'm aiming for after this list.)
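For the acceptance tests in the third point, here's a rough sketch of the shape I have in mind, written against Selenium's WebDriver API. The URL, the form field names, and the "Welcome" text are all made up for illustration, and in headless mode you'd swap the FirefoxDriver for something like a PhantomJS-backed driver:

```scala
import org.openqa.selenium.{By, WebDriver}
import org.openqa.selenium.firefox.FirefoxDriver

// One acceptance test == one user goal: "a visitor can sign up."
object SignUpAcceptanceTest extends App {
  val driver: WebDriver = new FirefoxDriver()

  try {
    driver.get("http://localhost:8080/signup") // hypothetical URL
    driver.findElement(By.name("email")).sendKeys("someone@example.com")
    driver.findElement(By.cssSelector("button[type=submit]")).click()

    // Assert only that the goal was accomplished, not every error path.
    assert(driver.getPageSource.contains("Welcome"))
  } finally {
    driver.quit()
  }
}
```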

The goal, as always, is for tests to be useful without being frustrating to maintain. And I think these guidelines will strike that balance for us.

So, hopefully you've found my ramblings on testing enlightening. I hope you'll consider sharing this with your friends and letting me know what you think. I always love hearing good feedback on these posts.

Until next time, folks.