How to Enrich ScalaQuery with Nested Sessions and Transactions

Recently, we wrote an integration test in a Play Framework application using ScalaQuery which failed in a way that surprised us: we had misunderstood the library’s interface for database sessions and transactions.

It turns out that nested usage of #withSession / #withTransaction is not supported by ScalaQuery 0.10.0-M1 out of the box. Instead, a new session and transaction are created in each nested scope. This causes data write/read inconsistencies (due to the separate sessions) and failures of transactional rollback (due to the separate active transactions in the nested sessions).
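For instance, with plain ScalaQuery a nested #withTransaction quietly opens a second session with its own transaction. Here is a sketch of the problematic pattern (assuming org.scalaquery.session.{Database, Session} are in scope, and using the overload that passes the Session explicitly; the inner rollback is just for illustration):

// Each withTransaction call opens its OWN session and transaction.
database.withTransaction { outer: Session =>
  // insert a row here using `outer`
  database.withTransaction { inner: Session =>
    // `inner` is a different session: the uncommitted row above may
    // not be visible here, and this rollback only undoes the inner
    // transaction – the outer one still commits
    inner.rollback()
  }
}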

Rather than maintain a fork of ScalaQuery, we used the Scala “enrich my library” pattern to add support for the nested usage in an external library. For examples and usage details, check out the scalaquery_nested GitHub project.

For example:

database.withNestedSession {
  // add object to database here
  database.withNestedSession {
    // query same object back from database here
    // great! same connection used, so expected object is returned
  }
}
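
Under the hood, the enrichment amounts to an implicit conversion that adds the new methods to Database, plus some thread-local state remembering the session opened by the outermost block. Below is a minimal sketch of the idea – not the actual scalaquery_nested implementation (the NestedSessionSupport and enrichDatabase names are made up for illustration):

import scala.util.DynamicVariable
import org.scalaquery.session.{Database, Session}

object NestedSessionSupport {
  // Thread-local record of the session opened by the outermost block.
  private val currentSession = new DynamicVariable[Option[Session]](None)

  // The "enrich my library" step: makes withNestedSession appear to
  // be a method on Database itself once this conversion is in scope.
  implicit def enrichDatabase(database: Database): NestedSessionSupport =
    new NestedSessionSupport(database)
}

class NestedSessionSupport(database: Database) {
  import NestedSessionSupport.currentSession

  // Reuse the session opened by an enclosing block, if any; otherwise
  // open a fresh one and record it for the duration of this block.
  def withNestedSession[T](f: Session => T): T =
    currentSession.value match {
      case Some(session) => f(session) // nested call: reuse the session
      case None =>
        database.withSession { session: Session =>
          currentSession.withValue(Some(session))(f(session))
        }
    }
}

A withNestedTransaction method can be layered on in the same way, starting a real transaction only in the outermost scope, so that a rollback undoes the work of every nested block.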

Puppet Camp Barcelona

I recently had the pleasure of being asked to speak at Puppet Camp Barcelona. I’d submitted a talk a few months ago about some of the problems my team was having with our use of Puppet, and how we’re adapting the way we use it.

I was extremely pleased to be asked to present, and also extremely pleased that TIM Group was willing to fund my flights and give me the time to attend the conference.

I was pleased by how the presentation went, and from chatting to people afterwards I gained a whole bunch of ideas we hadn’t thought of. Judging from the official writeup at https://puppetlabs.com/blog/puppet-camp-barcelona-wrap-up/, I think the talk was generally well received.

I’m looking forward to finding the time to write up further details of how we use Puppet at TIM Group, and what problems this solves for us.

Monitorama

This is a blog post that was written in 2013, but somehow was forgotten about. So here is a bit of history!

— Andrew Parker


Last month I got the chance to attend the Monitorama conference.

This was out and out the best conference I’ve attended so far this year for learning. It was organised as a day of lectures by notable people in the field, followed by a day of workshops (practical ‘follow the talk on your laptop’ style sessions) on various monitoring and visualisation tools, running in parallel with a day of ‘Hackathon’ (working on projects).

The talks I attended covered some topics I’m already very familiar with (e.g. Logstash), and some topics with which I’m much less familiar (e.g. the R workshop). The level of technical detail was generally great – not overwhelming if you were a beginner, but with something to learn if you were a seasoned user.

On the morning of the second day I teamed up with Jason from TIM Group, and we were a little selfish – working on scratching our own itch. In the afternoon we broke off and attended some of the workshops, which were interesting. I was extremely surprised (and pleased!) that our project won 3rd prize for a group project at the conference – unexpected, given that we’d gone off and solved our own reporting issue rather than tackling a more generic monitoring problem that would have helped a larger group of people! We also managed to build something functional for our needs and get it deployed into production within the 5 hours we had to work on it.

I hope that readers will forgive me if I spend a few paragraphs telling you about what we built (and why!):

We used to deploy Foreman to view and search through Puppet run reports. Unfortunately, with our recent upgrade to Puppet 3.0, our Foreman installation needed upgrading too.

The version of Foreman we were running was ancient, and it has since gained a massive number of features – however, the only feature we were using was the report browser. This meant running MySQL on our monitoring machines just to support this one application, and re-packaging the latest version to our internal standards proved to be a non-trivial exercise.

We’d basically just disabled it to go ahead with the Puppet 3.0 upgrade, with a plan to experiment with a proof of concept using our Logstash/Elasticsearch stack for the data transport and storage. I was able to very quickly hack up a reporting plugin for Puppet, based on some earlier work I’d found on GitHub, and I’d been playing with the AngularJS framework on the plane on the way over.

So, after about 5 hours of hacking, we had bolted together Norman (excuse the bad pun).

This is, of course, still a simple and barely functional prototype; however, it’s usable enough that after a couple more hours’ work we had unit tests (and green builds in Travis) and could deploy it as an Elasticsearch plugin. It’s still missing some functionality compared to what we replaced in Foreman, but none of it is truly essential, and we should be able to add it gradually as we have time.

Devopsdays London

This is a blog post that was written in 2013, but somehow was forgotten about. So here is a bit of history!

— Andrew Parker


Most of our Infrastructure team, along with a couple of developers we had seconded to the team, attended the Devopsdays London conference a couple of weeks ago.

There are already plenty of reviews and notes about the conference online; however, we also made a set of our own.

I think everyone attending found the conference valuable, although for varying reasons (depending on which sessions they had attended). Personally, I found the second day more valuable, with better talks and more interesting openspace sessions (among those I attended). As I had expected (from my previous attendance at Devopsdays New York), I found the most value in networking and comparing the state of the art with what others are doing in automation, monitoring, and so on.

I was very pleased to find that TIM Group is actually among the leading companies in implementing devops practices. I’m well aware that what we’re doing is a long way from perfect (as I deal with it 5 days a week), but it’s refreshing to find that our practices compare well, and that the issues we’re currently struggling with are relevant to many other people and teams.

I particularly enjoyed the discussion in the openspace part of the conference about estimating and planning Infrastructure and Operations projects – at the time we were at the end of a large project in which we’d tried a new planning process for the first time (and about which we had a number of reservations). The thoughts and ideas from the group helped us to shape our thinking about the problems we were trying to solve (both within the team, and in broadcasting progress information to the wider company).

Since then (in the last week) we have taken the time to step back and re-engineer our planning and estimation process. We’ve subsequently kicked off another couple of projects with the modified planning and estimation process, and the initial feeling from the team is much more positive. Once we’ve completed the current projects and held a retrospective (and made more changes), I’ll be writing up the challenges we’ve faced in estimating and how we’ve overcome them – delivering accurate and consistent estimates in the face of unplanned work (e.g. outages, hardware failures, etc.) is even more challenging for operations projects than it is in an agile development organisation.