Chris Jackson

I Don't Often Blog, But When I Do It's Here…

A Container Rollercoaster

Roller coasters.  They’re awesome, right?  That exhilarating feeling that you’re out of control, careering upside down into open space, but constantly reassured by the steel and harness that is the difference between thrill and death.  I love them.

I’ve been at Container Camp today, and I’ve decided that containers are like riding a roller coaster, except nothing is keeping you in the car other than the tightness of your grip.  I wouldn’t ride a coaster with no safety harness, and I won’t be running containers for serious work.  Here’s why…

I came to the Container Camp event to get exactly what I have received – a non-commercial view of capability, pros, cons and honest opinion that strips out the hype of a market searching for “The Next Big Thing (TM)”.  But I was surprised at just how different reality is from expectation, and I think a lot of this is down to Docker’s pointless quest to call itself “production ready”.  It seems like they’re almost under pressure from the developer community to pass the milestone, so that those developers can dismiss Ops concerns about stability and limitations and force an agenda to use it everywhere.

Developer: “We need you to build out a new production environment so we can use containers for our app.”

Operator: “We’ve looked at this, but I’m a little concerned about maturity for failover and how we can ensure network management.”

Developer: “Well, Docker say it’s production ready – just read the docs and make it happen.”

As an advocate of DevOps, I hope the conversation above never happens, but I can totally see it playing out.  It’s reckless to label Docker production ready when so many of the elements beyond the host that runs containers are woefully unprepared for life running production workloads.  Here are some notable examples:

Container Linking
In the first talk, Jérôme Petazzoni of Docker spoke at length about the number of options for container linking (simply getting consistent variables into containers at runtime).  He graded them all and gave only one A-grade – to a piece of code that is months old and completely unproven.  The best way to reinforce that Docker is not production ready is to have a guy from Docker on stage telling us to use experimental techniques to solve basic container management requirements.
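To make the linking problem concrete: at the time of writing, the classic `docker run --link db:db` approach passes connection details to a container as environment variables (of the form `DB_PORT_5432_TCP_ADDR` and friends).  The sketch below shows how an application might read them, assuming that variable format – which is exactly the kind of fragile convention the talk was grading:

```python
import os

def linked_service(alias, port, proto="tcp"):
    """Read the address/port that a `docker run --link db:db` style link
    injects as environment variables, e.g. DB_PORT_5432_TCP_ADDR.
    Returns (address, port) or None if the link is absent."""
    prefix = "%s_PORT_%d_%s" % (alias.upper(), port, proto.upper())
    addr = os.environ.get(prefix + "_ADDR")
    prt = os.environ.get(prefix + "_PORT")
    if addr is None or prt is None:
        return None  # no link present; the caller must fall back somehow
    return (addr, int(prt))

# Simulate the variables a --link to a Postgres container would inject
os.environ["DB_PORT_5432_TCP_ADDR"] = "172.17.0.2"
os.environ["DB_PORT_5432_TCP_PORT"] = "5432"
print(linked_service("db", 5432))  # ('172.17.0.2', 5432)
```

Note the failure mode in the `None` branch: links are resolved once at container start, so every application has to invent its own fallback and reconnection story – precisely why the “A-grade” answer is still experimental code.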

Networking
Docker’s networking implementation is a mess; I would hate to be responsible for a production network of containers with even moderate levels of tiering and access control.  It’s overly nested and difficult to track, audit or enforce any real policy that a conscientious organisation could stand by.  Chris Swan put it best when he conceded that networking in containers will only truly be resolved when we remove the constraints that IPv4 imposes on address space and move to a native IPv6 implementation.

Clustering and Availability
The final element is the maturity of the container management space.  This is where the real scale will come from, with tools like Kubernetes, Fleet and others abstracting away the management of many individual hosts in favour of a scheduling capability that spreads containers over a farm of hosts.  Whilst this code is well battle-tested in places like Google, I think there’s an awful lot of work to do to reach consensus on best practices for implementation, and on features that will make concepts like anti-affinity and host evacuation much more intelligent and accessible.
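For readers new to the term, anti-affinity just means “don’t put two copies of the same thing on the same host, or you lose the redundancy you paid for”.  Here is a deliberately naive sketch of the idea – a toy placement function, not how Kubernetes or Fleet actually work:

```python
def schedule(containers, hosts):
    """Toy anti-affinity scheduler: place each container on the host that
    currently runs the fewest replicas of the same service, breaking ties
    by overall load.  `containers` is a list of service names.
    Real schedulers also weigh resources, constraints and host failures."""
    placements = {h: [] for h in hosts}
    for service in containers:
        best = min(
            hosts,
            key=lambda h: (placements[h].count(service), len(placements[h])),
        )
        placements[best].append(service)
    return placements

print(schedule(["web", "web", "web", "db"], ["host1", "host2"]))
# {'host1': ['web', 'web'], 'host2': ['web', 'db']}
```

Even this toy shows why the space is immature: the interesting questions (what happens when `host1` needs evacuating mid-flight? what counts as “the same service”?) are exactly the ones the current tools answer inconsistently.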

It’s probably worth pointing out at this point that I love containers and think Docker is one of the most innovative pieces of technology to hit the market since the x86 architecture.  I am being hard on containers because I want them to succeed, not rest cosily on a “production-ready” badge that leaves the rest of us scrambling to solve the really hard problems.  Docker also owes it to the developer community – whose productivity has been fundamentally changed by containers and concepts like Dockerfiles – to be more transparent about what is involved in moving from a single-host development environment to a multi-host production environment, where there are many more obligations to manage, monitor and collect information.

So Docker, congratulations on being production ready – you’ve built an awesome roller coaster.  I’d like to come to Container Camp 2015 and learn more about the harness you’ve built to keep us all on this wild ride; until then, I leave production examples to people who have an iron grip on your technology.

PS – for the avoidance of doubt, I also wholeheartedly endorse the Container Camp event; without it I wouldn’t have the context to form this opinion.

PaaS – Solving for the Gap

Forget “men are from Mars, women are from Venus”; think “Devs are from Jupiter, Ops are from Alpha Centauri”. The dichotomy of value between Dev and Ops will continue to exist after a successful implementation of DevOps.  Fundamentally, whilst these groups can be incentivised to want the same business outcomes, their perceptions of value are still radically different.  DevOps just serves to build a more collaborative model where value outcomes are shared. However, when pushed, would a Developer pick stability over self-service?  I’m not so sure…

How does this play out in a business where DevOps is not prominent?  The advances in technology and application architecture just serve to separate the teams further. So how can we find ways to build organic relationships where none exist?  People are getting excited about PaaS again, mostly because “put it in the cloud” is getting a bit tired as a value proposition.  But I think there’s an opportunity to make PaaS the meeting point for these two groups.

Point of clarification – I am not saying buy PaaS and you get DevOps.  What I am saying is that if the industry gets PaaS right, then there is the opportunity to use it as a common middle ground while you do the hard work to build collaboration and culture in your teams around it.

PaaS right now is loved by Developers and largely ignored or feared by Operators.  That is largely because PaaS offerings tend to be black-box affairs: they offer self-service and a platform to run your application, but they do this at scale by locking down how the infrastructure runs on the back end.  That tends not to sit well with companies who either have obligations to apply governance and control to the infrastructure, or who can make significant cost gains by customising and tuning how it runs.

The open source revolution is turning its attention to PaaS now, with Solum from OpenStack and Cloud Foundry building a free foundation away from the shackles of a single owner.  My hope is that this yields a way of consuming services that gives value to both developers and operators: the opportunity to focus on business value by abstracting infrastructure into business components, whilst at the same time maintaining choice and transparency over how and where the infrastructure operates.

PaaS could dominate the next 20 years of cloud and IT services if the vendors understand that it must serve both developers and operators.  If it allows all sides of an IT organisation to leverage value then it has the potential to be a magnetic tool that gives teams a common platform around which they can collaborate.

The biggest risk to this is that developers tend to be the ones on the edge of innovation, with the operators playing a never-ending game of catch-up.  If only one side of the DevOps conundrum is represented at the PaaS table when critical decisions are being made, we will miss the opportunity to solve for the needs of all of IT’s consumers.

Time will tell whether the balance is struck; we’re only just starting the next phase of “cloud” as a platform service.  I just hope we learn from what has gone before us rather than perpetuating an anti-pattern for collaboration.

The Relentless Drive to Abstract

Hands up, who still writes applications by sending commands directly to microprocessors?  I’ll bet it’s a niche minority.  But at one point it was the only way to write any sort of application!  It proves an unavoidable fact: technology loves abstraction.  Whether it’s hardware, services, functions, libraries… we abstract it all to a higher and higher level.

There’s a pretty easy answer to why we abstract… We are drilled to make sure that things we build are repeatable and reusable, that one update applies everywhere.  Why would you write your own interface to a sound card in a Windows laptop when you can exploit the abstraction already provided to you?

So where do we draw the line between things we write and things we abstract?  This line is constantly moving; the rate of technology progress means that innovation drives standardisation and ultimately abstraction – and this impacts who you work with, as new intermediaries pop up.  How many of you still have direct sales relationships with Intel… and in the future, how many will have direct relationships with any kind of hardware manufacturer…?

We must also put all of this alongside a deep-rooted human desire for simplicity.  People might admire complexity, but they gravitate to simplicity.  When asked to recall a famous scientific principle, which one resonates with you?

Bernoulli’s Principle…

[Image: Bernoulli’s equation, courtesy of Wikipedia]

Or perhaps…

[Image: a rather more famous equation]

Both are world changing theories, but only one is the poster child for the beauty in science.  So the disciplines of science and technology seek to abstract and simplify.  This raises some important questions when we think about our current challenges.

Take for example the new front that is opening up in the cloud battle – PaaS.  Many claimed it dead, but it’s now the new place for abstraction: taking complex things like infrastructure and simplifying them into services.  Operating systems did this; why wouldn’t the cloud?

To pause for a second, let me clarify my definition of PaaS.  For me, Platform as a Service is complex business services built on IaaS – such as authentication, big data or web services – packaged up with an API that talks in terms of business value and function, rather than the more directive API operations of IaaS.  Why spin up a cloud server, or even worry about the complexity of configuration management for that instance, when I can just define a service and interact with it directly?
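That distinction between a directive API and a business-value API can be sketched in a few lines.  Nothing here is a real vendor API – both clients, their method names and the endpoint URL are hypothetical, purely to contrast the two levels of abstraction:

```python
class IaaSClient:
    """Directive style: the caller chooses images, flavors and config."""

    def create_server(self, image, flavor):
        # A real client would call a cloud API; this just returns a record
        return {"id": "srv-1", "image": image, "flavor": flavor}


class PaaSClient:
    """Declarative style: the caller names a business capability and the
    platform decides what infrastructure backs it."""

    def __init__(self, iaas):
        self.iaas = iaas

    def provision(self, service):
        # The platform, not the caller, picks the image and sizing
        server = self.iaas.create_server(image="managed-" + service,
                                         flavor="auto")
        return {
            "service": service,
            "endpoint": "https://%s.example.internal" % service,
            "backed_by": server["id"],
        }


paas = PaaSClient(IaaSClient())
print(paas.provision("authentication"))
```

The point of the sketch is where the decisions live: in the IaaS call the consumer must know about images and flavors; in the PaaS call they ask for “authentication” and get back an endpoint, with the infrastructure detail still present underneath for the operator to govern.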

There’s another post coming on this, but the PaaS battle will be messy.  You have early vendors like Heroku looking to stay relevant, IaaS vendors looking to pass off their offerings as PaaS, and new container technology like Docker offering a totally new approach to the problem.  In short, get ready for FUD… lots of it.

So what does this mean for our appetite for abstraction?  Well, abstraction implies standardisation and simplification, and neither of those is a word synonymous with cloud right now.  Could our appetite for innovation and abstraction be outstripping our desire to consolidate, standardise and simplify?  How will businesses being pushed to keep pace with the bleeding edge manage the move to PaaS when the foundations it is built on are anything but firm?

Don’t be scared of PaaS… but spend some time standardising the layers it depends on before you go and throw more abstraction into the equation.

Where the Enterprise and DevOps Meet

I attended the CIO Europe Summit on Tuesday, hosted by CDM Media.  There was a first for me at this event: a CTO for a major global bank gave a 30-minute talk on DevOps to his peers.  (In actual fact, he credited Rackspace’s awesome YouTube video as a great starting point.)

I’m sure he’s not the first Enterprise leader to talk about DevOps, but he was the first one I had seen, and more importantly – he got it.  I felt this was newsworthy enough to report!  He identified the challenges of driving cultural change and a shift in mindset as key enablers.  He was also incredibly frank about the reality that smaller competitors will be moving more quickly than them, and that the challenge is to keep pace and stay relevant.

This got me thinking about where DevOps will come from in the Enterprise.  As a relatively new but keen member of the community being formed, I’d love to think that speakers like this will arrive at all our usual haunts such as DevOpsDays, Velocity, FOSDEM etc.  I think the reality is that Enterprises have some challenges around community engagement that may stop them from appearing right now: restrictions on contributing to open source projects, because doing so reveals what they use to develop applications; reluctance to talk at DevOps events, because elements of their operations are proprietary or sensitive.  We cannot expect them to change corporate culture on a dime and externalise a ton of information which was previously confidential.

Enterprise DevOps is going to come from the inside – at events where the Chatham House Rule can be applied and they can talk amongst peers who genuinely understand and share the scale of their challenge.  As comfort increases, I’m sure more will publicise fully, but right now it’s a world that a lot of DevOps community members do not walk in by choice, and in order to influence these organisations we need to find ways to communicate in channels they respect, value and trust.

The other interesting insight is that the view of DevOps as synonymous with open source software is going to be challenged by the Enterprise.  They will want to make sure that investments in tools and technology they already have can help them start this journey.  It will test the assertion that DevOps is really about people, culture and collaboration – if they can use the tools of the Enterprise to deliver the outcomes of the start-up, we’ll really have proved that DevOps is an approach for all to consider.

The final observation is that the most tenured Enterprise IT leaders will not find DevOps all that uncomfortable.  After all, they remember the time when IT was a small team of 20, where you really could shout over your desk and collaborate with another discipline.  IT has a nice way of repeating trends… Imagine what TOGAF will be doing for internet start-ups in 2030!

I’m excited to see what the Enterprises come up with as thinking around DevOps gains maturity and I’m looking forward to sharing that insight with them and understanding how we can advance the entire movement regardless of size.

The Changing Decision Maker

Where I work, I used to spend most of my time with system administrators and IT Managers.  It was a comfortable life where we could indulge in talk of Gigabits and Megabytes, Opterons and Sandy Bridges… Our customers wanted to talk to us about device specifications and getting deep on how the hardware met their business requirements.

Then cloud happened, and it all changed.  Not overnight, but gradually our conversations about hardware architectures became rarer and we started to have a new conversation – a talk about APIs and SDKs.  Fortunately, my employer is pretty good at that as well, so we clicked into gear and started to have these new conversations.  I skilled up and became useful to these clients again.

What this story highlights, though, is the mind shift happening behind the scenes in the cloud revolution: the prominence of the developer as decision maker, strategist and consultant all in one.  The developer is the edge of the sword in a company – the first to put a marker down in new ground.  They saw a world of infrastructure that was accessible and interesting to them, so they picked up their credit cards and used them.

Fast forward 6 months and that same developer is about to perform a production release of their new application.  No RFP happened, there was no infrastructure consultation or steering group, and the incumbent provider of hosting (internal or external) knows nothing about it.  The result?  Success!  The application performs at cost and performance points never seen before, and changes get to market in half the time.  Knowingly or not, that developer has just changed his or her company’s strategy.

Fast forward another 6 months and the 300-page RFP for Cloud Services hits most sales desks.  It’s not come from the developer; it’s come from the IT team – the team who only last month were still talking about Gigabits and Megabytes.  They’ve been told that they need a cloud offering so the developers can use the company’s own service and not these uncontrollable public cloud beasts.  Millions are spent, most are wasted, and the developer keeps doing what they did before…

The world changed, no shots were fired but the ramifications will be felt for another 5 years.

If you are a developer, remember Uncle Ben (from Spider-Man, not the rice guy): “With great power comes great responsibility.”  You have the power to change the direction of your company; when you do, make sure you’re setting yourself and your employer up for long-term success.

If you are an ops guy, remember Nietzsche – “The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind.”  In other words, this stuff is going to happen: it can happen with you, or it can happen without you.  But you’ll be better off if your talents are part of the solution.