Sherry’s session will look at how automation can transform a Change process from blocker to key enabler. During the presentation, Sherry will explore how automation can support the Standard Change model to enable more Changes to pass through the service pipeline without sacrificing effectiveness, quality or safety. For those of you who are new to the model, Standard Changes are simply pre-assessed, pre-authorised activities that are low risk, relatively common and follow an agreed procedure or work instruction. So far so good, right?
Sherry will give practical guidance on setting up your organisation to follow the Standard Change approach and will look at how these virtual quality gates can handle Change volumes more efficiently than human scrutiny. As DevOps becomes the preferred way of delivering value, Automated Governance will become increasingly important in driving Continuous Delivery; Sherry’s aim is to empower attendees by sharing tips, tricks and case studies in making Change quick, effective and successful.
You should attend this session if:
You want an action-packed, practitioner overview of how to move to a more continuous delivery stream using Standard Changes.
The official bit:
The conference overview of Sherry’s session is below:
‘Change Management and Continuous Delivery are commonly viewed as incompatible. Gates imposed by the Change Control Board often slow down any velocity gain achieved by Continuous Delivery. However, control and velocity can both be achieved by automation. Attend this session to learn how you can achieve higher velocity, better scrutiny, and a comprehensive audit trail with Automated Governance.’
The ITSM Review are pleased to be confirmed as official Media Partner for the Beyond20 SIXTEEN ITSM DevOps Conference on the 2nd & 3rd May 2016 in Washington DC.
The conference will delve into a combination of Development and Operations alongside some ITSM best practice in the hopes of giving you the knowledge to make your organisation more efficient and enable you to continually deliver, even in the face of constant change.
You’ll be glad to hear that the two-day event will have one session per block that is absolutely PowerPoint-free, so you don’t drift off into a deep sleep. Instead, the conference will consist of a range of interactive panel and team sessions. The schedule promises to be innovative and inspiring.
The IT world we know and love exists today thanks to the bedrock of the IT community: ITIL, the IT Infrastructure Library. Since ITIL’s inception 26 years ago, the world has changed and an app exists for everything – shopping, messaging, ride sharing, or just staying connected via social media. We’re in the midst of a new technological age. This evolution has been guided by agile methodology and now, with the rise of cloud computing, many teams are embracing DevOps.
The consumerization of technology is changing expectations of IT. And IT has pressures to live up to these expectations. Because the pace of innovation is largely driven by DevOps and agile methodologies, IT must adapt. To do this, ITIL must support an agile environment. By working together, these practices reinvent how IT teams deliver reliable services to the business, faster.
DevOps and ITIL working together
Developers want an agile process – and it’s best for the organization that they have one. This means having a frictionless release process, and continuously improving software for customers.
ITIL’s framework is hyper-focused on reliable service delivery and support, with its feedback loop based on incident management. ITIL can combine with agile to get the best of both worlds: better software and a reliable, stable environment.
How agile saves the day
Real-world example: the Service Desk receives reports of a slow-loading login page. The underlying issue is confirmed by a bad Apdex score (a user satisfaction score reported by New Relic). The likely culprit is a runaway query, so the development team implements the bug fix in their next sprint, which happens on a weekly basis. From incident to resolution, turnaround time is two weeks.
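The Apdex score mentioned above has a published formula: requests at or under a target threshold T are “satisfied” and count fully, requests under 4T are “tolerating” and count half, and anything slower counts not at all. A minimal sketch of that calculation (the threshold value and sample timings are illustrative, not taken from the article):

```python
def apdex(response_times, t=0.5):
    """Apdex = (satisfied + tolerating / 2) / total.

    satisfied:  response time <= t
    tolerating: t < response time <= 4 * t
    frustrated: response time > 4 * t
    """
    if not response_times:
        return None
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# A slow-loading login page drags the score down quickly:
print(apdex([0.3, 1.2, 3.5, 4.0], t=0.5))  # 0.375
```

A score this low is exactly the kind of signal that confirms user reports and justifies pulling a fix into the next sprint.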
Using ITIL to support Agile and DevOps
Agile incident management
Maximize your team’s bandwidth with sprint planning. Reserve 30-40% of your team’s capacity for operational tasks, where priority 1 and 2 incidents are resolved immediately, and lower priority incidents are resolved within bandwidth. This means that incident management doesn’t affect sprint goals.
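The reservation rule above is easy to make concrete in sprint planning. A quick sketch (the percentage and point values are illustrative, not prescribed by the article):

```python
def plan_sprint(team_capacity_points, ops_reserve=0.35):
    """Split sprint capacity: reserve a slice for operational work
    (incident handling) so interrupts don't eat into sprint goals."""
    ops = round(team_capacity_points * ops_reserve)
    return {"ops_reserve": ops, "sprint_goals": team_capacity_points - ops}

print(plan_sprint(40))  # {'ops_reserve': 14, 'sprint_goals': 26}
```

Because the operational slice is carved out up front, a priority 1 incident consumes reserved capacity rather than derailing committed sprint work.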
Agile problem management
Trim down on time-wasting administrative work. Manage problems as user stories in a product backlog. Don’t separate “incidents” and “problems” – everything should be cohesive. If a problem occurs more often, it should have higher priority in the backlog.
In ITIL orgs, there’s an assumption you’ll need multiple instances of an incident before starting problem analysis.
Instead of waiting for incidents to pile up, detect and solve problems faster with automated monitoring. Link monitoring tools to your incident management system to identify the cause of problems earlier and restore service faster.
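Linking monitoring to incident management usually amounts to translating an alert payload into an incident record. A hedged sketch of that mapping; the field names and severity-to-priority scheme are assumptions for illustration, not any real tool’s schema:

```python
def alert_to_incident(alert):
    """Map a monitoring alert into an incident record so problem
    analysis can start before users even call the Service Desk."""
    severity_to_priority = {"critical": 1, "warning": 2, "info": 3}
    return {
        "summary": f"{alert['service']}: {alert['condition']}",
        "priority": severity_to_priority.get(alert.get("severity"), 3),
        "source": "monitoring",
        "linked_metric": alert.get("metric"),
    }

incident = alert_to_incident({
    "service": "login-page",
    "condition": "Apdex below 0.5",
    "severity": "critical",
    "metric": "apdex",
})
print(incident["priority"])  # 1
```

Keeping the metric reference on the incident record is what lets the resolving team trace straight back to the evidence.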
Agile change management
When it comes to change and releases, many IT orgs drown in bureaucracy related to heavy processes. That can change.
In a DevOps environment, releases are frequent. ITIL framework combined with DevOps means development, operations, and support are always collaborating. It means change requests link from incidents and problems. Issues related to changes are added to a developer’s backlog and allocated to their sprint.
In the end, there’s no budding conflict when it comes to these methodologies. It’s all about making processes leaner, making data visible and enabling faster resolutions. With the right practices, the ITIL framework supports the agility of DevOps.
Sid Suri is the Vice President of Marketing for JIRA Service Desk. He’s worked in various technology roles over the last fifteen years at Salesforce.com, Oracle (CRM), InQuira (acquired by Oracle) and TIBCO Software. An expert in the intersection between IT Support and DevOps, Sid helped create the detailed ebook, “How to Enhance IT Support with DevOps”.
Enterprise Release Management is an increasingly prominent discipline, occupying the intersection of technical release management, project delivery and change management. Its focus is on understanding and governing the full portfolio of multi-stream changes, be they quarterly ERP releases, one-off project deliveries or monthly patching.
The demands on enterprise level release managers are many: governing and managing individual releases, maintaining the forward schedule as far as 12 months ahead, making sure non-production environments are efficiently used and more. Most release managers will have built and refined an array of spreadsheets and calendars to manage everything from release scope, defect lists, release gateway checklists, cutover plans and forward schedules.
Spreadsheets and calendars can work perfectly well when there are only half a dozen releases to track across two or three test environments, but once this starts scaling up – especially with multiple release managers – keeping these spreadsheets up to date becomes an administrative challenge and resource drain, and errors inevitably creep into the manual processes.
This is the tipping point where dedicated Enterprise Release Management tools make their case. The initial benefits are obvious: moving spreadsheets online to offer a single version of the truth slashes administrative waste and allows for pivoted views of the same data. Common tasks or release governance structures can be defined and re-used.
Clever reporting can replace hours of spreadsheet and PowerPoint wrangling with the click of a button, and this only scratches the surface. In this review, we’ll see what else leading vendor Plutora has built into their tool to add some real intelligence to the process, far beyond simply lifting and shifting a spreadsheet online.
Quick facts & review highlights
Plutora V 3.5
Market focus & customer counts
Large/very large IT organisations with a strong or dedicated project delivery arm who are presently struggling with visibility of their forward release schedule, environment utilisation or quality of repeatable release activity.
Asia-Pacific customers: 15
SaaS licenses available in packs of 25 or unlimited enterprise option.
Purpose-Built and Comprehensive: Plutora Enterprise Release Manager enables all of your end-to-end release management processes out of the box. Plutora is differentiated by its capability to combine release management, test environment management, deployment management and self-service reporting in a single comprehensive tool.
Enterprise SaaS: Plutora is 100% SaaS to ensure rapid implementation and adoption of the solution within your organization. Plutora scales in the cloud to meet the growing complexity of your organization as teams become increasingly distributed.
Vendor-neutral integrations: To provide a unified view across all your releases, Plutora integrates seamlessly into your landscape with an open API and adapters to your existing Project Portfolio Management, Application Lifecycle, Quality Management and IT Service Management tools.
Plutora Enterprise Release Manager
Plutora Test Environment Manager
Plutora Deployment Manager
We think Plutora is stronger in…
Conversion of the simple, powerful and common tools frequently used (and easily recognised) by release managers into a web application, expanded to make the most of pivotable underpinning data.
Strong & flexible presentation of critical information, both from pre-configured views & reports, and user-built reporting.
Powerful deployment management command & control function.
Clever system impact matrix with regression-test flagging.
We think Plutora is weaker in…
As a release-focused tool, the lighter emphasis on non-transition-related IT Service Management information may mean release decisions are taken in isolation and lessons from solved problems are not carried forward. Plutora does offer the ability to add customised data fields and comments for non-transition-related information.
Not aggregating change/feature resource cost into release-level capacity monitoring (and instead doing this manually) feels like a missed opportunity.
Some medium-sized IT organisations do not have 25 users, Plutora’s minimum license. Less focus on technical release aspects such as build/integration tooling, though this is on the feature roadmap.
In their own words…
Plutora’s purpose-built SaaS solutions for Enterprise Release Management, Test Environment Management and Deployment Management enable you to manage complex application releases with transparency and control. Using Plutora, organizations can deliver higher quality software more frequently to meet customer demand, without added downtime.
Plutora ensures high-quality, on-schedule releases by driving enhanced enterprise collaboration and coordination for all key elements of a successful release: timing, composition, status, and stakeholders across their lifecycle – with ease. Real-time dashboards show release schedules and how they are tracking according to governance gates within the release framework.
Plutora provides a unified repository for all release information where users can source data, including project dependencies, without needing to piece together the shape of a release from multiple sources. Plutora integrates with your existing IT management tools to ensure that no data needs to be manually re-entered by users.
Over 30 enterprises across the globe as of March 2015, including Telstra, ING Direct, Boots UK, News Corporation, and GSK, manage $5 billion of releases using Plutora.
About this review
This was an unusual review, since Enterprise Release Management is an emergent discipline, combining both technical release management and project-delivery capabilities, but with an operational focus.
As an emergent discipline, there are no standard ways of dealing with the inherent challenges in this field, so the assessment of quality comes both from a mixture of judgements made during the review, in-depth use* and trusted industry awards. In this last category, Plutora has pedigree: named by Gartner as ‘Cool Vendor of the Year’ in 2014.
This review was written on the basis of a maximum two-hour demonstration of the five key capabilities by each of the vendors. It is not exhaustive, and some capabilities which you especially require may be present in the tools but not covered in this review. As such, if you believe that Enterprise Release Management tooling is appropriate for your organisation, it is worth speaking to Plutora to ascertain best fit for your specific objectives.
*and thus not part of this review
Tracking and managing a release with repeatable & templated processes
Tracking the entire release portfolio and presenting this information to diverse stakeholders
Managing resource and environment usage
Using data inside or connected to the tool and built-in intelligence to help inform release activities.
A single tool to remove reliance on spread sheets, calendars or manual processes.
Plutora is purpose built to enable end-to-end release tracking in a single solution. It comprises 3 modules: Enterprise Release Manager, Test Environment Manager and Deployment Manager.
A release in Plutora comprises a number of customer-specified phases that focus on their respective exit gates, and each has a checklist of activities or exit criteria a release manager would need to have completed before moving to the next. For example, a ‘QA’ phase exit gate would be reliant on, say, Completion of Functional Testing, Completion of Performance Testing and Signed Off Test Completion Report as activities required to move to the next phase.
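That phase-and-gate structure maps naturally onto a simple data model. A sketch of the idea only; the class and field names here are ours for illustration, not Plutora’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    exit_criteria: dict = field(default_factory=dict)  # activity -> done?

    def gate_open(self):
        """The exit gate opens only when every activity is complete."""
        return all(self.exit_criteria.values())

# The 'QA' example from the review: three activities gate the exit.
qa = Phase("QA", {
    "Functional testing complete": True,
    "Performance testing complete": True,
    "Test completion report signed off": False,
})
print(qa.gate_open())  # False: the release cannot move to the next phase
```

Modelling gates as checklists rather than free text is what makes the cloning approach described next cheap: a new release starts from a proven structure with every box unticked.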
Once a release ‘model’ has been built using these phases and checklists, it is then very easy to clone this to a new release. According to Plutora, many of its customers prefer using this cloning approach to template their releases rather than building dedicated theoretical templates which may themselves require overhead to manage and keep up to date. The cloning approach allows a maturing release management organisation to learn and adapt quickly to changing situations – taking only the elements they know work and evolving them organically.
Additionally, some customers of Plutora also use this cloning feature and general checklist features to build operational maintenance checklists – so, although the tool is heavily targeted at the change delivery side of the organisation, it can also be of significant benefit to operational and technical maintenance functions.
The templating and checklist functionality doesn’t stop there. Implementing a release is another area often devolved to shared spreadsheets, and Plutora delivers not just a single-source-of-truth replacement for them but, in Deployment Manager, a clever real-time command-and-control capability that lets a single release manager monitor, trigger and track deployment steps across multiple releases simultaneously, with internal or external delivery teams.
Once the work has been put into ensuring that the individual releases are accurate, the aggregate view starts to take shape and provide value. The Plutora Enterprise Release Schedule provides a tailored view of all releases. The schedule can be detailed, showing all phases, gateways and environments, or quickly summarised into a powerful senior stakeholder view. The schedule also supports diverse delivery approaches, whether agile, continuous delivery or more traditional waterfall as well as the simple operational checklists mentioned earlier.
However release management tooling is not just about visibility of the release schedule or implementing releases effectively. Plutora has two additional features, the release capacity planner and the systems impact matrix which add data-driven intelligence to release management.
The systems impact matrix is a simple-seeming view of dependencies between systems and releases. On its own this is a useful tool, giving a summary of which releases touch which applications. But the really clever bit is how Plutora identifies not only which systems are being touched by the release, but which linked systems are also impacted and thus need a regression test. This feature alone could make the business case to purchase Plutora.
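The regression-flagging logic amounts to a reachability question over the system dependency graph: any system linked, directly or transitively, to one the release touches is a regression-test candidate. A sketch of that idea, not Plutora’s actual implementation:

```python
from collections import deque

def regression_candidates(touched, dependencies):
    """Systems reachable from the touched set via dependency links are
    flagged for regression testing (the touched systems themselves are
    excluded, since they get full testing anyway)."""
    seen, queue = set(touched), deque(touched)
    while queue:
        system = queue.popleft()
        for linked in dependencies.get(system, []):
            if linked not in seen:
                seen.add(linked)
                queue.append(linked)
    return seen - set(touched)

# Hypothetical landscape: billing feeds crm and reporting; crm feeds email.
deps = {"billing": ["crm", "reporting"], "crm": ["email-gateway"]}
print(sorted(regression_candidates({"billing"}, deps)))
# ['crm', 'email-gateway', 'reporting']
```

The transitive step is the point: a release that only touches billing still flags the email gateway, two hops away, which is exactly what a spreadsheet view tends to miss.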
The release capacity planner is also a useful feature. It allows release resource ‘containers’ (e.g. number of test cases) to be specified and tracked in an accessible, easily summarised view, letting release managers clearly articulate release capacity. My only major criticism of Plutora is that this capacity specification is manual, performed by the release manager. Since many ALM tools with which Plutora can share data (e.g. Jira) already hold the development and test effort within their own records, it would seem logical for Plutora to take in this change-level data and aggregate it into a total release effort measure (adding extra overhead as necessary for release-level activities). The overall size of the release container can still be defined by the release manager, but the usage of each container could, and in fact should, come from the individual change/feature records – and Plutora doesn’t do this. Despite this, the capacity tool is still incredibly useful for discussions with the business about setting realistic delivery expectations, and customised fields can be added to incorporate additional information relevant to the release management process.
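The aggregation asked for here is straightforward in principle: sum the change-level estimates held in the ALM tool and compare them against the release container. A sketch of what that could look like; the field names and overhead factor are hypothetical, not a real Jira or Plutora schema:

```python
def release_capacity_usage(container_size, changes, release_overhead=0.1):
    """Aggregate per-change effort into release-level usage, adding a
    fixed overhead allowance for release-scale activities."""
    change_effort = sum(c["estimate"] for c in changes)
    used = change_effort + round(change_effort * release_overhead)
    return {"used": used, "remaining": container_size - used}

# Estimates as they might arrive from an ALM tool's change records:
changes = [{"key": "PROJ-1", "estimate": 20}, {"key": "PROJ-2", "estimate": 50}]
print(release_capacity_usage(100, changes))  # {'used': 77, 'remaining': 23}
```

The release manager still owns the container size; only the usage figure is derived from the change records, which is precisely the split the review argues for.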
The last core area of functionality is test environment management. Test Environment Management in Plutora is fairly tightly coupled with the rest of the release functionality in planning and executing releases, but there are a couple of additional features worth noting.
Plutora contains an environment request and approval tracking system to allow projects or releases to book time in specific environments. Combined with the system impact matrix described above, Plutora’s ability to ingest data from external configuration/discovery tools and the ability to define complex environment groups of related systems makes for a powerful management suite to make better use of non-production environments.
The Test Environment Manager also has its own version of the release schedule (from an environment-centric view) and can likewise be used to easily identify and articulate over- or under-utilisation at a glance. In addition, by specifying stakeholders within the tool and enabling message broadcasts, clashing stakeholders can be made aware of contentions and work to resolve the issue.
This feature actually extends throughout all of Plutora. Stakeholders, systems, organisations and more are specified when initially configuring the tool and message broadcasting can be selectively activated at release or environment level.
Finally, reporting. Plutora has obviously invested considerable time and effort in getting reporting right, with pre-configured single-page overview reports providing real value to release managers as well as keeping senior stakeholders happy. The reporting dashboard is also configurable, allowing release managers to build graphs and displays from data within the system and then combining these into a personalised dashboard. This isn’t revolutionary functionality, but it is solid and well executed in Plutora.
Enterprise Release Management tooling is ostensibly about removing the array of spreadsheets that proliferate to manage scope, timelines, environment usage and cutover plans. Plutora not only does this exceedingly well, it also uses the opportunity to add some intelligence and polish, making people’s lives easier and improving the quality of the releases passing through it.
Plutora is the tool one release manager would build for another. Plutora has taken existing practices, made them collaborative, structured and business-ready, then extended them to both pre-empt and answer the most common questions asked of release managers or that release managers ask of themselves.
Feature by Feature Summary Scoring
Tracking and managing a release with repeatable & template processes
Tracking the entire release portfolio and presenting this information to diverse stakeholders
Managing resource and environment usage
Using data inside or connected to the tool and built in intelligence to help inform release activities.
A single tool to remove reliance on spreadsheets, calendars or manual processes.
★★★★★ – Advanced features well developed
★★★★ – Advanced features present
★★★ – Solid coverage of basic requirements with some additional/advanced features
★★ – Basic requirements covered, some less thoroughly than expected or with minor gaps
★ – Not all basic requirements, significant gaps
Plutora is the tool which, in the reviewer’s opinion, embodies the term ‘Enterprise Release Management’.
It will work well in busy, large IT organisations and whilst it has a place in supporting operations, it feels targeted firmly at the development/delivery side of the IT organisation where teams of project managers, release & environment managers and more can collaborate with tooling they already instinctively know how to use.
How do organisations plan tens or hundreds of releases a year across project delivery, vendor patching, infrastructure changes and more? How do they manage competition for access to test environments, ensure they spot colliding production releases in good time and avoid overbooking their test teams?
How do they articulate this enterprise-wide release roadmap to senior stakeholders, customers and IT staff?
Traditional answers to these questions usually take the form of project plans and spreadsheets. They rely on regular meetings between project office, operations & technical staff to keep them in sync, and are rarely, if ever, accurate in real time.
Today, a new breed of release management planning tools is emerging. Enterprise Release Management tools are agnostic of functional requirements or constituent change requests, and they don’t manage the actual deployment of code. They simply allow the entire IT organisation to track and manage the entire portfolio of releases across all environments. They have the scope breadth of a Change Schedule, but go into more detail.
At their simplest, they are a single source of the truth for the multitude of spreadsheets they replace, but most can pivot this data to provide people with the information they care about in customised and intuitive views – from CIO roadmaps to a test manager’s forward work plan.
Ultimately, they give Service Operations a reliable, realtime view of all upcoming releases with at-a-glance assurance that the right governance has been completed for each. And since they span both development and operations, many are starting to be called DevOps Release tools.
What does an Enterprise Release Management tool do?
Plans (and scopes) a release – Allows the construction of an end-to-end release plan following a user-customisable structure, which could map to, e.g., an organisation’s project governance gateways. Should be able to record both governance activities/milestones and physical activities in multiple environments (deployments, test runs etc.). Ideally should be templatable and re-usable.
Plans ALL releases – Takes the individual releases and plots them against a common timeline to spot resource over/under-utilisation and go-live collisions, and to tell operations when to brace for action.
Manages environment & resource usage – Pivots the data from all releases to show an environment- or resource-centric view of the same data. Helps answer questions such as “what’s happening in our Pre-Prod environment next week?” or “can I deliver everything I promised?”
Presents data in various views depending on audience – The steering committee has different needs to those of a test manager, and the project needs to be able to see anything relevant with a few clicks. Does the tool allow varying levels of detail to be presented over user-defined timescales in a clean and coherent way no matter the format?
And not forgetting… – Role based access to stop people from seeing the wrong things (or changing them), the ability to dynamically import and update change requests from other tools (data exchange mechanisms such as XML and RESTful APIs are becoming the norm in service tools).
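The last point, dynamically importing change requests from other tools, typically means mapping another system’s JSON into the release tool’s own fields. A hedged sketch; the payload shape and target field names are invented for illustration:

```python
import json

def import_change_requests(payload_json):
    """Map change requests exported from an external ITSM/ALM tool into
    the release tool's internal shape (field names are illustrative)."""
    payload = json.loads(payload_json)
    return [
        {
            "external_id": cr["id"],
            "title": cr["summary"],
            "target_release": cr.get("fixVersion", "unscheduled"),
        }
        for cr in payload["changeRequests"]
    ]

sample = '{"changeRequests": [{"id": "CHG-42", "summary": "Patch auth service"}]}'
print(import_change_requests(sample)[0]["target_release"])  # unscheduled
```

Defaulting unmapped records to an “unscheduled” bucket, rather than dropping them, keeps the enterprise schedule honest about work that hasn’t yet been placed in a release.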
To test these, we’re constructing an entire fictitious company with a busy year of releases including new system deliveries, infrastructure refreshes, and monthly and quarterly patching to cloud and on-premise services. We’re covering both agile and waterfall development and delivery methodologies, and even introducing some DevOps practice. We’re sharing this case study with the participating vendors, and we’re also going to make our own spreadsheet versions of the plans (which we won’t share with the vendors in advance). Our case study also includes some fairly thorny problems which a typical organisation could encounter, e.g. scheduling conflicts, people not following process, and people whose idea of planning is far removed from the reality of their customers’ needs.
I was recently challenged by Mike Orzen (co-founder of Lean IT practices and my mentor) to answer a simple question: what do you think the purpose of change and release management is in ITIL or any other IT best practice framework?
I started by asking what aren’t they?
Change is not about doing the change, and release is not about managing the approval of a request to change. Change helps me make a decision; it answers the question WHY with a “yes” or “no”. But “yes” or “no” to what?
How many times has a request been approved, but what was delivered did not match what was approved? If IT has no value until it releases something that is usable to a customer, we better be sure that “yes” and “approved” are used for getting an organisation to be competitive, compliant, reliable, secure and cost-efficient as quickly as possible. Lean helps by creating a value stream from idea to solution, in a similar fashion to the ITIL lifecycle of service strategy to service operation. In both cases, the solution to the customer needs to be delivered as timely as possible.
You can’t manually approve every request, as this would block flow in the IT value stream. The creation of standard change types assists in identifying low-impact, repetitive and easy-to-fix types of request. Lean IT likes standard work: once you know that a request or change will not place the organisation at risk of losing a customer or wasting money, you can automate the decision process and flow the request to the design phase, if required. If it does impose a risk of loss, the request can be routed to a more formal approval process that can itself be leaned over time.
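The routing rule described here – standard changes flow straight through, anything risky goes to formal approval – can be written as a simple decision function. A sketch with illustrative criteria and thresholds, not a prescription:

```python
def route_change(change):
    """Auto-approve pre-assessed standard changes; everything else goes
    to the formal approval queue. Criteria here are illustrative: low
    risk, an agreed documented procedure, and a track record."""
    is_standard = (
        change.get("risk") == "low"
        and change.get("documented_procedure", False)
        and change.get("previous_successful_runs", 0) >= 5
    )
    return "auto-approve" if is_standard else "formal-approval"

print(route_change({"risk": "low", "documented_procedure": True,
                    "previous_successful_runs": 12}))  # auto-approve
print(route_change({"risk": "high"}))                  # formal-approval
```

The point of encoding the rule is that the decision itself becomes standard work: it can be measured, audited and leaned over time, exactly as the formal path can.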
Change should control every aspect of a release (the doing process of an approved change), so we have to look at all of the places change gets involved to help design a fast, flowing stream across IT, and ultimately one that works from the customer (pull) instead of IT pushing releases to the customer.
So where does or should change get involved?
The above could form the basis of a release process. I am sure more questions are needed, but if we allow the various teams to continuously improve the above, we can release valued services into the organisation. The teams might use lean methods such as kanban boards to control work, kaizen to improve work, and agile or DevOps to get services developed and agreed. Another aspect of lean that the table demonstrates is waste removal. If the change gateposts help to reduce defects, re-work, or wait time between tests via automation or script reuse, for instance, then the flow of the value stream is enhanced end to end. Removing or automating/facilitating the gates in a formal process will also help increase flow, resulting in better time to market, quality enhancement, productivity improvement and cost reduction.
Configuration management – the needed process for ITSM & lean success
To be effective (first) and efficient (second), we need data. Where are requests, business cases, regulatory and architectural requirements for design, code, tests, or service acceptance criteria kept, for example? We turn data into information to gain knowledge to deliver value. Configuration management is the data-to-knowledge management process. The information in a configuration management database (CMDB) can be used to enhance the way a process, team or tool performs. For instance, if we make the CCR cycle (change to configuration to release, then round again: change to configuration to release…) as fast as possible, then the agility of creating solutions in a timely manner becomes our standard culture, or way of working.
How do we start?
I suggest by mapping the value stream, as much as possible, from end to end. At first you may only be able to do the parts internal to IT but keep adding until you have the entire value stream from requester to customer mapped. Lean value stream mapping helps improve how an IT organisation, business enterprise and partners create and improve ways of work. Get as many representatives as possible involved in a mapping exercise and use post-it notes to visualise the current way of working. Try to get the people that do the work involved as this generates buy-in for future change improvements. Your post-it notes could include time of steps, teams involved, tools used, etc. Don’t trust what you create in a conference room. Go out and see (lean calls this “gemba”) to validate your understanding.
Now return to the conference room armed with your knowledge and improve the flow of the stream (steps). Add a few measures to control the flow of the stream and most importantly BEGIN. Don’t wait for the tool changes or other procrastination reasons: start using the new way. Check how changes are approved, the steps performed to create a release, the results of any improvement (agreed and tracked) and use the CMDB to maintain the information such as your review of other ITSM processes. You can continue to create a unified view of your IT practices, processes, tools, capabilities, etc. The lean trick is to make checks or improvement a daily part of work, not something owned by the program team, but by the people doing the activities all along the stream. Let them own and celebrate the success.
Set some stretch goals for how long it should take to agree a request, how fast to perform a release, and so on. Look at quality, productivity and stock reduction (the number of tests or environments needed) as examples. PLEASE note that cost reduction is a benefit, but if you set it as a target it may be viewed as a job-cutting exercise, when it should be viewed as a job enhancement opportunity.
Please let me know what you think and try blending Lean into your ITSM world. Have fun doing it!
Following on from my trip to itSMF Norway last week, I wanted to share with ITSM Review readers my thoughts on Gene Kim’s presentation “The Phoenix Project: Lessons Learned in Helping Our Businesses Win”, along with some of the key pieces of advice that he presented.
Gene kicked off the first full day of the conference with his keynote presentation about IT and DevOps. If you’re not familiar with his book then I’ll start by highly recommending that you head over to Amazon to purchase a copy. If my recommendation alone isn’t enough to entice you to part with your hard earned cash, then read this article by Gene first.
Gene’s article provides a good summary of his session (along with some great tips), but the bottom line of the presentation was that, to quote Gene, “IT is in a downward spiral, it’s trapped in a horror movie that keeps playing over and over again”, and DevOps is a way to help fix this.
Advice from Gene
Some of the advice that was provided during his session included:
Never forget that the best will always get better. Back in 1979 who’d have thought that anything could surpass the amazing Sony Walkman?
In order to win in business we need to out experiment our competitors.
Be fearless in breaking things. Mistakes and errors are a key source of learning.
When it comes to DevOps and metrics, measuring lead time (i.e. the time it takes to go from “raw materials” to “finished goods”) is a much more effective metric than measuring deploys per day.
When creating a DevOps process it’s important to include a “handback” stage. This way, if necessary, fragile services can be returned to Development if Operations don’t think they are up to scratch.
Develop smaller changes frequently to avoid painful large-scale deployments in the future.
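Gene’s lead-time point can be illustrated with a tiny calculation. This is just a sketch of the metric, using hypothetical change records and field names, not anything from the presentation itself:

```python
from datetime import datetime

# Hypothetical change records: when work was accepted ("raw materials")
# and when it reached production ("finished goods").
changes = [
    {"accepted": datetime(2013, 3, 1, 9, 0), "deployed": datetime(2013, 3, 4, 17, 0)},
    {"accepted": datetime(2013, 3, 2, 10, 0), "deployed": datetime(2013, 3, 9, 12, 0)},
]

def lead_time_hours(change):
    """Lead time: elapsed time from acceptance to production deploy."""
    return (change["deployed"] - change["accepted"]).total_seconds() / 3600

times = [lead_time_hours(c) for c in changes]
average = sum(times) / len(times)
print(f"average lead time: {average:.1f} hours")  # → average lead time: 125.0 hours
```

Unlike deploys per day, this number captures the whole journey from request to delivered value, which is why it is the more telling measure.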
Other things we learnt in this session that you might not know:
A survey of the room showed that it took most attendees months, and even quarters, to deploy a change request. Did you know that an effective DevOps team can deploy a change request in days, and even hours?
Overall a thought-provoking presentation, and one that I very much enjoyed. Not being a total ‘techie’ I confess to never really, fully understanding the concept of DevOps before. Now thanks to Gene, I think I might even be able to confidently explain the benefits to others.
In my thirteen-year journey of studying high-performing IT organizations, I’ve started to see a new and unsettling trend. Whenever I mention ITIL and IT Service Management in presentations and briefings, people in the audience snicker. When I ask why, they roll their eyes and talk about the shrill, hysterical bureaucrats who suck the life out of everyone they touch, doing everything they can to slow the business down and prevent everyone from getting work done.
This is simply not true. In fact, I’ll argue every time that ITSM skill sets are more important than ever in a world of ever-quickening business tempo.
However, an even more troubling trend is that ITSM practitioners will dismiss emerging movements such as “DevOps,” suggesting that it’s a passing fad.
It is my genuine belief that the patterns and processes that emerge from DevOps are the inevitable outcome of applying Lean principles to the IT value stream. It is an inexorable force that will likely change IT in a manner we haven’t seen since the birth of client-server computing in the 1980s.
More importantly though, ITSM practitioners are uniquely equipped to help in DevOps initiatives, and create value for the business.
The DevOps Movement fits perfectly with ITSM. My goal is to help you become conversant with DevOps and aid you in recognizing the practices when you see them. I hope this article will illustrate how information practitioners can contribute to this exciting organizational journey.
What Is DevOps?
The term “DevOps” typically refers to the emerging professional movement that advocates a collaborative working relationship between Development and IT Operations, resulting in the fast flow of planned work (i.e. high deploy rates), while simultaneously increasing the reliability, stability, resilience and security of the production environment.
Why is it that Development and IT Operations are singled out? Because that is typically the value stream that is between the business (where requirements are defined) and the customer (where value is delivered).
The origins of the DevOps movement are commonly placed around 2009, as the convergence of numerous adjacent and mutually reinforcing movements, most notably the “10 Deploys A Day” presentation given by John Allspaw and Paul Hammond and the Agile system administration movement (Patrick Debois).
Currently, DevOps is more like a philosophical movement, and does not yet have a precise collection of practices, descriptive or prescriptive (e.g. CMM-I, ITIL, etc.). On the other hand, it is an incredibly vibrant community of practitioners who are interested in replicating the performance outcomes and culture described so vividly by organizations such as Etsy, Amazon, Netflix, Joyent and so forth.
DevOps aims to address a core, chronic conflict that exists in almost every IT organization, a conflict so powerful that it practically pre-ordains horrible outcomes, if not abject failure. The problem? The VP of Development is typically measured by feature time to market, which motivates making as many changes as quickly as possible. The VP of IT Operations, on the other hand, is typically measured by uptime and availability.
Until very recently, it was impossible to get both desired outcomes of fast time to market and sufficient reliability and stability. Because of these diametrically opposed outcomes (“make changes quickly” vs. “make changes very carefully”), Development and IT Operations were in a state of constant inter-tribal warfare, with ITSM practitioners put right in the middle.
Although many people view DevOps as a backlash to ITIL (IT Infrastructure Library) or ITSM, I take a different view. ITIL and ITSM are still the best codifications of the business processes that underpin IT Operations, and they actually describe many of the capabilities needed in order for IT Operations to support a DevOps-style work stream.
I am part of a team who wrote “The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win”, which codifies the “good to great” transformation we’ve observed these organizations making. Our goal is to create a prescriptive guide that shows how Development, IT Operations and ITSM practitioners can work together to create phenomenal organizational outcomes that none of them could achieve alone.
What are the underpinning principles of DevOps?
In “The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win” we describe the underpinning principles from which all the DevOps patterns can be derived as “The Three Ways.” They describe the values and philosophies that frame the processes, procedures and practices, as well as the prescriptive steps.
The First Way emphasizes the performance of the entire system, as opposed to the performance of a specific silo of work or department — this can be as large as a division (e.g., Development or IT Operations) or as small as an individual contributor (e.g., a developer or system administrator).
The focus is on all business value streams that are enabled by IT. In other words, it begins when requirements are identified (e.g., by the business or IT), are built in Development, and then transitioned into IT Operations, where the value is then delivered to the customer as a form of a service.
The outcomes of putting the First Way into practice include never passing a known defect to downstream work centers, never allowing local optimization to create global degradation, always seeking to increase flow, and always seeking to achieve profound understanding of the system (as per Deming).
The Second Way is about creating the right to left feedback loops. The goal of almost any process improvement initiative is to shorten and amplify feedback loops so necessary corrections can be continually made.
The outcomes of the Second Way include understanding and responding to all customers, internal and external, shortening and amplifying all feedback loops, and embedding knowledge where we need it.
The Third Way is about creating a culture that fosters two things: continual experimentation, which requires taking risks and learning from success and failure; and the understanding that repetition and practice are the prerequisites to mastery.
We need both of these equally. Experimentation and risk taking are what ensure that we keep pushing to improve, even if it means going deeper into the danger zone than we’ve ever gone. And we need mastery of the skills that can help us retreat out of the danger zone when we’ve gone too far.
The outcomes of the Third Way include allocating time for the improvement of daily work, creating rituals that reward the team for taking risks, and introducing faults into the system to increase resilience.
What Are The Areas Of DevOps?
We divide up the DevOps patterns into four areas:
Area 1: Extend Development into IT Operations:
In this area, we create or extend the continuous integration and release processes from Development into IT Operations, integrating QA and infosec into the work stream, ensuring production readiness of the code and environment, and so forth. The steps include:
Create the single “repository of truth” containing both the code and environments
Create the one-step Dev, Test and Production environment build process
Extend the deployment pipeline processes into production
Define roles and integrate QA, Infosec, Ops/CAB into Dev workstream
First, we put everything needed to rebuild the service from scratch into a common repository, including both the application and the environment (i.e., operating system, databases, virtualization, and all associated configuration settings).
Next, we make a one-step environment creation process available at the earliest stages of the Development project. By using a common build process and requiring that Development be responsible for ensuring that the code and the environment work together, we’ll have an unprecedented level of production readiness, even at the earliest stages of the development project.
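The one-step build idea might be sketched as a single entry point that creates any environment from the shared repository of truth. The step names and repository fields below are illustrative assumptions, not a specific tool’s API:

```python
def build_environment(repo):
    """One-step build: code and environment come from the same repository,
    so Dev, Test and Production are created by the identical process."""
    steps = [
        ("checkout", repo["url"]),
        ("provision_os", repo["environment"]["os"]),
        ("apply_config", repo["environment"]["config"]),
        ("deploy_code", repo["code"]),
        ("smoke_test", "post-build checks"),
    ]
    log = []
    for name, detail in steps:
        log.append(f"{name}: {detail}")  # stand-in for the real action
    return log

# Hypothetical repository of truth: the application plus its environment.
repo = {
    "url": "git://example/app",
    "code": "app-1.0",
    "environment": {"os": "linux", "config": "base.yaml"},
}
print("\n".join(build_environment(repo)))
```

Because every environment is built by the same function from the same inputs, “works on my machine” differences between Dev and Production have nowhere to hide.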
This impacts the ITSM process areas of release, change, and configuration management. The ways that ITSM practitioners can actively integrate into the DevOps value stream includes the following:
Find the automated infrastructure project (e.g., puppet, chef) that provisions servers for deployment. We can help that team with our release management readiness checklists, security hardening checklists and so forth, integrating them into the automated build process.
Define pre-authorized changes and deployments, and ensure that production promotions are captured in a trusted system of record that can be reviewed and audited.
Define changes and deployments that require authorization, such as security functionality that is relied upon to secure systems and data (e.g., user authentication modules). The goal is to ensure that changes that could jeopardize the organization (e.g., the infamous 2011 Dropbox failure where customers discovered that authentication was disabled for four hours) never occur.
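As a sketch of what capturing promotions in a trusted system of record could look like, the gate below deploys pre-authorized change types immediately and holds everything else for explicit authorization. The category names and record format are assumptions for illustration only:

```python
from datetime import datetime, timezone

PRE_AUTHORIZED = {"standard-deploy", "config-refresh"}  # hypothetical pre-assessed categories

audit_log = []  # stand-in for a trusted, reviewable system of record

def promote(change_type, who, description):
    """Deploy immediately if the change type is pre-authorized; otherwise
    hold it for explicit authorization. Either way, capture an auditable
    record of the promotion attempt."""
    approved = change_type in PRE_AUTHORIZED
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "type": change_type,
        "who": who,
        "description": description,
        "status": "deployed" if approved else "pending-authorization",
    }
    audit_log.append(record)
    return record["status"]

promote("standard-deploy", "alice", "routine application release")
promote("auth-module-change", "bob", "swap user authentication library")
print([r["status"] for r in audit_log])
```

The point is that velocity and scrutiny are not traded off: routine changes flow without waiting, while anything touching security-critical functionality stops at the gate, and everything leaves an audit trail.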
Area 2: Create IT Operations feedback into Development
The steps in this area ensure that information from IT Operations is radiated to Development and the rest of the organization. IT Operations is where value is created, and this feedback is required in order to make good decisions.
The specific steps in this area include:
Make all infrastructure data visible
Make all application data visible
Modify the incident resolution process and blameless post-mortems
Monitor the health of the deployment pipelines
The first step overlaps with the ITSM process area of event management, while the second step requires creating the monitoring infrastructure so that there’s no excuse for developers not to add telemetry to their applications (e.g., “since it only requires one line of code, even the laziest developer will instrument their code”).
The third step then enables IT Operations and Development to resolve incidents quickly, by ensuring that all relevant information from the entire application stack is at hand to determine what might have caused the incident, and then to restore service.
ITSM practitioners can help by ensuring that the process areas of event management, as well as incident, problem and knowledge management are modified to incorporate Development.
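The “one line of code” idea above can be sketched as a decorator, so that instrumenting an application function really is a single added line. The metric store and names here are illustrative assumptions, not a particular monitoring product:

```python
import time
from collections import defaultdict

metrics = defaultdict(list)  # stand-in for a real telemetry backend

def instrument(fn):
    """Record a duration sample for every call of the wrapped function."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        metrics[fn.__name__].append(time.perf_counter() - start)
        return result
    return wrapper

@instrument  # the single line a developer adds to get telemetry
def checkout(order_id):
    return f"order {order_id} processed"

checkout(42)
print(len(metrics["checkout"]))  # one timing sample recorded so far
```

Once application data flows into the same place as infrastructure data, Development and IT Operations are looking at the same facts when an incident hits.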
Area 3: Embed Development into IT Operations
According to the Second Way, the goal of the steps in this area is to create knowledge and capabilities where they are needed, and to shorten and amplify feedback loops. A delightful quote that frames this comes from Patrick Lightbody, CEO of BrowserMob: “We found that when we woke up developers at 2am, defects got fixed faster than ever.”
To facilitate creating tribal knowledge within IT Operations and shared accountability for uptime and availability with Development, the steps in this area include:
Make Dev initially responsible for their own services
Return problematic services back to Dev
Integrate Dev into the incident management processes
Have Dev cross-train Ops
Area 4: Embed IT Operations into Development
This area is the reciprocal of Area 3, and the goal is to create the service design and delivery equivalent of design for manufacturing (DFM). In plant engineering, DFM recognizes that the primary customer of engineering is the manufacturing personnel, and therefore one of the engineering goals is to design parts for easy assembly, minimizing the likelihood of parts being put on backwards, over-tightened, or damaged during transit or assembly, and so forth.
Similarly, in addition to ensuring that IT Operations needs are integrated into the daily Development processes of design, requirements specification, development and testing, the product and processes are designed with resiliency in mind.
The steps in this area include:
Embed Ops knowledge and capabilities into Dev
Design for IT Operations
Institutionalize IT Operations knowledge
Break things early and often
This includes embedding or liaising IT Operations resources into Development, creating reusable user stories for the IT Operations staff (including deployment, management of the code in production, etc.), and defining the non-functional requirements that can be used across all projects.
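The reusable non-functional requirements mentioned above might be sketched as a shared checklist that every project is scored against. The requirement names and project fields are hypothetical:

```python
# Hypothetical shared non-functional requirements, defined once and
# reused across all Development projects.
NFRS = ["logging", "monitoring_hooks", "backup_restore", "graceful_restart"]

def readiness_gaps(project):
    """Return which operational requirements a project has not yet met."""
    return [nfr for nfr in NFRS if nfr not in project["meets"]]

project = {"name": "phoenix", "meets": {"logging", "monitoring_hooks"}}
print(readiness_gaps(project))  # → ['backup_restore', 'graceful_restart']
```

A list like this makes “design for IT Operations” concrete: the gaps become user stories in the backlog rather than surprises at deployment time.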
It is my firm belief that ITSM and the DevOps movement are not at odds. Quite to the contrary, they’re a perfect cultural match. As DevOps gains momentum I’m excited by what we can achieve using a winning combination of the two. It is my sincere hope that by reading this article, you’ll better understand what DevOps is, see why it is important and be energized by the possibilities it creates, and generate some ideas of how to put some of these practices into place in the IT organizations you help support.
In May 2003, Nicholas Carr wrote a Harvard Business Review article entitled “IT Doesn’t Matter”. In it Mr. Carr proposed that IT was being marginalized and could be thought of as a commodity (remember, this was just after the dot-com bust).
Seems that thinking hasn’t changed much in the past 10 or so years. IT is challenged daily to just keep the lights on, at best, and, if all goes well, maybe try to keep up with the needs of the business much less get ahead of the game.
For those of us who are immersed in IT Service Management, that thought is, at times, a bitter pill to swallow. It is true that the table stakes for IT are to maintain and manage operational stability, but there is more to a day, week or month in the life of IT than keeping the lights on (KTLO). If we truly embrace the notion of a service – “delivering value by facilitating customer outcomes” – then staying abreast of, or anticipating and preparing for, the future of the business is, or should be, the IT mantra. The question is: can IT do both?
Gene Kim, Kevin Behr and George Spafford recently published The Phoenix Project. Their book develops a landscape of principles and practices that attempt to answer that question. The book, written as an allegory, focuses on the trials and tribulations of Bill Palmer, recently named VP of IT Operations at Parts Unlimited Inc. From day one on the job, Bill is challenged to both stabilize operations AND deliver on a mission-critical project – a project that could spell disaster if it fails. As the story unfolds, the authors highlight ideas that should be on every IT manager’s improvement opportunities list. I would think everyone would like a peek at practical advice for how to deal with:
Demanding business leadership
Overwhelming project list
At the upcoming Pink Elephant IT Service Management Conference, I will be presenting some of my insights from the book on Sunday afternoon and again on Wednesday morning.
There are many great discussion topics interlaced throughout the story. My session will zero in on what happens when Bill reluctantly falls under the guidance and tutelage of Eric Reid, a candidate for the Board of Directors. Eric leads Bill through a set of hands-on exercises to learn some key principles instrumental to elevating IT’s overall performance. Among the many insights, Eric continually hammers home the need for Bill to find ways for IT to embrace the “Three Ways”.
First Way – Create a fast flow of work as it moves from Development into Operations
Second Way – Shorten and amplify feedback loops to fix quality at the source and avoid rework
Third Way – Create a culture that simultaneously fosters experimentation, learning from failure and understanding that repetition and practice are prerequisites to mastery.
So why read The Phoenix Project?
I have been recommending to my Pink Elephant clients to pick up a copy of the book and add it to their nightstand reading. Several reasons for this:
I’m sure you will find yourself at some point seeing your own situation through Bill’s eyes. I found reflecting on the challenges Bill was facing, and some of the “ah-ha” solutions the authors brought forward, to be highly instructive, especially as conversation starters for ITSM teams at various stages of their program.
Many of the ideas being kicked around today in the blogosphere and in water-cooler talk are fleshed out in a practical setting. Granted, the circumstances don’t exactly match what my clients are dealing with, but it isn’t a huge leap to find resonance with how the practices can be incorporated into their own ITSM programs.
Lastly, it is a story after all. One that we have all lived through to some extent. An entertaining read and, as a side note, there is some visceral pleasure in seeing the antagonist get her comeuppance.
Why attend my session?
My focus for this session is to distill the many points and concepts that Bill and his team use to solve their challenges into a pragmatic approach for your ITSM program.
During my sessions I will dig deeper into each of the Three Ways. For instance, in the First Way we will learn how IT must understand the four types of IT work and how that work is managed through what I call “the Funnel and the Pipe”, or the IT Value Stream. In the Second Way we will talk about the “Tyranny of Technical Debt”, its sources and potential ways to avoid it. And finally, my discussion of the Third Way will encompass Improvement Katas and DevOps.
I hope that you will add one of my sessions to your Conference Optimizer. If we don’t get a chance to connect during my workshops, then look for me during the networking events each night.
This will be the best Pink Elephant Conference yet! I look forward to meeting you in Vegas – see you there.