

In the spirit of the holidays, we present to you:

The Twelve Days of DevOps

On the first day of DevOps
I finally achieved:
Collab’rative IT

On the second day of DevOps
I finally achieved:
2 Git Commits
and Collab’rative IT

On the third day of DevOps
I finally achieved:
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the fourth day of DevOps
I finally achieved:
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the fifth day of DevOps
I finally achieved:
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the sixth day of DevOps
I finally achieved:
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the seventh day of DevOps
I finally achieved:
7 Passed Phase Exits
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the eighth day of DevOps
I finally achieved:
8 Milestones Accomplished
7 Passed Phase Exits
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the ninth day of DevOps
I finally achieved:
9 Test Suites Tested
8 Milestones Accomplished
7 Passed Phase Exits
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the tenth day of DevOps
I finally achieved:
10 Bugs Resolved
9 Test Suites Tested
8 Milestones Accomplished
7 Passed Phase Exits
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the eleventh day of DevOps
I finally achieved:
11 Kanban Boards
10 Bugs Resolved
9 Test Suites Tested
8 Milestones Accomplished
7 Passed Phase Exits
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT

On the twelfth day of DevOps
I finally achieved:
12 Happy Biz Execs
11 Kanban Boards
10 Bugs Resolved
9 Test Suites Tested
8 Milestones Accomplished
7 Passed Phase Exits
6 Chef Recipes
5 Good Deploys
4 Gantt Charts
3 Jenkins Builds
2 Git Commits
and Collab’rative IT !!!

This is Part Four in a four-part series. In Part One, I focused on the critical first step of defining DevOps with a purpose by thinking about DevOps in the context of your organization’s applications. In Part Two, I provided four tips to fostering a DevOps culture in your organization. In Part Three, I discussed the role of tools and how they can amplify (or constrain) individual and team abilities as well as work across teams.

In this final, fourth part of the series, I’m going to weave the three previous topics of (A)pplications, (C)ulture, and (T)ools together and show how you can ACT with purpose to start your own DevOps transformation. Your DevOps strategy should incorporate all three aspects: applications, culture, and tools. Given the breadth of those three areas, though, it’s easy to feel overwhelmed from the start. Here’s a tip I learned from a mentor a few years ago: whether you are a CEO, VP, or team leader, smart leaders set a vision (often at least two or three years out) and then make small decisions every week with the goal of generally moving the organization or team in the direction of that vision over time.

I recommend first that you set a vision for DevOps as it relates to your Applications, your Culture, and your Tools. Let’s begin with Applications. In the first article, I shared the concept of the pace-layering model developed by Gartner. To set a DevOps vision for your applications, place each of your current applications into one of the three layers – system of record, system of differentiation, or system of innovation. Then, do the same thing for any applications you are planning to add to your portfolio.

With that context, specific to your organization, formulate a strategic vision for how DevOps will apply to your systems in each layer. Will you seek to have DevOps principles for continuous integration and continuous delivery in place for all of your systems of innovation, and if so, by when? How about for systems of differentiation? Will DevOps be the norm for each of those applications or just some, and by when? Then ask yourself the same questions about your systems of record.
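If it helps to make the layering exercise concrete, here is a minimal sketch of one way to record the classification so you can reason about DevOps targets layer by layer. The application names and layer assignments below are invented for illustration, not recommendations.

```python
# Hypothetical portfolio classification using Gartner's pace-layering model.
# Application names and layer assignments are illustrative only.
from collections import defaultdict

PACE_LAYERS = ("system of record", "system of differentiation", "system of innovation")

portfolio = {
    "payroll": "system of record",
    "general ledger": "system of record",
    "customer portal": "system of differentiation",
    "crew scheduling": "system of differentiation",
    "loyalty program": "system of innovation",
}

# Group applications by layer so you can set a DevOps target
# (e.g. a CI/CD adoption date) per layer rather than per application.
by_layer = defaultdict(list)
for app, layer in portfolio.items():
    by_layer[layer].append(app)

for layer in PACE_LAYERS:
    print(f"{layer}: {sorted(by_layer[layer])}")
```

Even a simple table like this forces the conversation the article recommends: for each layer, which applications get continuous integration and delivery, and by when.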

Next, turn to your culture. In Part Two, I discussed the importance of baselining the degree of collaboration you have between the business, dev, and ops, as well as how you view work (in terms of individual workstations or end-to-end flow). Then I challenged you to incentivize people to modify their behavior, both with respect to their roles and as individuals. With this as context, define your DevOps cultural vision. What does collaboration between the business and development look like in the future? What does collaboration look like between development and operations? How about between dev, QA, and ops? And let’s not forget security.

How will you recognize when your culture is optimized for flow? I recommend documenting a set of indicators that you can measure at regular intervals (e.g. quarterly). Here are three good ones. How much work-in-process (WIP) do we have for Application X today vs. a quarter ago? What is the average time from the definition of a requirement to its deployment in production today vs. a quarter ago? How often did we make updates to Application X in production this quarter vs. last quarter?
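As a sketch, assuming you can export work records with timestamps (the record format here is invented, not from any particular tool), the three indicators might be computed like this:

```python
# Sketch: computing the three flow indicators from hypothetical work records.
# Each record is (requirement_defined_day, deployed_day) as day numbers;
# None means the item has not yet reached production.
from statistics import mean

records = [(0, 30), (5, 40), (10, None), (20, None), (35, 80), (60, None)]
today = 90

# 1) Work-in-process: requirements defined but not yet deployed.
wip = sum(1 for defined, deployed in records if deployed is None)

# 2) Average lead time from requirement definition to production deployment.
lead_times = [deployed - defined for defined, deployed in records if deployed is not None]
avg_lead_time = mean(lead_times)

# 3) Deployment frequency: production updates in the last quarter (90 days).
deploys_this_quarter = sum(
    1 for _, deployed in records if deployed is not None and today - deployed <= 90
)

print(wip, round(avg_lead_time, 1), deploys_this_quarter)
```

Re-run the same calculation each quarter and compare: the trend across quarters matters far more than any single snapshot.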

Now, let’s consider tools. Take a baseline of the tools you are using today for each step of the typical DevOps tool chain from requirements management to source code control and bug tracking through to build management, automated testing, configuration management and deployment. And don’t forget project management tools too. What is the state of your tool investment in each of these areas? Do you have gaps? Do you have multiple tools in the same area, and, if so, is that okay or do you want to standardize on one? Do your tools vary based on the applications they are used with (see pace-layering discussion above)?
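One lightweight way to run this baseline, sketched below, is simply to list each tool-chain stage with the tools in use and flag the gaps and overlaps. The tools filled in here are examples only, not recommendations for your organization.

```python
# Sketch: baseline the DevOps tool chain and flag gaps and overlaps.
# Stage names follow the article; the tools are illustrative examples.
toolchain = {
    "requirements management": ["Jira"],
    "source code control": ["Git"],
    "bug tracking": ["Jira", "Bugzilla"],  # two tools: candidate to standardize?
    "build management": ["Jenkins"],
    "automated testing": [],               # nothing in place: a gap
    "configuration management": ["Chef"],
    "deployment": ["Chef"],
    "project management": ["MS Project"],
}

gaps = [stage for stage, tools in toolchain.items() if not tools]
overlaps = {stage: tools for stage, tools in toolchain.items() if len(tools) > 1}

print("gaps:", gaps)
print("overlaps:", overlaps)
```

The output of an exercise like this feeds directly into the prioritization step below: gaps to fill, and overlaps to either rationalize or deliberately keep (for example, different tools for different pace layers).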

Then take an inventory of your collaboration tools across dev and ops. Are you still mostly using email and SharePoint sites? What role does instant messaging play in your dev and ops teams today? What about conference calls and meetings? Once you’ve baselined where you are today, set a vision for where you want to be in the future. Are you looking to reduce the need for conference calls and meetings, and if so, by how much (time or average number of participants)? What issues are you running into with email, IM, and SharePoint communications that need to be addressed? Have you looked at the emerging class of DevOps collaboration tools and concepts such as ITinvolve and ChatOps? What role can they play in streamlining communications, keeping people informed, and presenting information to staff so they don’t have to go hunting for it in dozens of systems or call more meetings?

With this as background, define your DevOps tools vision by setting priorities for filling identified gaps and rationalizing existing investments, then set a general target time frame for each area.

Now that you’ve set a vision in each area – (A)pplications, (C)ulture, and (T)ools – it’s time to ACT. I recommend you first pick one or two applications to begin your DevOps journey. Ideally, they should be applications that are visible to the business and where you can demonstrate the impact DevOps can have on business goals like time-to-market and responsiveness to customer needs.

Once those applications are identified, take an inventory of all the people across your organization (including the business) who are responsible for the development and delivery of those applications. Develop a specific plan for how you will educate those individuals on DevOps principles (e.g. attend DevOpsDays, read The Phoenix Project, read a set of articles, watch seminar or demo recordings). Also develop a specific plan for how you will incentivize behavior changes for their roles, and for what is expected of first-line managers in taking individual personalities, skills, and experiences into account.

Let me also stress that you shouldn’t ignore the people who aren’t participating in this first DevOps effort as part of your communications. Educate them as well, for example through an all-hands meeting and an email from leadership. Explain why the company is investing in a DevOps transformation, why these applications have been chosen, and your long-term vision across Applications, Culture, and Tools to realize the transformation. Help them understand where the organization is going and why, and how it may impact them in the future. Avoid the perception that “the best people were picked for this and you are not among them” or that this group is being given license to do things that others can’t (e.g. make changes outside of the standard change management process) – otherwise, you risk reduced morale, good people leaving, and even sabotage of your DevOps efforts.

Once you’ve completed the above, map the relevant tools you have in place today for managing the development and delivery of these applications as well as the tools on your vision roadmap defined above that will be added or rationalized in the next six months.

By following the above recommendations, you will have an action plan for the near term and a long-term vision you can use to guide daily decisions that move you closer to that vision. Lastly, don’t be rigid about your short-term plan or long-term vision; recognize that everyone is learning, including you, and that one of the key principles behind DevOps is the feedback loop!

Best wishes on your DevOps journey!

Matthew Selheimer
Chief Technical Evangelist and SVP Marketing

This is Part Three in a four-part series. In Part One, I focused on the critical first step of defining DevOps with a purpose by thinking about DevOps in the context of your organization’s applications. In Part Two, I provided four tips to fostering a DevOps culture in your organization.

By now you’ve hopefully noticed the emphasis on “your” in this series because, at the end of the day, adopting DevOps is about your business, your applications, and your culture. In this third part of the series, I’m going to discuss your tools.

In IT, we’re inundated with tools. Developers have their favorite tools and sys admins do too; so do the project office, the service support group, and the QA team. There are IT tools purchased by our organization years ago that we are using and others we aren’t, tools we’ve recently started using, and others we are considering at any given point in time. There are also general-purpose tools the company wants us to use for things like document sharing, instant messaging, and so on. It’s gotten to the point where a lot of IT organizations say, “we have two of everything, like Noah’s ark,” yet they still want more tools.

Part of the reason is that, as IT practitioners, we like gadgets and tools. We like the fact that they can help us do things and can amplify our own abilities, but we also know they can fragment information and can blindside us when someone uses a tool to do something that others weren’t aware of.

Do we really need more tools to do DevOps?

DevOps has a close association with the Agile software development movement, and one of the core tenets of the Agile Manifesto is to value “Individuals and interactions over processes and tools.” Nevertheless, it’s hard to find a DevOps talk or article that doesn’t discuss the need for tools like Git, Docker, Jenkins, Puppet, Chef, and so on. The reason is actually pretty straightforward: continuous integration and continuous delivery are best done through automation, so we can follow a repeatable method and roll things back quickly if there are issues.
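The repeat-and-roll-back principle can be illustrated in miniature. In practice a tool like Jenkins, Puppet, or Chef drives this; the `release` function and health check below are invented stand-ins, not any real tool’s API.

```python
# Toy illustration of the automated deploy-with-rollback pattern.
# The deploy step and health check are stand-ins for real tooling.
def release(versions, new_version, healthy):
    """Deploy new_version; if the health check fails, roll back to the previous one."""
    previous = versions[-1]
    versions.append(new_version)   # repeatable, automated deploy step
    if not healthy(new_version):
        versions.pop()             # quick rollback to the known-good state
        return previous
    return new_version

history = ["1.0", "1.1"]

# Simulate a bad release: health check rejects 1.2, so we stay on 1.1.
current = release(history, "1.2", healthy=lambda v: v != "1.2")
print(current, history)

# A good release goes through normally.
current = release(history, "1.3", healthy=lambda v: True)
print(current, history)
```

The point is not the code itself but the property it demonstrates: because the deploy is a scripted, repeatable step, rolling back is just as cheap as rolling forward.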

At a more basic level, however, we need to acknowledge that there are really two types of tools:

1)   Tools that help us amplify our abilities as individuals (e.g. I can drive a nail into a board with a hammer much more effectively than with my hand or a rock because of the strength of the hammer’s material and the principle of leverage)

2)   Tools that help us coordinate work across many humans, thereby amplifying our collective abilities (e.g. I can build a house much faster and with better quality by leveraging different experts’ skills and using a shared set of blueprints for building construction, plumbing, electrical, heating/cooling, and so on)

The same distinction holds true for software development, deployment, and operations. One individual can theoretically gather requirements, build the software, test it, deploy it, and support it while simultaneously managing the project. In fact, most computer science students have done this at one time or another. But when we are talking about enterprise applications that support core business functions, it clearly doesn’t make sense, because of the amount of effort required at each step and the specialized skills necessary for a high level of competency in each area.

What this means is that we absolutely need tools that amplify our abilities as individuals and teams, i.e. the hammers; but we also need to invest in tools that help us coordinate work across teams, i.e. the shared sets of blueprints.

Tools That Amplify Individual / Team Abilities

A lot of the tool attention in the DevOps community has been focused on: source code management systems like Git or Bitbucket; requirements planning tools like Jira or Rally; build tools like Jenkins; automation tools like Puppet, Chef, and Ansible; and project management tools like MS Project or AtTask. These tools amplify work in each of these respective areas, just as hammers, saws, drills, and screwdrivers perform specific functions when building a house and often follow a natural sequence (first you hammer boards in place to create a wall, then hammer on the sheetrock, then saw or drill holes in the sheetrock, screw in electrical outlets and fixtures, and so on).

Just as building a house has a natural sequence, so does the sequence of tools from code to deployment, often referred to as the DevOps “tool chain.” This brief article isn’t intended to cover each area of the typical DevOps tool chain (I could also have added bug tracking, automated testing, and other categories). The point is that you need to take the time to define what the DevOps tool chain should be for your organization, and to do so intentionally, with a purpose, taking into account the tool investments your organization already has, the needs and skill sets of your organization, and your own practical budget realities.

Do you have experience using and self-supporting open source tools, or do you have a preference for commercially provided tools? Are you comfortable having multiple requirements tools in different teams or business units, or do you want a single standard? Only you can answer these questions for your organization. There is no prescriptive formula for these types of DevOps tools, although I’ve mentioned a number of commonly used ones.

Tools that Amplify Work Across Teams

The second area you should focus on is tools that help coordinate work and amplify our abilities across teams, i.e. bridging the inherent gaps in the DevOps tool chain – a subject that has received less attention in the DevOps community to date. This makes sense, because it’s human nature to think first about our own function and team. However, this silo thinking is one of the core problems DevOps was developed to address; the long-time focus on tools for specific IT “workstations” has actually entrenched the cultural silos of development, QA, operations, and the project office in IT. For example, most IT organizations have two totally separate systems for tracking issues – in operations, it’s the service desk application, and for code issues in development it’s the bug tracking application – and there is little or no relationship between them. In fact, you are lucky if an incident in the service desk can even be tied to a bug tracking number.

If we are adopting DevOps in order to optimize the flow of new software releases to support business agility, then we need to look at how we will coordinate work and avoid information distortion and loss when different teams are using different, disconnected tools. In addition to your tool strategy for optimizing individual DevOps workstations, therefore, you also need to have a strategy for the tools that are going to help you effectively manage flow across those workstations.

Most DevOps transformation efforts start small, with a core team of perhaps a dozen or so members, and they’re often located in one physical location. In this scenario, you might be able to make do with regular face-to-face meetings and general-purpose communication tools like email, instant messaging, and a SharePoint site or wiki with MS Office documents. The scope of your cross-team collaboration tools aligns with the scope of your DevOps effort, and you might be able to rely on person-to-person interactions to unify the tool chain.

But as you scale your DevOps efforts to larger, distributed teams across time zones and multiple business units and applications, your cross-team tools approach should scale as well. The point-to-point nature of instant messaging, ignored reply-all emails, and SharePoint ‘document dumps’ aren’t effective in coordinating the efforts of dozens or hundreds of developers, testers, admins, and project managers. Information gets lost, information gets overlooked, and the gaps in your tool chain expand. A feature misses a commit and doesn’t get into the build and isn’t deployed. An operational requirement is missed so the test environment isn’t configured properly and the release gets pushed. Three weeks are spent solving a performance issue through code when it could have been addressed faster via hardware. A legacy application dependency in production is overlooked and the new mobile app doesn’t work in production as it did in test.

There is an emerging class of purpose-built IT collaboration tools, such as ITinvolve, that enable the creation of cross-team workspaces where information is proactively shared as IT teams and tools get work done. This helps large, distributed cross-functional teams collaborate more effectively with each other in-context to raise questions, provide answers, and incorporate what’s happening up and down the DevOps tool chain in their individual work.

For example, the developer will have better visibility into when the next build is going to occur and can ensure the feature is committed in time or can reach out to the build engineer in the workspace to request a delay in order to ensure the feature makes it in. The operations engineer can have earlier and better visibility into operational requirements so he can ensure the test environment is configured properly and avoid unnecessary delays. The legacy application dependency in production can be reproduced in test to ensure the application works properly once moved to production. And so on. By eliminating the handoff gaps and information loss across the DevOps tool chain, you can reduce risk of communication issues, solve problems more holistically rather than individually, and better achieve the goal of improving business agility.

In the final part of this series, I will bring together the themes of the first three articles to demonstrate how you can chart a DevOps plan based on your Applications, your Culture, and your Tools to A.C.T. with greater agility.

 

Four practical tips to fostering a DevOps culture in your organization.

Today, we find more and more businesses putting pressure on IT to move faster, and, if their IT department struggles to keep up, they are often willing to “go around them” and invest in a cloud or other third-party solution on their own to get what they want, when they want it (aka “shadow IT”). What the business often lacks, however, is a full understanding of the many downstream implications that come with speeding the rollout of new services: security assessments, integrations to legacy systems, the ongoing cost to administer and support the solution, and so on. It’s true that Ops teams’ desire to control the reliability and stability of software deployments can often slow things down, but they do this because that’s what they are paid to do: deliver stable, reliable, and high-performing services.

This conflict between the business’ desire for speed and Ops’ charter for security, reliability, and performance, has been brewing for several years and it’s time for some introspection and a few tips on how we can make things better. Ultimately, progress is all about culture, and that’s why defining DevOps with a purpose must include focusing on your organization’s IT culture.

 

Tip #1: Recognize the cultural challenges between the business, Dev and Ops

Because Development sits closer to the business in the IT value chain, there’s usually less conflict between the business’ desire to move fast and Development. That isn’t to say Dev managers aren’t sometimes frustrated by how fast the business wants things given the resources available to them, but they usually are motivated and compensated to move in sync with how fast the business wants to move.

The first tip is to take a step back and benchmark your current culture for both Dev and Ops, because only then will you be able to understand how DevOps may impact those cultures. With Dev, ask yourself, for each line of business, “Are we still doing waterfall software development with one release every year or eighteen months, or have we adopted agile methodologies that enable us to deliver multiple releases throughout the year?” Based on your answers, then ask, “Is that in line with what the business wants today? Is that in line with what we think makes the most sense, or are there opportunities we could bring to the business if we adopted agile more broadly?” Finally, “What would be the value to the business, and the cultural impact, of adopting DevOps principles for continuous integration of new software development?”

Next, ask yourself, “How focused is our Ops team on stability, reliability, and clamping down on change vs. accepting (and even desiring) change? And how does this vary by line of business or application area?” If you have a risk-averse Ops culture that’s supporting a fragile infrastructure or many legacy technologies, then you may need to tackle some of those challenges first in order to help your DevOps culture evolve. I also suggest looking at your Ops culture through a compliance lens. Ask yourself, “Are we in a heavily regulated industry that has created its own cultural challenges in terms of change approvals and auditing that we need to take into account?” Finally, “What would be the value to the business, and the cultural impact, of adopting DevOps principles for continuous delivery of new releases?”

One of the hallmarks of DevOps is the ability to develop smaller pieces of incremental functionality, deploy them faster (perhaps even a few times a day), and quickly roll things back if there’s an issue. This is a big leap for many IT cultures on both the Dev and Ops side so understanding where your culture is today is a critical first step.

 

Tip #2: Think about how you view work

Do you view work as tasks done by individual teams or in an end-to-end fashion (e.g. from raw materials to finished goods and customer delivery)? DevOps transformation requires focusing on work in an end-to-end manner. Your goal should be to identify where your IT organization’s bottlenecks are, with the intent of reducing pressure at those bottlenecks to increase end-to-end flow and, therefore, output that benefits the business.

In The Phoenix Project, the bestselling IT novel (yes, indeed, an IT novel) co-authored by Gene Kim, Kevin Behr, and George Spafford, the character of Brent Geller in the Ops team is an obvious bottleneck. He understands his company’s IT environment better than anyone and is consumed with both solving high-profile outages and deploying new application releases; the result is that he’s constantly putting out fires and unable to get new software released that’s desperately needed by the business. The book offers a wealth of perspective on how to embrace DevOps principles and what that means for your culture, and I highly recommend it. You will likely see quite a bit of your organization in the story; it certainly matched my experiences in IT.

Back to Tip #2: ask yourself, “Is my organization optimized around end-to-end flow, or are we optimizing just at the individual workstations?” If it’s the latter, then you are probably stacking up work at one or more bottlenecks and should address those first. For example, a development project isn’t done until the code is tested and deployed, so incentivizing developers to finish code without knowing when and how it will get deployed is just going to create more work-in-process stacking up in your IT factory. It’s just like an automotive line where you might see cars with shiny paint, brand-new tires, and cutting-edge technical design, but if the cars have no seats they simply won’t be shippable.
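There is a standard way to quantify this stacking-up effect: Little’s Law from queueing theory (my addition here, not named in the original text), which says average lead time equals average WIP divided by average throughput. A quick sketch with illustrative numbers:

```python
# Little's Law: average lead time = average WIP / average throughput.
# The numbers below are illustrative, not from any real organization.
wip_items = 120            # features sitting between "coded" and "deployed"
throughput_per_week = 10   # features actually reaching production per week

lead_time_weeks = wip_items / throughput_per_week
print(lead_time_weeks)     # weeks of queue, no matter how fast developers code

# Halving WIP halves lead time at the same throughput:
print((wip_items / 2) / throughput_per_week)
```

The implication matches the article’s point: piling more finished-but-undeployed code at a bottleneck only grows WIP and lengthens lead time; relieving the bottleneck (or limiting WIP) is what shortens it.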

You might also be saddled with a lot of ‘technical debt’ – dollars and resources tied up maintaining existing applications and infrastructure. It may be necessary to focus first on reducing technical debt in order to relieve those bottlenecks on your support and deployment teams. Ask yourself, “What technical debt do we have, and who can be charged with prioritizing and reducing it?” Treat it just like credit card debt and look to pay down the ‘high interest’ debt first.
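Carrying the credit-card analogy into a sketch (the debt items and their costs below are invented for illustration), ‘high interest’ debt is simply whatever costs the most to keep servicing each month:

```python
# Sketch: rank technical-debt items by their ongoing "interest" (recurring
# maintenance cost), highest first, like paying off the highest-rate card.
debt = [
    {"item": "legacy auth module", "monthly_hours": 40},
    {"item": "manual deploy steps", "monthly_hours": 60},
    {"item": "unpatched app server", "monthly_hours": 15},
]

paydown_order = sorted(debt, key=lambda d: d["monthly_hours"], reverse=True)
for d in paydown_order:
    print(d["item"], d["monthly_hours"])
```

Even a rough estimate of recurring hours per item is usually enough to make the paydown priority obvious and defensible.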

 

Tip #3: Incentivize your people

Don’t assume people will change for the “greater good” – proactively incentivize them to do so.

If you want to change your culture and create one that’s more agile – if you want to optimize for flow rather than workstations – then you should use incentives to initiate the cultural change you seek. Old habits are hard to break, even when we want to do things differently. As executives, we have a tendency to appeal to a higher business purpose and assume the folks contributing on the front lines share the same passion for the company’s success. Often they intellectually understand these goals, but you should make the change relevant by showing how it will improve their jobs as developers, QA team members, sys admins, and architects, not just how it impacts C-level objectives.

For example, Ops folks usually take particular pride in their indispensability, but to be indispensable they have to compromise family time, working late and on weekends to push out a big high-risk deployment. They may crave the attention and recognition for such ‘extra effort’, but at the same time they are putting more pressure on themselves and more risk on the business.

As mentioned previously, the move toward a more agile IT means deploying with far greater frequency, and that can actually help you operate with less stress and risk. Errors and fixes are likely smaller in effort, and you will be able to roll back hiccups much faster. Sure, your folks may need to stay late occasionally, but they can do their work more often during typical working hours, reclaim their weekends, and feel significantly less stress and pressure than with traditional large-scale rollouts, where expectations are high and so is the risk of failure, rework, and long days. Plus, with DevOps you are actually delivering on business requirements faster – instead of deploying what was needed months ago.

 

Tip #4: Make it personal: everyone is different.

Related to Tip #3, realizing that everyone is inherently different is critical to incentivizing and empowering your IT team to transform. In most IT organizations, the responsibility is on the first-line manager to know their people, understand what makes them tick, and channel the right talents and motivations. Some may see a move to DevOps as the ideal resume-enhancement opportunity. Some may feel intimidated because they are innately risk-averse. Still others may not be sure what to think, especially if you have one team doing DevOps while they remain on a separate team doing things “the old way.” What’s important for managers is not to stop at how to make DevOps relevant for a role, but to consider the specific individual. Ask yourself, “How will each of my team members react to this cultural change, and how can I tap into their natural motivations to make them enablers of the change, not resistors?”

From what I’ve seen, there is also a generational aspect to DevOps cultural transformation. Younger employees who don’t know a world without mobile phones, and who grew up on social media, are likely more comfortable with agile, small-release deployments. Those with more experience are more likely to think about the reasons why DevOps ‘can’t work,’ and it will be key for first-line managers to engage them, listen to their concerns, and work to address them in a way that benefits the organization and draws on the wisdom of their experience.

The bottom line for Tip #4 is that if you make it personal, you will build trust, and that will help you drive transformation in your organization faster. Perhaps even consider establishing a ‘Take a Developer or Ops Team Member to Lunch Day.’ By working to understand each other’s worlds, challenges, pressures, and goals, you can unleash a swifter cultural change within your organization.

 

Matthew Selheimer
Chief Technical Evangelist and SVP of Marketing

Unless you’ve been living under the proverbial rock, you have no doubt heard about DevOps and the groundswell building around it. Today, DevOps is being defined, promoted, touted, and discussed as the next game changer in IT. Rather than argue about the best way to define DevOps in a general sense, this article will focus on something hopefully more useful: how do you determine what DevOps means for your organization?

A common goal cited for DevOps is to enable faster release and deployment cycles by taking advantage of agile development methodologies; improved collaboration between business stakeholders, application development and operations teams; and automation tools. Beyond these elements, DevOps also requires a cultural acceptance of the need to focus on the flow of work across teams vs. simply optimizing individual teams and their specific units of work.

By contrast, defining DevOps with a purpose means ensuring that how you do DevOps is grounded in the realities of your specific organization. There is no “one-size-fits-all” DevOps, no matter what pundits, consultants, and vendors might tell you. Start by taking a step back. Ensure you are clearly taking into account your industry, your applications, your culture, and your people when developing your DevOps strategy; then apply DevOps principles against that foundation. Because every organization’s purpose will be different, your way of doing DevOps will not be the same as anyone else’s, even among your industry peers.

There are, however, some general guidelines you can start with and then apply them to your specific situation. For example, I recommend analyzing your business goals and application portfolio using the pace-layering method, which focuses on three categories of applications – systems of record, systems of differentiation, and systems of innovation.

Systems of record are usually quite stable with infrequent updates. This is often due to regulatory requirements and internal policies for these systems, and they tend to have a high degree of consistency within an industry and even across industries. Systems of record are often good candidates for waterfall development methodologies and longer release timelines. Your general ledger or payroll systems are good examples of systems of record.

Systems of differentiation are those applications that are likely common to your industry but where you have an opportunity to differentiate your company from the competition based on functionality and the pace with which you update them. For example, if you are a financial services firm that provides retirement investment programs, you might make changes to your customer portal and how clients select new investment options on a quarterly or monthly basis in order to enhance customer experience and remain competitive and differentiate. If you are an airline, this might be your crew scheduling application, which ensures you have the right flight crews available when needed and can improve your on-time departure percentages to differentiate from the competition. In healthcare, your systems of differentiation might be the applications that health professionals use to store patient information and correlate test results to aid in diagnosis and treatment. In manufacturing it may be your supply chain management, process control, or shop floor applications.

Systems of innovation are those truly groundbreaking applications that help you create new markets and revenue streams. In financial services, that might be a new kind of application that gives day traders a market advantage. For an airline, that might be a whole new way of providing inflight entertainment, such as when Wi-Fi and inflight entertainment first became available to passengers. In healthcare, it might be a new customer portal that promotes wellness services and reduces office co-pays by uploading fitness tracker data. For a retailer, it might be applications that support a new type of customer loyalty program, such as when Amazon introduced its Amazon Prime service a few years ago.

Both systems of differentiation and systems of innovation are excellent candidates for agile development and DevOps principles. This is because they require high degrees of collaboration between business stakeholders, developers, and operations personnel to ensure the applications are in line with requirements, developed and tested quickly, and then made available in the market to drive differentiation and innovation. They are excellent candidates for the continuous delivery concept in DevOps, where small releases are deployed quickly (and can be rolled back just as quickly if there are issues). In this manner, a large number of small deployments moves the competitive needle faster and provides greater value to customers than traditional waterfall development with its disconnected development and operations practices.
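
The mechanics of that deploy-and-verify loop can be sketched in a few lines of Python. Everything here is hypothetical (the release names, the health check, the environment dictionary); it only illustrates why small releases make rollback cheap:

```python
def deploy_release(release, environment, health_check):
    """Deploy one small release; roll back immediately if verification fails."""
    previous = environment.get("current_release")
    environment["current_release"] = release
    if health_check(environment):
        return f"deployed {release}"
    # Small releases make rollback cheap: restore the last known-good version.
    environment["current_release"] = previous
    return f"rolled back to {previous}"

# Hypothetical usage: one healthy deploy, then one failed deploy that rolls back.
env = {"current_release": "v1.0"}
print(deploy_release("v1.1", env, lambda e: True))   # deploy succeeds
print(deploy_release("v1.2", env, lambda e: False))  # fails, rolls back to v1.1
```

The smaller each release, the smaller the blast radius when the health check fails, which is exactly what makes frequent deployment safer than a big-bang waterfall release.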

Defining DevOps with a purpose starts with an understanding of your industry and your applications. In the next article in this series, we’ll turn to the topic of organizational culture and its impact on DevOps transformation.

Matthew Selheimer
Chief Technical Evangelist and SVP of Marketing

 

Does your DevOps tool chain look like the picture below with lots of disconnected tools and different team members having to bridge the gaps between them?

[Image: Disconnected DevOps tool chain]

If you are like most DevOps early adopters, this is probably the case. And that’s been accepted as okay because each of these tools was designed for a different group in IT to help them get their jobs done.

But the real value and benefit of DevOps is the ability to increase flow through the system from initial business requirements all the way to production deployment.

[Image: Aligned DevOps tool chain]

Our vision is for an aligned DevOps tool chain with “globally interesting” information shared across those tool, team, and process silos and a bi-directional feedback loop.

Even though we tend to think of flow like a river that moves in one direction, the ability for information to flow both upstream and downstream is what’s really needed to maximize our ability to help the business respond faster to opportunities and threats. That’s the foundation for what we call the ability to deliver agility with stability.

Matt Selheimer
SVP, Marketing

How facilitated collaboration enables continuous delivery for successful DevOps

by John Balena

So much has been written lately about the challenge of improving IT agility in the enterprise. The best sources of insight on why this challenge is so difficult are the CIOs, application owners, ecommerce and release engineering executives, and VPs of I&O who are grappling with changing their organizations right now.

At a conference I attended recently, I met two Fortune 100 IT executives from the same company: one the head of development and the other the head of operations. Their story is emblematic of just how hard this is in the real world. As interesting background, the two leaders were childhood best friends, participated in each other’s weddings, and spend time together socially on an almost weekly basis – but by their own admission, even they couldn’t get effective collaboration and communication to work between their two organizations.

The lesson learned from this example is that the DevOps collaboration and communication challenge cannot be solved by sheer will, desire, or executive fiat. Instead, you must break down the barriers that inhibit collaborative behavior and facilitate new ways of communicating and working together. The old standbys of email, instant messaging, SharePoint sites, and conference calls don’t cut it.

The challenge of two opposing forces: Dev and Ops

Imagine yourself helping your children put together a new jigsaw puzzle. Each time you turn your attention to a specific piece, the kids reorganize what you have already completed and they add new pieces, but in the wrong places. For sure, three pairs of hands can be better than one, but they can also create chaos, confusion and significantly elongate the completion of the puzzle.

The collaboration challenge in the DevOps movement is grounded in this analogy. How do you get multiple people working together across teams, locations, and time zones to build and get things deployed faster without chaos, confusion, and delay? How do you get these teams to speak the same language and collaborate together with a singular purpose when their priorities and motivations are so different?

Faced with this challenge, it’s easy to see why many organizations have stayed in their comfort zone of ‘waterfall’ releases and keep the number of releases per year small. The issue is that this approach isn’t meeting the demands of the business, the market, and the competition. As a result, more and more business leaders are going around their IT organizations. Options like public cloud, SaaS, open source tools, skunk-works IT, and outsourcing are making it easier for them to control IT decisions and implementations within the business unit or department itself.

So let’s dive deeper to understand the two forces at the heart of the issue: development (focused on the creation or modification of applications to support a business need) and operations (delivering defined services with stability and quality). It appears these forces are working in opposition, but both groups are focused on doing what leadership asks of them.

Developers tend to think their job is done once the application is rapidly created, but it’s not, because no actual value has been delivered to the business until the application is operational and in production. Operations is severely disciplined when services experience performance and availability issues, and has come to learn that uncontrolled change is the biggest cause of those issues. As a result, operations teams often believe their number one job is to minimize change to better control its impact on performance and availability. This causes operations to become a barrier to the rapid change required to give the business the speed and agility it needs.

Critical to enabling DevOps is an explicit recognition of this situation and the ability to link the discrete phases of the application development and operations lifecycle into a fast, continuous flow: defining requirements, architecting the design, building the functionality, testing the application, deploying it to both pre-production and production environments, and managing all the underlying infrastructure change required for the application to operate efficiently and effectively in every environment.

Why current approaches don’t work

There are several challenges in achieving this ideal.

  1. Developers hate to document (can you blame them?), and when they do, their communication is in a context they understand, not necessarily in the language that operations speaks. The view from operations is that the documentation they receive is incomplete, confusing, and/or misleading. With the rapid pace of development, this challenge is getting worse, with documentation becoming more and more transient as developers “reconfigure the puzzle” on the fly.
  2. Today’s operations teams typically take responsibility for production environments and their stability. That means there is usually a group wedged in between the two – the quality assurance (QA) team. QA’s job is to validate that the application works as expected, and they often require multiple environments for each release. This group is typically juggling multiple releases and is, in essence, working on and reconfiguring multiple puzzles at the same time. The challenge of keeping QA environments in sync with both in-process releases and production can be maddening (just talk to any QA leader and they’ll tell you first-hand). The documentation coming from development is inadequate, and the documentation coming from production is often no better, since most operations teams store much of their most current configuration information in personal files or simply in their brains.
  3. The ad hoc handoffs from development to operations and QA take time away from development’s primary mission: creating applications that advance the business. Some suggest developers should operate and support what they code in order to reduce handoffs and the risk of information distortion or loss. A fundamental risk with this approach is opportunity cost. Does a developer really understand the latest and greatest technology available for infrastructure and how to flex and scale those technologies to fit the organization need? Do you even want them to or would you rather they be coding instead?
  4. Others have suggested that operations move upstream and own all environments from dev to QA to production, treating configuration and deployment scripts as code just as a developer would. This may sound like a good option, but it can create a constraint on your operations team and cause valuable intelligence to become hidden in scripts. A particular application deployment could require one or more software packages and potentially hundreds of different configuration settings. If all that information is embedded in a script, how will other team members know it when they go to change the underlying infrastructure to apply a security patch, upgrade an OS version, or make any of the other changes that happen in IT every day?
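
The concern in point 4 can be made concrete with a hypothetical sketch. The package and setting names below are invented; the point is the contrast between deployment knowledge trapped inside an imperative script and the same knowledge declared as data that other teams can query:

```python
# Buried: the package and settings exist only as side effects of script lines,
# invisible to anyone planning an OS upgrade or a security patch.
def deploy_app_script():
    run = print  # stand-in for a shell runner in this sketch
    run("install pkg-web-1.4")
    run("set max_heap=2048m")
    run("set session_timeout=30")

# Declared: the same intelligence expressed as data, queryable by other teams.
APP_MANIFEST = {
    "packages": ["pkg-web-1.4"],
    "settings": {"max_heap": "2048m", "session_timeout": "30"},
}

def impacted_settings(manifest, keyword):
    """Let any team member ask what a change might touch -- impossible when
    the same facts live only inside an opaque deployment script."""
    return [name for name in manifest["settings"] if keyword in name]

print(impacted_settings(APP_MANIFEST, "heap"))  # finds the memory setting
```

The script and the manifest encode identical facts; only the manifest makes them visible to the rest of the organization.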

Real DevOps transformation doesn’t mean giving everyone new jobs; instead, it’s about creating an environment where teams can collaborate with a common language and where information is immediately available at the point of execution, in a context unique to each team.

A better way forward?

In The Phoenix Project, DevOps thought leaders Kevin Behr, Gene Kim, and George Spafford promote the need to optimize the entire end-to-end flow of the discrete processes required for application delivery, applying the same principles that brought agility to discrete manufacturing.

Manufacturing in the 1980s resembled IT operations today: rigid silos of people and automation built for efficiency and low cost, which became a huge barrier to the agility, stability, and quality the market demanded. Manufacturers learned that if you optimize each workstation in a plant, you don’t optimize the end-to-end process. They also learned that when quality and compliance processes were ancillary to the manufacturing process, they slowed things down, drove up costs, and actually decreased quality and compliance.

Successful manufacturers took a broader view and optimized end-to-end flow rather than individual silos. They also brought quality and compliance processes inline with the manufacturing process. By addressing quality and compliance early in the cycle, at the moment an issue occurred, cycle times decreased significantly, costs plummeted, and quality and compliance increased dramatically.

These same principles can be applied to IT resulting in:

  • faster time to market;
  • greater ability to react to competitive pressures;
  • deployments with fewer errors;
  • continuous compliance with policies; and
  • improved productivity.

DevOps can best be realized when IT operates in a social, collaborative environment that ensures all groups are working with a visual model, in their own context, with the necessary information from downstream and upstream teams, and can collaborate with relevant experts the moment clarifications are needed or issues arise.

Merging creation with operation, the core idea behind DevOps, requires cultural change and new methods in which cross-functional teams are in a state of continuous collaboration, delivering their piece of the puzzle at the right time, in the right way, and in context with the teams in other silos. Operating something that never existed before requires documentation, so that operations teams have the information they need to manage change with stability and quality.

With more modern collaboration methods, self-documenting capabilities are now possible as development, release and operations do their respective jobs, including visualization of documentation with analytics and with the perspective and context each team needs to effectively do their job downstream. These types of capabilities will transform organizational culture and break down barriers to collaboration that impede agility, stability and quality.

Is this simply nirvana, unachievable in the real world? No. Manufacturing achieved the same results by applying these principles; that is the fundamental point made in The Phoenix Project.

The goal is not to write code or to keep infrastructure up and running, or to release new applications or to maintain quality and compliance. Instead, the goal is for IT to take the discrete silos of people, tools, information and automation, and create a continuous delivery environment through facilitated collaboration and communication. This will drive the cultural and operational transformation necessary to enable IT to respond to business needs with agility while ensuring operational stability and quality.

John Balena is senior vice president of worldwide sales and services at Houston-based ITinvolve. He formerly served as the line of business leader for the DevOps Line of Business at one of the “Big 4” IT management software vendors.


Harnessing the power of collaboration to enable a DevOps-driven IT organization.
by Cass Bishop

I love tech tools. During my career I have worked for and consulted with many companies, and every time I begin a project I immediately look for tools or frameworks to help me complete things faster. For a guy obsessed with new tech tools, now is a great time to be in IT. Git, JIRA, Jenkins, Selenium, Puppet, Chef, BladeLogic, uDeploy, Docker, and Wily (just to name a few great tools) are providing IT with a big-box hardware store full of tools designed to help solve technical problems. These tools are variously pitched, sold, praised, and cursed during DevOps initiatives – primarily because they are good enough for most needs but still leave some critical gaps.

With such a list, you can try to check off all the items listed in one of those “X things you need for DevOps” blogs that are published almost daily. “Continuous integration…check. Automated testing…check. Continuous delivery…check. Automated configuration management…check. Application monitoring…check. So now can I say DevOps…check?” You probably can’t check that box, and I would argue you never will with the above list of tools, because unless your IT department fits in one room and goes for beers together every Thursday, you are missing the most important concept of DevOps: the need for continuous collaboration about your applications in all of their states, from development to retirement.

Most organizations I have worked with aren’t even close to this level of collaboration across development and operations. They are often dispersed across the globe, working in different chains of command with different goals. How does a developer in Singapore collaborate with an operations team in Atlanta? Shouldn’t the incredible number of tools in our arsenal be enough to fix this? “We’ll give the operations team accounts in JIRA, Jenkins, and Selenium, then give the developer access to Puppet, Wily, Splunk, and the production VMs. They can send each other links and paths to information in each of the different tools, and they can collaborate in email, IM, conference calls, and a SharePoint site.” Sounds OK until you realize that each email thread or chat, filled with useful information, gets buried in employee Outlook folders or chat logs. Also, when was the last time you heard someone ask to attend yet another conference call or use yet another SharePoint site?

“Maybe we should have them save the chat logs and email threads in the Operations Wiki, the Development Confluence site, or that new SharePoint site?” With these kinds of approaches, you can find the threads based on string-based searches, but anyone reading them has no context about how all of the data points in the discussion relate to actual applications, servers or any other IT asset. In addition to the lack of context, your IT personnel now spend their days hunting for a needle in an ever-growing haystack of data generated by those amazing tools.

What if, as the now-familiar adage goes, there were an app for that? An application designed to bring all this disconnected data together, make sense of it, display it visually, and build social collaboration right in.

With this kind of application, when your middleware admin needs to discuss a problem with the UAT messaging engine, she can now do so in context with the other experts in your organization. Her conversation is saved and directly related to the messaging engine. If the conversation leads to fixing an issue, the lesson learned can be turned into a knowledge entry specific to messaging engines. Now any IT employee can quickly find this knowledge and see who contributed to it the next time there is a messaging engine issue.
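
One way to picture the underlying mechanics is a knowledge store keyed by asset rather than by inbox. This is a deliberately minimal, hypothetical sketch (the asset and contributor names are invented), not how any particular product is implemented:

```python
from collections import defaultdict

# Lessons learned are keyed to the asset they concern, so knowledge is
# findable by asset rather than lost in someone's email folders.
knowledge_base = defaultdict(list)

def record_lesson(asset, contributors, lesson):
    """Save a lesson learned against the asset it relates to."""
    knowledge_base[asset].append({"contributors": contributors, "lesson": lesson})

def lookup(asset):
    """Anyone hitting an issue with this asset sees prior lessons and who to ask."""
    return knowledge_base[asset]

record_lesson("uat-messaging-engine",
              ["middleware-admin", "network-expert"],
              "Queue depth alarms trace back to a low broker memory limit.")
print(lookup("uat-messaging-engine")[0]["contributors"])
```

The key design choice is the index: a conversation filed under the messaging engine is retrievable by the next person working on the messaging engine, which a chat log or email thread can never guarantee.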

When developers want to collaborate with sys admins about higher memory requirements for their application due to a new feature, they can pull them into a discussion in the feature’s individual activity stream. The admins are alerted on their mobile devices that they have been added to the conversation; they contribute to the activity stream and can even add other participants, like the operations manager, so he can weigh in on devoting more memory to the correct VMs.

No tool or application can drop a DevOps culture into your organization – that must come from within. But there are now applications available that provide the data federation, visualization, and contextual collaboration capabilities necessary to help enable cultural change, so you can create your own DevOps movement in your organization.


Cass Bishop is Director of Consulting Services at Houston-based ITinvolve (www.itinvolve.com). He has been a middleware, automation, and DevOps practitioner for nearly twenty years and has worked on projects in some of the largest IT organizations in the US.

 

Because of the roots of DevOps in the Agile Software Development movement, there is a strong theme of “individuals and interactions over processes and tools” within the DevOps community (see agilemanifesto.org for more). To a significant extent, this attitude has been taken to mean that tools are not really necessary and that everyone can or should roll their own approach so long as they follow DevOps principles (for a good DevOps primer, check out the Wikipedia page here and the dev2ops blog here).

More recently, the DevOps community has begun to embrace a variety of automation and scripting tools, notably from companies like Puppet Labs and Chef, because DevOps practitioners have recognized that doing everything by hand is both tedious and highly prone to error. That has led to a new term, “infrastructure as code” (Dmitriy Samovskiy has a quick primer on his blog here). But beyond automation (and, to a lesser extent, monitoring tools), the DevOps community hasn’t fully embraced the need for other types of tools to aid in DevOps work.
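
Puppet and Chef each have their own declarative DSLs, so the sketch below is only a language-neutral illustration of the core idea behind infrastructure as code: declare the desired end state and converge toward it idempotently, so running the same definition twice changes nothing the second time. The resource names are illustrative assumptions:

```python
def converge(current_state, desired_state):
    """Apply only the changes needed to reach the desired state (idempotent)."""
    actions = []
    for resource, wanted in desired_state.items():
        if current_state.get(resource) != wanted:
            actions.append(f"set {resource} -> {wanted}")
            current_state[resource] = wanted
    return actions  # an immediate second run yields no actions

state = {"nginx": "absent", "port": 8080}
desired = {"nginx": "installed", "port": 80}
print(converge(state, desired))  # two corrective actions
print(converge(state, desired))  # [] -- already converged
```

Idempotence is what makes these tools safe to run repeatedly, which is exactly what distinguishes them from the error-prone, hand-run scripts they replace.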

What’s more, despite this evolution around the need for automation tools, and the recognition that individuals and interactions are key, there are still a lot of walls in most organizations that impede the DevOps vision for continuous delivery of new applications. Quoting from dev2ops:

Development-centric folks tend to come from a mindset where change is the thing that they are paid to accomplish. The business depends on them to respond to changing needs. Because of this relationship, they are often incentivized to create as much change as possible.

Operations folks tend to come from a mindset where change is the enemy.  The business depends on them to keep the lights on and deliver the services that make the business money today. Operations is motivated to resist change as it undermines stability and reliability.

Both development and operations fundamentally see the world, and their respective roles in it, differently. Each believe [sic] that they are doing the right thing for the business… and in isolation they are both correct!

Adding to the Wall of Confusion is the all too common mismatch in development and operations tooling. Take a look at the popular tools that developers request and use on a daily basis. Then take a look at the popular tools that systems administrators request and use on a daily basis. With a few notable exceptions, like bug trackers and maybe SCM, it’s doubtful you’ll see much interest in using each others [sic] tools or significant integration between them. Even if there is some overlap in types of tools, often the implementations will be different in each group.

Nowhere is the Wall of Confusion more obvious than when it comes time for application changes to be pushed from development [to] operations. Some organizations will call it a “release” some call it a “deployment”, but one thing they can all agree on is that trouble is likely to ensue. 

Again, despite the recognition that some level of automation tooling for DevOps is needed, and despite the fact that individuals and interactions are seen as critical, the DevOps community hasn’t really developed a strong opinion on exactly how Dev and Ops should work together and precisely where they should do so.

Julie Craig of Enterprise Management Associates describes the need pretty well in a recent whitepaper:

“…their tools must interoperate at some level to provide a foundation for collaborative support and Continuous Delivery.”

“DevOps-focused toolsets provide a common language that bridges skills, technical language, and personalities. In other words, they enable diverse personnel to seamlessly collaborate.”

“…tools must interoperate to support seamless collaboration across stages…data must also be shared as software moves from one stage to the next.”

Now, it’s all well and good to talk about the need for integration across tools and more collaboration, but where and how should Dev and Ops functions actually go to get work done together? Where and how do they best exchange information and knowledge about releases that are in process, engage with business stakeholders to validate business requirements, and notify stakeholders of changes to functional specs and operational requirements? Where do they go to get an accurate understanding of the full stack required for deployment, to understand disparities and drift between pre-production and production environments, and to collaborate on deployment plans and the potential risks that should be mitigated?

These are just a few examples of the DevOps work that must take place to enable continuous delivery, but unfortunately most DevOps practitioners are trying to use outmoded approaches or rejecting tools as viable means of addressing these needs. For example, teams have tried using Wikis and SharePoint sites; “It’s on the Wiki” is an all too common refrain. Or they have fallen back on endless meetings, email chains, and real-time IMs that are limited to only select participants, with knowledge that is shared and then lost in an inbox or disappears when the IM is closed. And most DevOps practitioners will tell you they have rejected the CMDB and service support change management tools as well, because they a) don’t trust the data in their company’s CMDB (or perhaps multiple CMDBs) and b) believe traditional ITIL change tools are far too process-heavy and actually work against the goals of agile development and delivery.

What we need instead is a place where Dev and Ops teams can actually work together and collaborate with the business – all the way from requirements planning to post-deployment issue resolution. This new workspace shouldn’t replace the tools that each group is already using and it should work with existing ITIL tools too. Instead, its purpose is to provide a unifying layer that brings together the relevant information and knowledge across the DevOps lifecycle, and employs modern social collaboration techniques to notify and engage individuals based on what they are responsible for and have opted into caring about. What’s more, it should leverage information from CMDBs and discovery tools along with a range of other information sources, and provide a mechanism for peer review to validate this information continuously, fill in gaps, and correct bad information too – so that Dev and Ops practitioners have a place they can go to access all the information they need to do their daily work efficiently and make accurate and timely decisions that move the business forward.

With a new DevOps workspace like this, we can finally overcome the limitations of traditional IT management tools, outmoded collaboration practices, and embrace tools that are built to support DevOps practitioners and their interactions. It’s what we call an IT agility application, and it’s what we offer at ITinvolve. You can read more about how it works in this in-depth use case document.

Matt Selheimer
VP, Marketing

[Image: Bill Nye quote]

I recently came across this quote and thought it very apropos to the situation in today’s complex IT organizations. Whether you are talking about server, storage, and network admins, developers, QA teams, security managers, or the many other experts in a typical IT organization, the fact is that everyone in IT has specialized knowledge and a unique perspective on what they are responsible for.

It’s all too easy to get caught up in these individual perspectives and miss the big picture. Worse, because so many items are associated with delivering a given application or service, and because an individual expert may be unaware of many of them, like a policy or a unique setting, even the best-intended actions can produce unexpectedly bad outcomes. Our own experience and a quick Google search reveal that it is still all too common for outages to be caused by “human error” rather than an equipment failure or code issue.

George Spafford at Gartner has stated the problem this way: “It is becoming impossible for any person or group [within IT] to completely understand how everything integrates together.” Because we don’t know what we don’t know, we can be lulled into a false sense of security, as the bad outcomes all too clearly illustrate.

In response, a lot of IT organizations have tried to attack the problem with email, meetings, and formalized change processes. This has helped many companies identify and minimize risks to a certain degree, but they have exchanged this benefit for a much slower rate of change and the over-involvement of too many personnel in change management.

A recently published metric from industry consulting firm Pink Elephant found that the average time from creation of a change request to its execution was 31 days! Whether the change is in response to a business need or is applying a patch or upgrade to make infrastructure better performing and resilient, I think we can all agree that a month is far too long. And a month is just the average! I am sure that complex changes with many change items greatly exceed a month in many IT organizations today. We need to do better as an industry – and that goes not just for practitioners but vendors and consultants too.

Here’s the second part of the problem with many current approaches. Because IT operations teams are rightly concerned about the instability that change represents, they pull far too many people into change planning meetings and change advisory board (CAB) meetings who don’t really need to be there or who could have just as easily provided their input offline. I can’t tell you how many times I have heard a change process owner complain about how they send emails out to change approvers and then have to hunt them down in person to get them to login to the change system and respond. And for their part, those approvers often complain that they get so many emails from the change system they can’t distinguish which are important and just end up ignoring them all.

So this brings me back to Bill Nye and his astute observation that we can learn something from everyone we meet. Let’s accept the fact that each of us doesn’t know everything we need to know to effectively manage today’s complex IT environments, despite the fact that we may indeed be experts in a particular area. It is only by capturing our collective knowledge and making it available to everyone that we can have a complete understanding of dependencies and risks. By using a modern approach like ITinvolve that allows IT knowledge workers to follow what they are responsible for or have an interest in, we can leverage the knowledge of others AND identify exactly who the right experts are and proactively engage them in a virtual collaboration to assess risk.

The result is that risks can be assessed more accurately and more quickly, without pulling people unnecessarily away from whatever else they are working on. This assessment can then be provided proactively to the CAB, and CAB members can approve or reject offline at a time of their convenience. If all CAB members approve, the change doesn’t even need to be discussed in the formal CAB meeting and can move straight to execution. This enables IT to focus CAB meetings on the really important, high-risk changes that haven’t been unanimously approved.
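
The fast-path rule described above is simple enough to express directly. The field and role names below are illustrative assumptions, but the routing logic is the point: fully pre-approved changes skip the CAB agenda entirely, so meetings are reserved for unresolved, higher-risk changes:

```python
def route_change(change):
    """Send fully pre-approved changes straight to execution; queue the rest."""
    votes = change["approvals"].values()
    if votes and all(v == "approved" for v in votes):
        return "execute"          # no CAB meeting time needed
    if any(v == "rejected" for v in votes):
        return "rework"           # back to the requester before any meeting
    return "cab_agenda"          # discuss only unresolved changes in the CAB

change = {"id": "CHG-1042",
          "approvals": {"dba": "approved", "network": "approved", "security": "pending"}}
print(route_change(change))  # one approver hasn't responded, so it goes to the CAB
```

In practice the offline votes would come from the collaboration workspace itself, so the CAB agenda shrinks automatically as approvers respond.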

To get there, the first step is simply to recognize and appreciate that you can learn a lot from others by sharing what you know and having everyone do the same. We often hear statements from our customers like “I’ve learned more about our environment using ITinvolve in the last three weeks than in the five years I’ve worked here.” This is the reality: no matter how much of an expert each of us may be, our knowledge of today’s complex IT landscape is limited. It’s only by working together and sharing what we know that we can deliver on our mission of helping IT become more agile while minimizing risk.

Matt Selheimer
VP of Marketing