ITinvolve Blog

The Aligned DevOps Tool Chain

May 20th, 2014

Does your DevOps tool chain look like the picture below with lots of disconnected tools and different team members having to bridge the gaps between them?

[Figure: a disconnected DevOps tool chain]

If you are like most DevOps early adopters, this is probably the case. And that’s been accepted as okay because each of these tools was designed for a different group in IT to help them get their jobs done.

But the real value and benefit of DevOps is the ability to increase flow through the system from initial business requirements all the way to production deployment.

[Figure: an aligned DevOps tool chain]

Our vision is for an aligned DevOps tool chain, with “globally interesting” information shared across those tool, team, and process silos, and with a bi-directional feedback loop.

Even though we tend to think of flow like a river that moves in one direction, the ability for information to flow both upstream and downstream is what’s really needed to maximize our ability to help the business respond faster to opportunities and threats. And that’s the foundation for what we call the ability to deliver agility with stability.

Matt Selheimer
SVP, Marketing


    Improving Configuration Management: Getting Control Over Drift

    April 28th, 2014

    Configuration drift poses a number of challenges to your IT organization and your business: for example, the risk of non-compliance with security policies, performance and availability issues, and failed deployments of new application releases.

    To address drift, most IT organizations now employ some combination of scripts (or automation tools) and a configuration management database (CMDB), and have defined a software configuration management approval process. Despite these efforts, we find that configuration drift still occurs frequently in large enterprises.

    Why is this the case?

    First, if you are like most IT organizations, you probably follow the 80/20 rule, with your administrators focusing 80% of their time on the configuration elements they consider most important to their roles. That leaves quite a gap where drift can still occur. What’s more, if you are using scripts and automation tools to enforce configurations, keep in mind that these approaches rely on an explicit formula: you have to specify exactly which configuration settings to enforce and when. That leaves the door wide open for settings you haven’t gotten around to specifying to be changed, and for additional software to be installed, in ways that might cause problems.

    For example, let’s say your security policy states that a certain range of TCP/IP ports should not be open on a certain class of servers. You might enforce this policy with an automation script that routinely verifies port status and closes any ports in the range that have been opened through some other means. Sounds like you’ve got things covered, right? Well, what if a port in that range was opened as part of a change process to deploy a new application to one of those servers, and those working on the project knew nothing about the enforcement script? They deploy the new application, test it to make sure all is working well, and send out the email letting the user community know the new application has launched: a great day for IT enabling the business! Then, overnight (or the next time the script is scheduled to run), the port is closed. Users come into work the next day unable to access the new application, calls start coming into your service desk, an all-hands-on-deck meeting is hastily assembled, and, after some period of time, the port closure is identified as the issue and the port is reopened, only to be closed again the next time the script runs, until finally someone realizes the script is the underlying cause (probably because the person who wrote it is no longer there and never documented it other than via a notation in an audit report that a script was the chosen enforcement mechanism).
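
    To make the failure mode concrete, here is a minimal sketch of what such an enforcement script might look like (Python; the port range, address, and iptables rule are illustrative, not drawn from any real policy). Notice that nothing in it records who owns the script, why the range is restricted, or how to request an exception, which is exactly the gap described above.

    ```python
    #!/usr/bin/env python3
    """Hypothetical port-enforcement script for the example above."""
    import socket
    import subprocess

    RESTRICTED_PORTS = range(9000, 9100)  # illustrative "must stay closed" range

    def port_is_listening(port: int) -> bool:
        """Return True if something accepts TCP connections on the port."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            return sock.connect_ex(("127.0.0.1", port)) == 0

    def close_port(port: int) -> None:
        """Block inbound traffic to the port via iptables (requires root)."""
        subprocess.run(
            ["iptables", "-A", "INPUT", "-p", "tcp",
             "--dport", str(port), "-j", "DROP"],
            check=True,
        )

    if __name__ == "__main__":
        for port in RESTRICTED_PORTS:
            if port_is_listening(port):
                # No record of who opened the port or why -- so a port opened
                # legitimately for a new application gets closed all the same.
                close_port(port)
    ```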

    Consider another example: an application with very low utilization most days except for spikes of activity at the end of each month (such as an application that accepts orders from a dealer network). Let’s say an engineer looking for available equipment to host a new application identifies the server running the dealer order system as a good candidate because of its strong specs and low average utilization. He installs the new app, and everything works great until the end of the month, when the dealer network comes alive with hundreds of orders every hour. Now, because two applications are vying for the same physical components, we start to see performance issues and scramble to move the new application to other hardware, taking it offline in the process and putting it on an available server with lesser specs, so it runs slower than before, irritating the user community even further. In this scenario, your automation scripts would have done nothing to prevent this drift from the expected configuration (i.e., the dealer order system being the only application running on the box), because they had no awareness the new application even existed. What’s more, automation could actually have made things worse if you had employed a strategy of periodically wiping and rebuilding your machines (so-called “phoenix servers,” another strategy some have tried for reducing drift), because in that case your new app would have been erased from your data center entirely at the next rebuild.

    So how can you get control over drift and avoid these sorts of issues?

    First, the scripts and automations you have running need to be documented: what they do, when they run, and who is responsible for them. With this information, you can make people proactively aware of script and configuration conflicts as part of your change and release management process. This will help you avoid the first example, where the TCP/IP port was unexpectedly closed, because your team can account for the fact that the port range now needs an exception: not only updating the script to reflect it, but also documenting the exception proactively for your auditors.

    Second, with accurate documentation about how your environment and key applications are configured, you can better understand why that dealer order system was running on equipment all by itself (because the tribal knowledge about the end-of-month peak loads was documented), and you can compare the current state against the expected state to identify drift and take action as appropriate. For example, you might raise an incident and assign ownership to the relevant administrators who own the automations for that equipment and/or those applications.
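
    Here is a minimal sketch of that expected-versus-current comparison, using the dealer order system example (the host name, owner, and inventory function are hypothetical stand-ins; a real implementation would query an agent or discovery tool):

    ```python
    # Documented expected state, including the captured tribal knowledge
    EXPECTED = {
        "dealer-order-srv01": {
            "installed_apps": {"dealer-order-system"},
            "owner": "middleware-admins",
            "note": "Dedicated server: month-end order spikes need full capacity",
        },
    }

    def observed_apps(host: str) -> set:
        """Stand-in for a real inventory query (agent, SSH, discovery tool)."""
        return {"dealer-order-system", "new-reporting-app"}  # example data

    def check_drift(host: str) -> None:
        expected = EXPECTED[host]
        unexpected = observed_apps(host) - expected["installed_apps"]
        if unexpected:
            # In practice, raise an incident and assign it to the owner
            print(f"DRIFT on {host}: unexpected {sorted(unexpected)}; "
                  f"notify {expected['owner']} ({expected['note']})")

    check_drift("dealer-order-srv01")
    ```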

    ITinvolve’s Drift Manager can help you implement both capabilities and more. Drift Manager helps you document scripts and automations as well as “gold standard” configuration settings, leveraging information you already have (via importing or federation) while also capturing undocumented tribal knowledge and validating it through social collaboration and peer review. Drift Manager also helps you compare the current versus expected state in real time and then facilitates raising drift incidents when required. What’s more, ITinvolve helps you “broadcast” upcoming configuration changes so all relevant experts are included in your configuration management process and can fully assess the risk and implications, avoiding the kinds of issues discussed above. Finally, it ensures your teams are aware of the policies that govern your resources so that, as configuration changes are being considered, the potential policy impacts are considered at the same time.

    No matter your approach, configuration drift will happen.  The question is, do you know about it when it happens and can you get the right experts quickly engaged to address it without causing other issues?

    Matt Selheimer
    SVP, Marketing


      Merging Creation With Operations

      April 21st, 2014

      How facilitated collaboration enables continuous delivery for successful DevOps

      by John Balena

      So much has been written lately about the challenge of improving IT agility in the enterprise. The best sources of insight on why this challenge is so difficult are the CIOs, application owners, ecommerce and release engineering executives, and VPs of I&O who are grappling with changing their organizations right now.

      At a conference I attended recently, I met two Fortune 100 IT executives from the same company: one the head of development, the other the head of operations. Their story is emblematic of just how hard this is in the real world. As interesting background, the two leaders were childhood best friends, participated in each other’s weddings, and spend time together socially on an almost weekly basis. Yet by their own admission, even they couldn’t get effective collaboration and communication to work between their two organizations.

      The lesson from this example is that the DevOps collaboration and communication challenge cannot be solved by sheer will, desire, or executive fiat. Instead, you must break down the barriers that inhibit collaborative behavior and facilitate new ways of communicating and working together. The old standbys of email, instant messaging, SharePoint sites, and conference calls don’t cut it.

      The challenge of two opposing forces: Dev and Ops

      Imagine yourself helping your children put together a new jigsaw puzzle. Each time you turn your attention to a specific piece, the kids reorganize what you have already completed and add new pieces, but in the wrong places. For sure, three pairs of hands can be better than one, but they can also create chaos and confusion and significantly delay the puzzle’s completion.

      The collaboration challenge in the DevOps movement is grounded in this analogy. How do you get multiple people working together across teams, locations, and time zones to build and get things deployed faster without chaos, confusion, and delay? How do you get these teams to speak the same language and collaborate together with a singular purpose when their priorities and motivations are so different?

      Faced with this challenge, it’s easy to see why many organizations have stayed in their comfort zone of ‘waterfall’ releases and kept the number of releases per year small. The issue is that this method isn’t meeting the demands of the business, the market, and the competition. As a result, more and more business leaders are going around their IT organizations. Options like public cloud, SaaS, open source tools, skunk-works IT, and outsourcing are making it easier for them to control IT decisions and implementations within the business unit or department itself.

      So let’s dive deeper to understand the two forces at the heart of the issue: development (focused on the creation or modification of applications to support a business need) and operations (delivering defined services with stability and quality). It appears these forces are working in opposition, but both groups are focused on doing what leadership asks of them.

      Developers tend to think their job is done once the application is created, but it’s not, because no actual value has been delivered to the business until the application is operational in production. Operations, for its part, is disciplined severely when services experience performance and availability issues, and has come to learn that uncontrolled change is the biggest cause of those issues. As a result, operations teams often believe their number one job is to minimize change in order to better control the impact on performance and availability. This makes operations a barrier to the rapid change required to give the business the speed and agility it needs.

      Critical to enabling DevOps is an explicit recognition of this situation and the ability to link the discrete phases of the application development and operations lifecycle into a fast, continuous flow: from defining requirements, to architecting the design, to building the functionality, to testing the application, to deploying it to both pre-production and production environments, to managing all the underlying infrastructure change required for the application to operate efficiently and effectively in every environment.

      Why current approaches don’t work

      There are several challenges in achieving this ideal.

      1. Developers hate to document (can you blame them?), and, when they do, they communicate in a context they understand, not necessarily in the language that operations speaks. The view from operations is that the documentation they receive is incomplete, confusing, and/or misleading. With the rapid pace of development, this challenge is getting worse, with documentation becoming more and more transient as developers “reconfigure the puzzle” on the fly.
      2. Today’s operations teams typically take responsibility for production environments and their stability. That means there is usually a group wedged in between the two: the quality assurance (QA) team. QA’s job is to validate that the application works as expected, and QA often requires multiple environments for each release. This group is typically juggling multiple releases and is, in essence, working on and reconfiguring multiple puzzles at the same time. The challenge of keeping QA environments in sync with both in-process releases and production can be maddening (just talk to any QA leader and they’ll tell you first-hand). The documentation coming from development is inadequate, and the documentation coming from production is often no better, since most operations teams store much of their most current information about configurations in personal files or simply in their brains.
      3. The ad hoc handoffs from development to operations and QA take time away from development’s primary mission: creating applications that advance the business. Some suggest developers should operate and support what they code in order to reduce handoffs and the risk of information distortion or loss. A fundamental risk with this approach is opportunity cost. Does a developer really understand the latest and greatest technology available for infrastructure and how to flex and scale those technologies to fit the organization’s needs? Do you even want them to, or would you rather they be coding instead?
      4. Others have suggested that operations move upstream and own all environments from dev to QA to production, treating configuration and deployment scripts as code just like a developer would. This may sound like a good option, but it can create a constraint on your operations team and cause valuable intelligence to become hidden in scripts. A particular application deployment could require one or more software packages and potentially hundreds of different configuration settings. If all that information is embedded in a script, how will other team members know it when they go to change the underlying infrastructure to apply a security patch, upgrade an OS version, or make any of the other changes made in IT every day? (A sketch of the difference follows this list.)
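
      To illustrate the last point, here is a sketch of the difference in Python pseudocode (install and set_config are hypothetical stand-ins for whatever your tooling actually does). In the first style, the package versions and settings exist only inside the script; in the second, the same intelligence is declared as data that other team members and tools can inspect before making a change.

      ```python
      def install(name: str, version: str) -> None:   # stand-in for real tooling
          print(f"install {name}=={version}")

      def set_config(key: str, value: str) -> None:   # stand-in for real tooling
          print(f"set {key}={value}")

      # Buried: the only record of these settings is the script itself.
      def deploy_buried() -> None:
          install("app-server", "4.2")
          set_config("max_heap", "2048m")
          set_config("listen_port", "8443")

      # Declared: the same intelligence as data, visible to anyone planning an
      # OS upgrade or security patch on the underlying infrastructure.
      DEPLOYMENT_SPEC = {
          "packages": [{"name": "app-server", "version": "4.2"}],
          "settings": {"max_heap": "2048m", "listen_port": "8443"},
      }

      def deploy_declared(spec: dict) -> None:
          for pkg in spec["packages"]:
              install(pkg["name"], pkg["version"])
          for key, value in spec["settings"].items():
              set_config(key, value)

      deploy_declared(DEPLOYMENT_SPEC)
      ```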

      Real DevOps transformation doesn’t mean you give everyone new jobs. Instead, it’s about creating an environment where teams can collaborate with a common language and where information is immediately available at the point of execution, in a context unique to each team.

      A better way forward?

      In The Phoenix Project, written by DevOps thought leaders Kevin Behr, Gene Kim, and George Spafford, the authors promote the need to optimize the entire end-to-end flow of the discrete processes required for application delivery, applying the same principles that brought agility to discrete manufacturing.

      Manufacturing in the 1980s resembled IT operations today: rigid silos of people and automation, organized for efficiency and low cost, that became a huge barrier to the agility, stability, and quality the market demanded. Manufacturers learned that optimizing each workstation in a plant does not optimize the end-to-end process. They also learned that keeping quality and compliance processes ancillary to the manufacturing process slowed things down, drove up costs, and actually decreased quality and compliance.

      Successful manufacturers took a broader view and optimized end-to-end flow rather than any particular silo. They also brought quality and compliance processes in line with the manufacturing process. By addressing quality and compliance early in the cycle, at the moment an issue occurred, cycle times decreased significantly, costs plummeted, and quality and compliance increased dramatically.

      These same principles can be applied to IT resulting in:

      • faster time to market;
      • greater ability to react to competitive pressures;
      • deployments with fewer errors;
      • continuous compliance with policies; and
      • improved productivity.

      DevOps can best be realized when IT operates in a social, collaborative environment that ensures all groups are working with a visual model, in their own context, with the necessary information from downstream and upstream teams, and can collaborate with relevant experts at the moment clarifications are needed or issues arise.

      Merging creation with operation, the core idea behind DevOps, requires a cultural change and new methods in which cross-functional teams are in a state of continuous collaboration, delivering their piece of the puzzle at the right time, in the right way, and in context with the teams in other silos. Operating something that never existed before requires documentation, so that operations teams have the information they need to manage change with stability and quality.

      With more modern collaboration methods, self-documenting capabilities are now possible as development, release, and operations teams do their respective jobs, including visualization of documentation with analytics and with the perspective and context each downstream team needs to do its job effectively. These capabilities can transform organizational culture and break down the barriers to collaboration that impede agility, stability, and quality.

      Is this simply nirvana, unachievable in the real world? No. Manufacturing achieved these same results by applying these principles; that is the fundamental point of The Phoenix Project.

      The goal is not to write code or to keep infrastructure up and running, or to release new applications or to maintain quality and compliance. Instead, the goal is for IT to take the discrete silos of people, tools, information and automation, and create a continuous delivery environment through facilitated collaboration and communication. This will drive the cultural and operational transformation necessary to enable IT to respond to business needs with agility while ensuring operational stability and quality.

      John Balena is senior vice president of worldwide sales and services at Houston-based ITinvolve. He formerly served as the line of business leader for the DevOps Line of Business at one of the “Big 4” IT management software vendors.

        Create Your Own DevOps Movement

        March 17th, 2014

        Harnessing the power of collaboration to enable a DevOps-driven IT organization.
        by Cass Bishop

        I love tech tools. During my career I have worked for and consulted with many companies, and every time I begin a project I immediately look for tools or frameworks to help me complete things faster. For a guy obsessed with new tech tools, now is a great time to be in IT. Git, JIRA, Jenkins, Selenium, Puppet, Chef, Bladelogic, uDeploy, Docker, and Wily (just to name a few great tools) are providing IT with a big-box hardware store full of tools designed to help solve technical problems. These tools are variously pitched, sold, praised, and cursed during DevOps initiatives, primarily because they are good enough for most needs but still leave some critical gaps.

        With such a list, you can try to check off all the items listed in one of those “X things you need for DevOps” blogs that are published almost daily. “Continuous integration…check.  Automated testing…check. Continuous delivery…check. Automated configuration management…check.  Application Monitoring…check.  So now can I say DevOps…check? ” You probably can’t check that box and I would argue you never will with the above list of tools because, unless your IT department fits in one room and goes for beers together every Thursday, you are missing the most important concept of DevOps: the need for continuous collaboration about your applications in all of their states from development to retirement.

        Most organizations I have worked with aren’t even close to this level of collaboration across development and operations. Teams are often dispersed across the globe, working in different chains of command with different goals. How does a developer in Singapore collaborate with an operations team in Atlanta? Shouldn’t the incredible number of tools in our arsenal be enough to fix this? “We’ll give the operations team accounts in JIRA, Jenkins, and Selenium, then give the developers access to Puppet, Wily, Splunk, and the production VMs. They can send each other links and paths to information in the different tools, and they can collaborate in email, IM, conference calls, and a SharePoint site.” Sounds OK until you realize that each of those email threads and chats, filled with useful information, gets buried in employees’ Outlook folders or chat logs. Also, when was the last time you heard someone ask to attend yet another conference call or use yet another SharePoint site?

        “Maybe we should have them save the chat logs and email threads in the operations wiki, the development Confluence site, or that new SharePoint site?” With these kinds of approaches, you can find the threads through string-based searches, but anyone reading them has no context for how the data points in the discussion relate to actual applications, servers, or any other IT assets. On top of the lack of context, your IT personnel now spend their days hunting for a needle in an ever-growing haystack of data generated by those amazing tools.

        What if, as the now-familiar adage goes, there was an app for that: an application designed to bring all this disconnected data together, make sense of it, display it visually, and offer social collaboration built in?

        With this kind of application, when your middleware admin needs to discuss a problem with the UAT messaging engine, she can now do so in context with the other experts in your organization. Her conversation is saved and directly related to the messaging engine. If the conversation leads to fixing an issue, the lesson learned can be turned into a knowledge entry specific to messaging engines. Now any IT employee can quickly find this knowledge and see who contributed to it the next time there is a messaging engine issue.

        When developers want to collaborate with sys admins about higher memory requirements for their application due to a new feature, they can pull them into a discussion in the feature’s individual activity stream. The admins are alerted on their mobile devices that they have been added to the conversation; they contribute to the activity stream and can even add other participants, like the operations manager, so he can weigh in on devoting more memory to the correct VMs.
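
        As a toy model of the idea (the names are invented for illustration; this is not a description of any product’s internals), each element or feature carries its own conversation, and adding a participant subscribes them to everything that follows:

        ```python
        class ActivityStream:
            """Toy activity stream attached to a single IT element or feature."""

            def __init__(self, subject: str):
                self.subject = subject
                self.participants = set()
                self.posts = []  # saved in context, not lost in an inbox

            def add_participant(self, who: str) -> None:
                self.participants.add(who)
                print(f"[alert -> {who}] added to discussion on {self.subject}")

            def post(self, who: str, text: str) -> None:
                self.posts.append((who, text))
                for person in self.participants - {who}:
                    print(f"[notify -> {person}] {who}: {text}")

        stream = ActivityStream("feature-1234: higher memory requirement")
        stream.add_participant("dev.lee")
        stream.add_participant("sysadmin.kim")
        stream.post("dev.lee", "New caching layer needs +4GB on the app-tier VMs")
        stream.add_participant("ops.manager")  # pull in the operations manager
        ```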

        No tool or application can drop a DevOps culture into your organization; that must come from within. But there are now applications available that provide the data federation, visualization, and contextual collaboration capabilities necessary to help enable cultural change, so you can create your own DevOps movement in your organization.

        Cass Bishop is Director of Consulting Services at Houston-based ITinvolve (www.itinvolve.com). He has been a middleware, automation, and DevOps practitioner for nearly twenty years and has worked on projects in some of the largest IT organizations in the US.

          What’s Your Discipline?

          March 14th, 2014

          For those of us working in IT, the list of disciplines that have been defined over the years to manage an IT organization, and the functions it’s responsible for, is pretty impressive.

          In response, some vendors have built a different application for every discipline, which creates or reinforces silos in processes and organizational entities. At ITinvolve, we’re interested in a different approach: one grounded in information and knowledge sharing, that reinforces collaboration across silos, and that arms IT knowledge workers with the information and analysis they need to do their jobs more effectively without having to go hunting for it across applications.

          Our approach takes the elements you manage in IT every day, from requirements to releases, servers, network devices, firewalls, policies, and much more, and puts the information you need in the context of those elements and the relationships that bind them together.

          In this way, the information you need can be accessed and managed from multiple perspectives. For example:

          • Show me all applications that are governed by our PCI policy
          • Which requirements made the cutline for the next release?
          • If I take this server offline to upgrade the OS, what will be impacted?
          • Who is responsible for this middleware component?
          • Let me see how many instances of that database version we have running in production
          • What’s in our service catalog?
          • Is there a known workaround for this issue?
          • What parameters are in this automation?
          • What are the fragile settings for this application?

          This is a small sample of the powerful types of questions you can ask, with the answers freely available at your fingertips in ITinvolve.
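
          What makes questions like these answerable is a model of elements plus the relationships that bind them, queryable from either end. Here is a minimal sketch of that idea (the edges are invented examples, not ITinvolve’s actual data model):

          ```python
          from collections import defaultdict

          # Elements and the relationships that bind them, as (source, verb, target)
          EDGES = [
              ("pci-policy", "governs", "payment-app"),
              ("pci-policy", "governs", "checkout-app"),
              ("payment-app", "runs_on", "srv-db-01"),
              ("jane.doe", "owns", "mq-broker"),
          ]

          forward = defaultdict(set)   # (source, verb) -> targets
          reverse = defaultdict(set)   # (verb, target) -> sources
          for src, verb, dst in EDGES:
              forward[(src, verb)].add(dst)
              reverse[(verb, dst)].add(src)

          # "Show me all applications that are governed by our PCI policy"
          print(forward[("pci-policy", "governs")])
          # "Who is responsible for this middleware component?"
          print(reverse[("owns", "mq-broker")])
          # "If I take this server offline, what will be impacted?"
          print(reverse[("runs_on", "srv-db-01")])
          ```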

          While ITinvolve doesn’t cover every discipline in IT, we do support quite a few of the disciplines you will find in any good-sized IT organization.

          Check them out:
          Disciplines We Support

          Matt Selheimer
          VP, Marketing


            Agility With Stability

            February 28th, 2014

            Earlier this week, I attended Gartner’s CIO Leadership Forum in Phoenix, Arizona. This event drew 600 CIOs from the US and Latin America as well as a few from “across the pond.” Last week, I attended CIOsynergy Atlanta which drew more than 150 CIOs, CTOs, and VPs of IT from across the Southeast US. At both events there was a strong desire and great interest in how IT organizations can achieve greater agility while ensuring the stability their businesses also demand.

            The challenge of agility with stability expressed itself in different ways depending upon the industry and culture of the IT organization. For example, in Atlanta, I spoke with the head of mobility for a major US department store who is focused on enabling greater agility in the consumer mobile experience but is challenged by the integrations required with legacy systems and by PCI requirements. Another IT leader in Atlanta, working at a major hotel chain, said he felt he had the PCI challenge under control but struggled to avoid unforeseen impacts from IT changes. One of the CIO panelists, who heads up IT for a multi-billion-dollar heavy manufacturer, described her agility challenge this way: “We need to do a much better job of documenting the spaghetti we’ve built up in IT; we need a living picture of all the relationships that tie our systems together.” It was this lack of documentation and understanding of dependencies that she felt was the critical challenge holding back the transformation of her IT organization into a more agile one.

            In Phoenix at the Gartner CIO Forum, I spoke with the CIO of a large regional university. He said that he had a very entrenched culture in his IT organization and was going to follow Gartner’s recommendation for “bi-modal” IT and set up a separate team chartered with driving agile development projects while ensuring the existing operations team knew how their day-to-day work in “running the business” was equally critical to the university. I also spoke with the CIO of a major electronics manufacturer. She had grown up within the IT organization and knew first-hand how entrenched behaviors and tribal knowledge were major risks to her evolving to a more agile organization.  The CIO of a major international financial services company put it this way, “I have 3,000 batch jobs and do not know exactly what they do, what applications they support and connect to, and who is responsible for them.”

            I could go on with more examples, but this is a pretty good microcosm of the challenges facing the CIO today when trying to deliver greater agility while ensuring operational stability. What I take away from both events, and the dozens of conversations I had, is that today’s enterprise CIOs know they need to be more agile but are genuinely concerned about how that will disrupt service delivery. It seems to be a no-win situation: if you don’t move faster, IT is a bottleneck; if you do move faster and break things, IT is unreliable. What’s a CIO to do?

            At ITinvolve, we’ve been working on this problem for nearly three years now. Actually, these challenges aren’t brand new, and we’ve been thinking about them since before the company was founded. That’s what led us to create a new IT management software company: a company dedicated to getting to the heart of the matter and solving this challenge. We believe today’s CIO needs to provide their organization with an application that brings together and proactively shares the collective knowledge within the IT organization (both systems-based and tribal), offers robust and highly visual analysis of upstream and downstream impacts (not constrained by hierarchical dependency maps), and facilitates collaboration among the relevant IT experts and business stakeholders.

            With such an application, IT organizations can be more agile while avoiding unexpected outcomes that disrupt the stability and performance of services to the business. Most CIOs don’t think this is possible and are genuinely grappling with how to deliver the seemingly paradoxical agility with stability that the business demands. That is, until they meet ITinvolve and see how it’s possible to move faster, be more nimble, and still deliver reliable services to the business.

            The secret, if there is one, is People Powered IT, and only ITinvolve has it. See how it works for yourself.

            Matt Selheimer
            VP, Marketing


              5 Problems With the IT Industrial Revolution

              January 22nd, 2014

              Over the last several years there’s been lots of talk about the need for an ‘industrial revolution’ in IT. We’re actually pretty big fans of the metaphor here at ITinvolve.

              I think it’s well accepted that IT needs to improve both its speed of service delivery and quality. These are classic benefits from any industrialization effort, and they both create ripple-effect benefits in other areas too (e.g. ability to improve customer service, increased competitiveness).

              But despite all the talk and recommendations (e.g. adopt automation tools, get on board with DevOps), there are five common problems that stand in the way of the IT industrialization movement. A recent Forrester Consulting study commissioned by Chef gives us some very useful, empirical data to call these problems out for action.

              #1 – First Time Change Success Rates aren’t where they need to be. 40% of Fortune 1000 IT leaders say they have first time change success rates below 80% or simply don’t know, and another 37% say their success rates are somewhere between 80% and 95%. You can’t move fast if you aren’t able to get it right the first time, because it not only slows you down to troubleshoot and redo, but it hurts your other goal of improving quality.

              #2 – Infrastructure Change Frequency is still far too slow. 69% of Fortune 1000 IT leaders say it takes them more than a week to make infrastructure changes. With all the talk and adoption of cloud infrastructure-as-a-service, these numbers are just staggering. Whether you are making infrastructure changes to improve performance, reliability, security, or to support new service deliveries, we have to get these times down to daily or (even better) as needed. There are a lot of improvements to be made here.

              #3 – Application Change Frequency is just as bad. 69% of Fortune 1000 IT leaders say it takes them more than a week to release application code into production. Notice that it doesn’t say “to develop, test, and release code into production.” We’re talking about just releasing code that has already been written and tested. 41% say it still takes them more than a month to release code into production. Hard to believe, but the data is clear.

              #4 – IT breaks things far too often when making changes. 46% of Fortune 1000 IT leaders reported that more than 10% of their incidents were the result of changes that IT made. Talk about hurting end-user satisfaction and the perception of IT quality. What’s worse, though, is that 31% said they didn’t even know what percentage of their incidents were caused by changes made by IT!

              #5 – The megatrends (virtualization, agile development, cloud, mobile) are intensifying the situation. As the report highlights, these trends “cause complexity to explode in a nonlinear fashion.”

              So what can you do about this if you believe that “industrialization” and, therefore, automation is the answer (or at least a big part of the answer)? Well, first, you have to make sure your automation is intelligent, meaning informed and accurate. Because we all know that doing the wrong things faster will make things worse faster.

              This is the problem we’re focused on at ITinvolve: helping IT operations and developers by giving them the knowledge and analysis they need, then facilitating collaboration to validate accuracy. Good automation must be driven by a model that fully comprehends the current state of configuration, the desired state, and the necessary changes and risks to get there. Only when armed with this information can automation engineers effectively build out the scripts, run books, and so on to deliver agility with stability and quality.
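
              In code terms, informed automation means deriving actions from an explicit model of current and desired state rather than hard-coding them. A minimal sketch, with invented configuration keys:

              ```python
              current = {"os_patch": "2013.11", "app_version": "3.1", "tls": "off"}
              desired = {"os_patch": "2014.01", "app_version": "3.2", "tls": "on"}

              def plan_changes(current: dict, desired: dict) -> list:
                  """Derive the necessary changes from the two states."""
                  return [
                      f"set {key}: {current.get(key, '<absent>')} -> {value}"
                      for key, value in desired.items()
                      if current.get(key) != value
                  ]

              # These steps feed the scripts and run books once risk is assessed.
              for step in plan_changes(current, desired):
                  print(step)
              ```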

              Matt Selheimer
              VP, Marketing

              Published in APM Digest.

                What a Year!

                January 8th, 2014

                2013 was an incredible year for ITinvolve.

                We grew our business dramatically

                We signed quite a few new customers and grew our subscription bookings by over 500% year over year. But more than that, we helped these organizations transform how they collaborate, share knowledge, and make more informed decisions across IT operations and with application development and business stakeholders. Check out what Dave Colesante of AlertLogic had to say about the value of ITinvolve for his organization.

                We racked up quite a few awards and acknowledgements

                In January, we were named a finalist for Pink Elephant’s Innovation of the Year Award. In May, Gartner named us one of a very select group of ‘Cool Vendors in IT Operations Management’.  We were also named ‘Best-in-Class’ for Knowledge Management by the independent ITSM Review in September. (Check out more of what the analysts have been saying about us.)

                In 2013, ITinvolve was featured over two dozen times in many leading tech publications and blogs including: American Banker, APM Digest, CIO.com, CIO Insight, ComputerWeekly, HDI Connect, itsmTV, The ITSM Review, NetworkComputing, NetworkWorld, and more.

                We delivered hundreds of innovations and enhancements in four product releases

                In February, we released Winter ’13, featuring collaborative scenario planning and integration with third-party ITSM process solutions. In June, we released Summer ’13, with unique support for tagging and advanced relationship search, advanced notification handling, expanded configurability and extensibility (without coding), and a consumer-oriented self-service portal and service catalog. In November and December, we released Fall ’13 and Winter ’14, accelerating business agility and IT responsiveness with a unified service portal that supports both common and differentiated services for external customers, partners, and employees from a single service catalog, as well as federation of knowledge across both ITinvolve and third-party sources, automatic creation and visualization of environment relationships, advanced change approval handling, and many usability enhancements.

                Looking ahead to 2014

                But we’re not resting on these tremendous successes from 2013. In fact, 2014 is already off to a great start, with ITinvolve shortlisted as Most Promising Start-Up for 2013-14 by The Cloud Awards! We have a great new release coming up for Spring ’14, and we are seeing a lot of traction in the marketplace around how ITinvolve’s agility application helps IT organizations deliver the agility with stability that the business demands.

                As our CEO, Logan Wray, recently said: “For too long, IT leaders and professionals have struggled with incomplete and untrusted information, substantial risk when deploying new applications or making changes, and poor collaboration across IT and with business functions. In 2013, we raised the bar and some eyebrows by arming transformational leaders with the breakthrough IT agility application they need to help their businesses respond faster to opportunities and threats than ever before. In 2014, we are forecasting a continued expansion in the number of large enterprise customers relying on ITinvolve to help them implement DevOps and improve their agility.”

                Be sure to check out the problems we can help you solve and why our approach is truly innovative.

                The best is yet to come!

                Matt Selheimer
                VP, Marketing


                  DevOps Needs a Place to Work

                  December 13th, 2013

                  Because of the roots of DevOps within the Agile Software Development movement, there is a strong theme of “individuals and interactions over processes and tools” within the DevOps community (see agilemanifesto.org for more). To a significant extent, this attitude has been taken to mean tools are not really necessary and everyone can or should roll their own approach so long as they follow DevOps principles (for a good DevOps primer, check out the Wikipedia page here and the dev2ops blog here).

                  More recently, the DevOps community has begun to embrace a variety of automation and scripting tools, notably from companies like Puppet Labs and Chef, because DevOps practitioners have recognized that doing everything by hand is both tedious and highly prone to error. That has led to a new term, “infrastructure as code” (Dmitriy Samovskiy has a quick primer on his blog here). But beyond automation (and, to a lesser extent, monitoring tools), the DevOps community hasn’t fully embraced the need for other types of tools to aid in DevOps work.
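
                  The core of infrastructure as code is declaring the state you want and letting an idempotent routine converge the machine toward it. Here is a minimal illustration of that idea in Python (not Puppet’s or Chef’s actual syntax; the file path and contents are invented):

                  ```python
                  import os

                  # Declare the desired state rather than scripting one-off steps.
                  DESIRED_FILES = {
                      "/tmp/demo-motd": "Welcome to the demo host\n",
                  }

                  def converge() -> None:
                      for path, content in DESIRED_FILES.items():
                          actual = None
                          if os.path.exists(path):
                              with open(path) as f:
                                  actual = f.read()
                          if actual != content:        # act only when reality differs
                              with open(path, "w") as f:
                                  f.write(content)
                              print(f"converged {path}")

                  converge()  # idempotent: running it again changes nothing
                  ```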

                  What’s more, despite this evolution around the need for automation tools, and the recognition that individuals and interactions are key, there are still a lot of walls in most organizations that impede the DevOps vision for continuous delivery of new applications. Quoting from dev2ops:

                  Development-centric folks tend to come from a mindset where change is the thing that they are paid to accomplish. The business depends on them to respond to changing needs. Because of this relationship, they are often incentivized to create as much change as possible.

                  Operations folks tend to come from a mindset where change is the enemy.  The business depends on them to keep the lights on and deliver the services that make the business money today. Operations is motivated to resist change as it undermines stability and reliability.

                  Both development and operations fundamentally see the world, and their respective roles in it, differently. Each believe [sic] that they are doing the right thing for the business… and in isolation they are both correct!

                  Adding to the Wall of Confusion is the all too common mismatch in development and operations tooling. Take a look at the popular tools that developers request and use on a daily basis. Then take a look at the popular tools that systems administrators request and use on a daily basis. With a few notable exceptions, like bug trackers and maybe SCM, it’s doubtful you’ll see much interest in using each others [sic] tools or significant integration between them. Even if there is some overlap in types of tools, often the implementations will be different in each group.

                  Nowhere is the Wall of Confusion more obvious than when it comes time for application changes to be pushed from development [to] operations. Some organizations will call it a “release” some call it a “deployment”, but one thing they can all agree on is that trouble is likely to ensue. 

                  Again, despite the recognition that some level of automation tooling for DevOps is needed, and despite the fact that individuals and interactions are seen as critical, the DevOps community hasn’t really developed a strong opinion on exactly how Dev and Ops should work together and precisely where they should do so.

                  Julie Craig of Enterprise Management Associates describes the need pretty well in a recent whitepaper:

                  “…their tools must interoperate at some level to provide a foundation for collaborative support and Continuous Delivery.”

                  “DevOps-focused toolsets provide a common language that bridges skills, technical language, and personalities. In other words, they enable diverse personnel to seamlessly collaborate.”

                  “…tools must interoperate to support seamless collaboration across stages…data must also be shared as software moves from one stage to the next.”

                  Now, it’s all well and good to talk about the need for integration across tools and more collaboration, but where and how should Dev and Ops actually get work done together? Where and how do they best exchange information and knowledge about releases in process, engage with business stakeholders to validate business requirements, and notify stakeholders of changes to functional specs and operational requirements? Where do they go to get an accurate understanding of the full stack required for deployment, to understand disparities and drift between pre-production and production environments, and to collaborate on deployment plans and the potential risks that should be mitigated?

                  These are just a few examples of the DevOps work that must take place to enable continuous delivery, but unfortunately most DevOps practitioners are either trying to use outmoded approaches or rejecting tools as viable for addressing these needs. For example, teams have tried using wikis and SharePoint sites; “It’s on the wiki” is an all too common refrain. Or they have fallen back on endless meetings, email chains, and real-time IMs that are limited to select participants, with knowledge that is shared and then lost in an inbox or disappears when the IM window is closed. And most DevOps practitioners will tell you they have rejected the CMDB and service support change management tools as well, because they (a) don’t trust the data in their company’s CMDB (or perhaps multiple CMDBs) and (b) believe traditional ITIL change tools are far too process-heavy and actually work against the goals of agile development and delivery.

                  What we need instead is a place where Dev and Ops teams can actually work together and collaborate with the business, all the way from requirements planning to post-deployment issue resolution. This new workspace shouldn’t replace the tools each group is already using, and it should work with existing ITIL tools too. Instead, its purpose is to provide a unifying layer that brings together the relevant information and knowledge across the DevOps lifecycle and employs modern social collaboration techniques to notify and engage individuals based on what they are responsible for and have opted into caring about. What’s more, it should leverage information from CMDBs and discovery tools along with a range of other information sources, and provide a mechanism for peer review to validate this information continuously, fill in gaps, and correct bad information, so that Dev and Ops practitioners have a place they can go to access all the information they need to do their daily work efficiently and make accurate and timely decisions that move the business forward.

                  With a new DevOps workspace like this, we can finally overcome the limitations of traditional IT management tools, outmoded collaboration practices, and embrace tools that are built to support DevOps practitioners and their interactions. It’s what we call an IT agility application, and it’s what we offer at ITinvolve. You can read more about how it works in this in-depth use case document.

                  Matt Selheimer
                  VP, Marketing


                    The Ends Justify The Work

                    December 2nd, 2013

                    My first big corporate IT job was with a major electronics company. I did desktop and server support for the smallest business unit in the company that had maybe 200 employees and just north of a billion dollars in annual sales. I loved working for the smallest group in a big company, because I got to know most of the 200 folks that worked there and I had access to all of the resources of a Global 50 company.

                    About 6 months into the job, the Director of my group left and a new guy was brought in. The new guy was fanatical about customer service and creating a partnership with the business. I immediately liked him. Unfortunately, not too many others did. He wanted to change things in a big way, but no one else wanted to change. He only lasted a year, but in that year he did a few things that helped shape how I approach my work to this day.

                    One of his first actions was to send everyone in IT to a 7 Habits of Highly Effective People class. Being the good corporate citizen that I am, I negotiated attending the class in San Diego as it was only 90 minutes away from a sales office that I supported just outside of LA. I could save the company money by combining the 7 Habits class with a site visit to the office. Did I forget to mention that this was in February and it was zero degrees where I lived?

                    I loved the class. Twenty years later, the primary thing I remember and try to use daily is Habit 2: Begin with the end in mind.

                    Basically, how can you get somewhere if you don’t really know where you’re going? This is amazingly relevant to folks in IT, yet rarely practiced. We usually do a good job at our assigned tasks, but do we really understand why we’re doing them? Everyone from code developers to infrastructure people to support and operations folks is working toward a common goal: to move our respective businesses forward. Unfortunately, we’re usually too busy being heads-down on our individual tasks to see and understand what we’re really doing.

                    My company, like most companies, had a set of annual goals. As I recall, they went something like this:

                    • Deliver great products and service
                    • Service existing customers well
                    • Innovate to deliver new products
                    • Improve the quality of existing products
                    • Improve customer retention

                    IT plays a key role in the achievement of, or failure to achieve, these goals. For example, the project you’ve just been assigned to that requires you to work late nights over several months wasn’t funded and put in place simply to keep you busy. It was funded and put in place to achieve one of those company goals. Unfortunately, many rank-and-file IT practitioners aren’t really aware of their employer’s goals, let alone how their daily work supports them. I’d go so far as to say that many managers, directors, and even higher-ups would struggle to list the company goals for the year.

                    So, why is this a big deal? Some might say, “I don’t need my database developer to be aware of what the company is doing. I pay her to develop databases.” That’s both correct and completely wrong at the same time. Yes, you might hire her to develop databases, but what is your company actually paying her for (hint: it’s not just to develop databases)? She is a member of the company as a whole, not just some cog in an IT wheel. She is a stakeholder in the company’s success, and chances are she’s pretty good at what she does; otherwise she would have been let go or outsourced.

                    Regardless of how good your IT team members are, I think the majority of them can make more of an impact and feel satisfied by their work just by understanding why the work they do is important to the business.

                    Case in point: I remember being in a meeting at a pharmaceutical company I worked for some years later. The purpose of the meeting was to discuss how we were going to roll out a new application to support what our Research and Development teams were doing. The conversation got way down into the technical weeds, and I blurted out something like, “It doesn’t need to be this hard. It’s not like we’re trying to cure cancer here.” A very quiet gentleman at the head of the table cleared his throat, stood up, walked over, and gave me his business card. He was the VP of Oncology R&D. He was trying to cure cancer. My company was trying to cure cancer, and that meant each of us in IT was trying to cure cancer, including me. Talk about a wake-up call.

                    That one meeting was a shot in the arm, reminding me to always practice Habit 2: Begin with the end in mind. If you know where you’re going, it’s much easier to get there, and you’ll make better decisions along the way.

                    After that meeting, I put some time on my Director’s calendar and asked him to work with me over the coming days to help me map what my team did to what the company’s business goals were. In some cases, it wasn’t that easy. How did a new storage array contribute to releasing 10 new pharmaceutical compounds?  How did a new desktop image enable revenue growth of at least 7%?

                    This is our challenge in IT, and you have to figure it out. Teach your peers and the folks on your team that by organizing your projects and tasks according to business goals, it will be easier for everyone to prioritize work and make better decisions. It will also be easier for everyone to get through their end-of-year review and answer questions like, “What did you do last year that was valuable to the company?”

                    What we do in IT matters. It matters to our customers. It matters to our employers and it should matter to us. But, first, you must understand how your work enables the business to achieve its goals. Only then, will you fully understand why your work really matters.

                    If you’re a rank-and-file member of IT who can’t get answers from your boss to help you connect your work to business goals, then keep pushing, but also take the initiative yourself. Read your company’s annual reports and SEC filings. Maybe even take the bold step of requesting a meeting with your CIO. Walk into that meeting and say, “Hi, I work in this department and I do X. I want to help our company move forward as best as I can. I’ve been reading about our company goals and I want to be sure the work I do supports them. Can you help me validate that the work I am doing is what the company wants me to focus on to help deliver on our goals?” You may be very surprised (in a good way) at the response you get.

                    Joe Rogers
                    Director of Technical Services
