ITinvolve Blog

Archive for the ‘Social IT’ Category

Improving Configuration Management: Getting Control Over Drift

Monday, April 28th, 2014

Configuration drift poses a number of challenges to your IT organization and your business: for example, the risk of non-compliance with security policies, performance and availability issues, and failed deployments of new application releases.

To address drift, most IT organizations now employ some combination of scripts (or automation tools) and a configuration management database (CMDB), and have defined a software configuration management approval process.  Despite these efforts, we find that configuration drift still occurs frequently in large enterprises.

Why is this the case?

First, if you are like most IT organizations, you probably follow the 80/20 rule, with your administrators focusing 80% of their time on the configuration elements they consider most important to their roles. That leaves quite a gap where drift can still occur.  What’s more, if you are using scripts and automation tools to enforce configurations, keep in mind that these approaches rely on an explicit formula: you have to specify exactly which configuration settings to enforce and when.  Any setting you haven’t gotten around to specifying is left wide open; it can be changed, and additional software can be installed, in ways that cause problems.

For example, let’s say that your security policy states that a certain range of TCP/IP ports should not be open on a certain class of servers.  You might reinforce this policy with an automation script that routinely verifies port status and closes any ports in the range that may have been opened through some other means.  Sounds like you’ve got things covered, right?  Well, what if one of those ports was opened as part of a change process to deploy a new application to one of those servers, and what if those working on the project knew nothing about the TCP/IP port enforcement script?  They deploy the new application, test it to make sure all is working well, and then send out the email to the user community letting them know the new application has been launched. A great day for IT enabling the business!

Then, overnight (or the next time the script is scheduled to run), the port is closed.  Users come into work the next day and are unable to access the new application.  Calls start coming into your service desk, an all-hands-on-deck meeting is hastily assembled, and, after some period of time, the port closure is identified as the issue and the port is reopened, only to be closed again the next time the script runs.  Finally, someone realizes the script is the underlying cause, most likely because the person who wrote it is long gone and the only documentation was a notation in an audit report that a script was the chosen enforcement mechanism.
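To make this failure mode concrete, here is a minimal sketch of what such an enforcement script often looks like, written in Python for illustration. The port range, the use of ss and iptables, and the cron schedule are all hypothetical details; a real script would run as root and handle errors.

    import subprocess

    # Hypothetical policy: ports 9000-9010 must stay closed on this server class.
    BLOCKED_PORTS = range(9000, 9011)

    def enforce_port_policy():
        """Close any listening TCP port in the blocked range."""
        # List listening TCP ports (Linux; column 4 of 'ss -ltn' is Local Address:Port).
        output = subprocess.run(
            ["ss", "-ltn"], capture_output=True, text=True, check=True
        ).stdout
        for line in output.splitlines()[1:]:
            port = int(line.split()[3].rsplit(":", 1)[1])
            if port in BLOCKED_PORTS:
                # Drop inbound traffic to the port. Nothing is logged and no one
                # is notified, which is exactly the problem described above.
                subprocess.run(
                    ["iptables", "-A", "INPUT", "-p", "tcp",
                     "--dport", str(port), "-j", "DROP"],
                    check=True,
                )

    if __name__ == "__main__":
        enforce_port_policy()  # typically scheduled from cron, e.g. nightly

The point is not the script itself but what it lacks: nothing in it announces its existence to the teams deploying new applications.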

Consider another example, where an application has very low utilization most days except for spikes of activity at the end of each month (such as an application that accepts orders from a dealer network).  Let’s say an engineer is looking for available equipment to install a new application on and identifies the server running the dealer order system as a good candidate because of its strong specs and low average utilization.  He installs the new app, and everything works great until the end of the month, when the dealer network comes alive with hundreds of orders every hour.  Now, because two applications are vying for the same physical components, we start to see performance issues and scramble to move the new application to other hardware, taking it offline in the process and putting it on an available server with lesser specs, causing it to run slower than before and irritating the user community even further.  In this scenario, your automation scripts would have done nothing to prevent this drift from the expected configuration (i.e., the dealer order system is the only application running on this box), because they would have no awareness that the new application even existed.  What’s more, automation could actually have made things worse if you had employed a strategy of periodically wiping and rebuilding your machines (these are referred to as “phoenix servers,” and it’s another strategy some have tried in order to reduce drift), because in that case your new app would have been erased from your data center entirely at the next rebuild.

So how can you get control over drift and avoid these sorts of issues?

First, the scripts and automations you have running need to be documented, including what they do, when they run, and who is responsible for them.  With this information, you can proactively make people aware of script and configuration conflicts as part of your change and release management process.  This will help you avoid the first example, where the TCP/IP port was unexpectedly closed, because your team can account for the fact that there needs to be an exception to your TCP/IP port range, not only updating the script to reflect this but also documenting the exception proactively for your auditors.
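What might that documentation look like? As a minimal sketch (every field and value here is hypothetical), even a simple structured record kept in a shared, searchable registry goes a long way toward surfacing conflicts during change planning:

    # Hypothetical metadata record for the port-enforcement script above.
    port_enforcement_script = {
        "name": "close_blocked_tcp_ports",
        "purpose": "Enforce security policy SEC-114: ports 9000-9010 closed",
        "schedule": "daily at 02:00 via cron",
        "targets": "all servers in the 'dmz-web' class",
        "owner": "jane.admin@example.com",
        "exceptions": [
            {"port": 9004, "reason": "dealer order app rollout",
             "approved_by": "change CR-2041"},
        ],
    }

The exact fields matter less than the habit: if a record like this lives where change planners actually look, the deployment team in the earlier example would have discovered the script before go-live.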

Second, with accurate documentation about how your environment and key applications are configured, you can better understand why that dealer order system was running on equipment all by itself (because the tribal knowledge about the end-of-month peak loads was documented), and you can compare the current state against the expected state to identify drift issues and take action as appropriate.  For example, you might trigger an incident and assign ownership to the relevant administrators who own the automations for that equipment and/or application.
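The comparison itself can be straightforward once the expected state is documented. Here is a minimal sketch in Python, using made-up settings drawn from the examples above:

    def find_drift(expected: dict, current: dict) -> dict:
        """Return settings whose current value differs from the expected one,
        plus anything present that was never documented at all."""
        drift = {}
        for key, want in expected.items():
            have = current.get(key, "<missing>")
            if have != want:
                drift[key] = {"expected": want, "current": have}
        for key in current.keys() - expected.keys():
            drift[key] = {"expected": "<undocumented>", "current": current[key]}
        return drift

    expected = {"installed_apps": ("dealer-orders",), "port_9004": "open"}
    current  = {"installed_apps": ("dealer-orders", "new-app"), "port_9004": "closed"}

    for setting, detail in find_drift(expected, current).items():
        print(f"DRIFT {setting}: {detail}")  # candidates for drift incidents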

ITinvolve’s Drift Manager can help you implement both capabilities and more.  Drift Manager helps you document scripts and automations as well as “gold standard” configuration settings, leveraging information you already have (via importing or federation) while also capturing undocumented tribal knowledge and validating it through social collaboration methods and peer review.  Drift Manager also helps you compare the current vs. expected state in real time and then facilitates raising drift incidents when required.  What’s more, ITinvolve helps you “broadcast” upcoming configuration changes so all relevant experts are included in your configuration management process and can fully assess the risks and implications, avoiding the kinds of issues discussed above.  Finally, it ensures your teams are aware of the policies that govern your resources so that, as configuration changes are being considered, the potential policy impacts are considered at the same time.

No matter your approach, configuration drift will happen.  The question is, do you know about it when it happens and can you get the right experts quickly engaged to address it without causing other issues?

Matt Selheimer
SVP, Marketing

Agility With Stability

Friday, February 28th, 2014

Earlier this week, I attended Gartner’s CIO Leadership Forum in Phoenix, Arizona. This event drew 600 CIOs from the US and Latin America as well as a few from “across the pond.” Last week, I attended CIOsynergy Atlanta, which drew more than 150 CIOs, CTOs, and VPs of IT from across the Southeast US. At both events there was strong interest in how IT organizations can achieve greater agility while ensuring the stability their businesses also demand.

The challenge of agility with stability expressed itself in different ways depending upon the industry and the culture of the IT organization. For example, in Atlanta I spoke with the head of mobility for a major US department store, who is focused on enabling greater agility in the consumer mobile experience but is challenged by the integrations required with legacy systems and by PCI requirements. Another IT leader in Atlanta, working at a major hotel chain, said he felt he had the PCI challenge under control but struggled to avoid unforeseen impacts from IT changes. One of the CIO panelists, who heads up IT for a multi-billion dollar heavy manufacturer, described her agility challenge this way: “We need to do a much better job of documenting the spaghetti we’ve built up in IT; we need a living picture of all the relationships that tie our systems together.” It was this lack of documentation and understanding of dependencies that she felt was the critical challenge holding back her transformation of the IT organization into a more agile one.

In Phoenix at the Gartner CIO Forum, I spoke with the CIO of a large regional university. He said that he had a very entrenched culture in his IT organization and was going to follow Gartner’s recommendation for “bi-modal” IT and set up a separate team chartered with driving agile development projects while ensuring the existing operations team knew how their day-to-day work in “running the business” was equally critical to the university. I also spoke with the CIO of a major electronics manufacturer. She had grown up within the IT organization and knew first-hand how entrenched behaviors and tribal knowledge were major risks to her evolving to a more agile organization.  The CIO of a major international financial services company put it this way, “I have 3,000 batch jobs and do not know exactly what they do, what applications they support and connect to, and who is responsible for them.”

I could go on with more examples, but this is a pretty good microcosm of the challenges facing the CIO today when trying to deliver greater agility while ensuring operational stability. What I take away from both events and the dozens of conversations I had is that today’s enterprise CIOs know they need to be more agile but are genuinely concerned about how that will disrupt service delivery. It seems to be a no-win situation – if you don’t move faster, IT is a bottleneck; and if you do move faster and break things, IT is unreliable. What’s a CIO to do?

At ITinvolve, we’ve been working on this problem for nearly three years now. Actually, these challenges aren’t really brand new and we’ve been thinking about them since before the company was founded. That’s what led us to create a new IT management software company – a company dedicated to getting to the heart of the matter and solving this challenge. We believe today’s CIO needs to provide their organization with an application that brings together and proactively shares the collective knowledge within the IT organization (both systems-based as well as tribal), offers robust and highly-visual analysis of upstream and downstream impacts (not constrained by hierarchical dependency maps), and facilitates collaboration among the relevant IT experts and business stakeholders.

With such an application, IT organizations can be more agile while avoiding unexpected outcomes that disrupt the stability and performance of services to the business. Most CIOs don’t think this is possible and are genuinely grappling with how to deliver the seemingly paradoxical agility with stability that the business demands. That is, until they meet ITinvolve and see how it’s possible to move faster, be more nimble, and still deliver reliable services to the business.

The secret, if there is one, is People Powered IT, and only ITinvolve has it. See how it works for yourself.

Matt Selheimer
VP, Marketing

DevOps Needs a Place to Work

Friday, December 13th, 2013

Because of the roots of DevOps within the Agile Software Development movement, there is a strong theme of “individuals and interactions over processes and tools” within the DevOps community (see agilemanifesto.org for more). To a significant extent, this attitude has been taken to mean tools are not really necessary and everyone can or should roll their own approach so long as they follow DevOps principles (for a good DevOps primer, check out the Wikipedia page here and the dev2ops blog here).

More recently, the DevOps community has begun to embrace a variety of automation and scripting tools, notably those from companies like Puppet Labs and Chef, because DevOps practitioners have recognized that doing everything by hand is both tedious and highly prone to error. That has led to a new term, “infrastructure as code” (Dmitriy Samovskiy has a quick primer on his blog here). But beyond automation (and, to a lesser extent, monitoring tools), the DevOps community hasn’t fully embraced the need for other types of tools to aid in DevOps work.
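For readers new to the term, the gist of “infrastructure as code” is that desired state is declared as data, and an engine converges each machine toward it. Here is a minimal sketch in plain Python rather than an actual Puppet or Chef DSL; the resources shown are hypothetical:

    # Desired state, declared as data: the essence of infrastructure as code.
    DESIRED_STATE = [
        {"type": "package", "name": "nginx", "ensure": "installed"},
        {"type": "service", "name": "nginx", "ensure": "running"},
        {"type": "file", "name": "/etc/nginx/nginx.conf", "ensure": "present"},
    ]

    def converge(resource: dict) -> None:
        """Bring one resource to its declared state (stubbed for this sketch)."""
        # A real engine would inspect current state and act only on drift.
        print(f"ensuring {resource['type']} {resource['name']} is {resource['ensure']}")

    for resource in DESIRED_STATE:
        converge(resource)

Because the declaration is data, it can be versioned, reviewed, and tested like any other code, which is what makes the approach attractive to both Dev and Ops.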

What’s more, despite this evolution around the need for automation tools, and the recognition that individuals and interactions are key, there are still a lot of walls in most organizations that impede the DevOps vision for continuous delivery of new applications. Quoting from dev2ops:

Development-centric folks tend to come from a mindset where change is the thing that they are paid to accomplish. The business depends on them to respond to changing needs. Because of this relationship, they are often incentivized to create as much change as possible.

Operations folks tend to come from a mindset where change is the enemy.  The business depends on them to keep the lights on and deliver the services that make the business money today. Operations is motivated to resist change as it undermines stability and reliability.

Both development and operations fundamentally see the world, and their respective roles in it, differently. Each believe [sic] that they are doing the right thing for the business… and in isolation they are both correct!

Adding to the Wall of Confusion is the all too common mismatch in development and operations tooling. Take a look at the popular tools that developers request and use on a daily basis. Then take a look at the popular tools that systems administrators request and use on a daily basis. With a few notable exceptions, like bug trackers and maybe SCM, it’s doubtful you’ll see much interest in using each others [sic] tools or significant integration between them. Even if there is some overlap in types of tools, often the implementations will be different in each group.

Nowhere is the Wall of Confusion more obvious than when it comes time for application changes to be pushed from development [to] operations. Some organizations will call it a “release” some call it a “deployment”, but one thing they can all agree on is that trouble is likely to ensue. 

Again, despite the recognition that some level of automation tooling for DevOps is needed, and despite the fact that individuals and interactions are seen as critical, the DevOps community hasn’t really developed a strong opinion on exactly how Dev and Ops should work together and precisely where they should do so.

Julie Craig of Enterprise Management Associates, describes the need pretty well in a recent whitepaper:

“…their tools must interoperate at some level to provide a foundation for collaborative support and Continuous Delivery.”

“DevOps-focused toolsets provide a common language that bridges skills, technical language, and personalities. In other words, they enable diverse personnel to seamlessly collaborate.”

“…tools must interoperate to support seamless collaboration across stages…data must also be shared as software moves from one stage to the next.”

Now, it’s all well and good to talk about the need for integration across tools and more collaboration, but where and how should Dev and Ops functions actually get work done together? Where and how do they best exchange information and knowledge about releases that are in process, engage with business stakeholders to validate business requirements, and notify stakeholders of changes to functional specs and operational requirements? Where do they go to get an accurate understanding of the full stack required for deployment, to understand disparities and drift between pre-production and production environments, and to collaborate on deployment plans and the potential risks that should be mitigated?

These are just a few examples of the DevOps work that must take place to enable continuous delivery, but unfortunately most DevOps practitioners are trying to use outmoded approaches or are rejecting tools as viable means of addressing these needs. For example, teams have tried using wikis and SharePoint sites; “It’s on the wiki” is an all too common refrain. Or they have fallen back on endless meetings, email chains, and real-time IMs that are limited to select participants, with knowledge that is shared and then lost in an inbox or disappears when the IM window is closed. And most DevOps practitioners will tell you they have rejected the CMDB and service support change management tools as well, because they a) don’t trust the data in their company’s CMDB (or perhaps multiple CMDBs) and b) believe traditional ITIL change tools are far too process-heavy and actually work against the goals of agile development and delivery.

What we need instead is a place where Dev and Ops teams can actually work together and collaborate with the business – all the way from requirements planning to post-deployment issue resolution. This new workspace shouldn’t replace the tools that each group is already using and it should work with existing ITIL tools too. Instead, its purpose is to provide a unifying layer that brings together the relevant information and knowledge across the DevOps lifecycle, and employs modern social collaboration techniques to notify and engage individuals based on what they are responsible for and have opted into caring about. What’s more, it should leverage information from CMDBs and discovery tools along with a range of other information sources, and provide a mechanism for peer review to validate this information continuously, fill in gaps, and correct bad information too – so that Dev and Ops practitioners have a place they can go to access all the information they need to do their daily work efficiently and make accurate and timely decisions that move the business forward.

With a new DevOps workspace like this, we can finally overcome the limitations of traditional IT management tools, outmoded collaboration practices, and embrace tools that are built to support DevOps practitioners and their interactions. It’s what we call an IT agility application, and it’s what we offer at ITinvolve. You can read more about how it works in this in-depth use case document.

Matt Selheimer
VP, Marketing

Knowledge isn’t free and neither are the donuts

Wednesday, November 20th, 2013

One of my former bosses is a Naval Reservist. Once a month, he goes away for the weekend to do his job for Uncle Sam. In the break room of whatever building he’s working in, there is a sign above a table that says, “Freedom isn’t free. Neither are the donuts.” The idea is that if you take a donut, you’re supposed to drop some money in the jar so tomorrow’s donuts can be purchased.

This got me thinking about other things that have costs associated with them, like experience and knowledge. So, with a hat tip to the Navy, I submit that “Knowledge isn’t free. Neither are the donuts.”

There is a cost to developing and utilizing knowledge. There is the time it takes to develop the knowledge: someone needs to write it down, make the video, and so on. That takes time, which has a cost associated with it. An obvious cost is the knowledge developer’s hourly wage, but what about other costs? What else could that knowledge developer have been doing instead of documenting the knowledge? Is their main job to develop knowledge, or do they have other duties that have to be put on hold while they document?

We’ve gotten used to having much of the world’s knowledge in the palm of our hands via the Internet, Google, and our smartphones. We can search for an answer to just about any question we might have, and the vast majority of the time we can get a reasonable answer. Of course, depending on the question, I may have to spend time weeding through a whole bunch of entries to find it. I may have to ask the question a few different ways, or (heaven forbid) try to remember a little Boolean logic to filter out unwanted results. In this sense, there is a very real cost not just to documenting knowledge but also to accessing it: my time.  What else could I be working on instead of trying to find this information?  How much time am I spending piecing together information, data, or knowledge from other sources to figure out the truth?

Now, let me repeat this exercise, but with my work hat on. I vividly remember, a few years ago, desperately searching for information to help me with a high-priority project I was working on: upgrading the Point of Sale (POS) system. (The project goal was to deploy a newer version with better reporting capabilities for our logistics folks.) In this context, I was no longer going to my phone and googling something; instead, I had to find and then go through tons of information about my company’s implementation and use of the POS system. No longer was everything nicely catalogued and keyworded. No longer was everything easy to find. There were literally hundreds of places I needed to go to find the data: build documents, CMDB entries, a bunch of SharePoint sites, dozens of emails, file shares, several change management requests, some wikis, and much more. The stuff was scattered everywhere, and I had no idea if the three-year-old Visio diagram I found was in any way representative of what was currently humming along in our data center 600 miles away.

So that meant I now had to seek out people: experts who could provide me with additional information or at least validate the information I had found earlier. But first, whom did I need to talk to? The author of the Visio diagram had left the company 18 months prior. The VP of Retail Operations, who signed off on the functional requirements spec, now worked for a competitor. Just trying to figure out which people could maybe get me the information I needed to do my job was a full-time job unto itself. And this wasn’t my only project; I was involved with 15 other projects at the same time. Unfortunately, those other projects (and their value to the business) had to wait, even though people were waiting on me for a number of tasks.

Fast forward a few weeks, and I’ve now got a bunch of information that I’m told, through various meetings, phone conversations, and emails, is current. I have a stack of new hard-copy documents on my desk, a picture of a whiteboard diagram that a DBA spent an hour drawing for me, lots of emails, Word documents, PDFs, you name it. I added that to the information I got from the vendor, information from three of our six CMDBs, and some reports from three different monitoring tools. Now I could finally start working on my tasks for the project.

I probably spent at least 30 hours trying to look for and validate this data and information. As an architect, my fully loaded hourly wage was around $75. So just assembling this information cost my company $2,250 not to mention the time that it cost from everyone I talked to which was easily several multiples of that. Plus, they all had other things to do that I was pulling them away from and there is a cost to that too.

Now the fun began. We didn’t have money for new hardware for this project, so I had to make sure the new version of the POS software would run on what we already had. I went through all the information I had assembled. The build specs, in particular, were hilarious. Each was 60 pages long, with so much boilerplate and filler that it was mind-numbing to read them all. Our CMDBs were filled with a bunch of (for my purposes) useless information that had been populated by our auto-discovery tools, and I quickly started to drown in information. What I really needed to understand were just a couple of configuration settings, to see if they were A) required, B) in need of updating, or C) completely irrelevant to what I was doing.

Aside from the Easter egg hunt I had to go through just to find the information I needed to do my job, I found the information itself was barely relevant. My company made IT people fill out all of this paperwork when they did something or built something, and most of these folks are IT majors, not English majors. The people who came up with the document templates probably had no idea what kind of information would need to be captured, so what you would expect to find in one section is in another, or the information didn’t fit the template and had to be truncated.

At this point, I didn’t have much confidence the upgrade would be successful. There were just too many unknowns or little-knowns, and it wasn’t from a lack of data. I actually had so much that it was hard to find the proverbial needle in the haystack. What’s worse, what I had was mostly bad, old, or incorrect data. I couldn’t trust it, and it didn’t provide the right context for me to use it effectively.

Unfortunately, this situation is typical of many IT shops: the information and knowledge are not well documented, and what is documented is a huge mess. IT information has a lifecycle of its own, and the common methods of documenting, collecting, and accessing it are inadequate. IT information is a living, breathing thing that must be nourished and maintained on an ongoing basis as part of daily work, because no one has time to do it after the fact. And even then, we’re really good at burying it in file shares, spreadsheets, emails, and SharePoint sites, so as soon as the information is created, it’s quickly forgotten and withers and dies.  Then you’re like me in this story, where all you have left is a corpse of information that might have been useful once but probably doesn’t have much value today, because we move so fast that what was true yesterday isn’t true today, now that that OS patch was put in place or the backup schedule was updated.

So what happened with the POS upgrade? Well, even with my lack of confidence, the change request for the upgrade was approved, because it had to be. There was no testing, because we didn’t have a properly configured test environment, but I did ensure we had a full backup just in case something went wrong. We held our breath and rolled it out. Amazingly, it worked out okay, but to this day I feel like we just got lucky. Knowledge isn’t free, and this all too common status quo is no way to run an IT shop when your business depends on it.


IT Advice from Bill Nye “The Science Guy”

Tuesday, November 19th, 2013

[Image: Bill Nye quote]

I recently came across this quote and thought it very apropos to the situation in today’s complex IT organizations. Whether you are talking about server, storage, and network admins, developers, QA teams, security managers, or the many other experts in a typical IT organization, the fact is everyone in IT has specialized knowledge and a unique perspective on what they are responsible for.

It’s all too easy to get caught up in these individual perspectives and miss out on the big picture. But worse than that, because there are so many items associated with delivering a given application or service, and because an individual expert may be unaware of many of them (such as a policy or a unique setting), the best-intentioned actions can produce unexpectedly bad outcomes. Our own experience, and a quick Google search, reveal that it is still all too common for outages to be caused by “human error” rather than an equipment failure or code issue.

George Spafford at Gartner has stated the problem this way: “It is becoming impossible for any person or group [within IT] to completely understand how everything integrates together.” Because we don’t know what we don’t know, we can be lulled into a false sense of security, as the bad outcomes all too clearly illustrate.

In response, a lot of IT organizations have tried to attack the problem with email, meetings, and formalized change processes. This has helped many companies identify and minimize risks to a certain degree, but they have exchanged this benefit for a much slower rate of change and the over-involvement of personnel in change management.

A recently published metric from industry consulting firm Pink Elephant showed that the average time from creation of a change request to its execution was 31 days! Whether the change is in response to a business need or is applying a patch or upgrade to make infrastructure better performing and more resilient, I think we can all agree that a month is far too long. And a month is just the average! I am sure that complex changes with many change items greatly exceed a month in many IT organizations today. We need to do better as an industry – and that goes not just for practitioners but for vendors and consultants too.

Here’s the second part of the problem with many current approaches. Because IT operations teams are rightly concerned about the instability that change represents, they pull far too many people into change planning meetings and change advisory board (CAB) meetings who don’t really need to be there or who could have just as easily provided their input offline. I can’t tell you how many times I have heard a change process owner complain about how they send emails out to change approvers and then have to hunt them down in person to get them to login to the change system and respond. And for their part, those approvers often complain that they get so many emails from the change system they can’t distinguish which are important and just end up ignoring them all.

So this brings me back to Bill Nye and his astute observation that we can learn something from everyone we meet. Let’s accept the fact that each of us doesn’t know everything we need to know to effectively manage today’s complex IT environments, despite the fact that we may indeed be experts in a particular area. It is only by capturing our collective knowledge and making it available to everyone that we can have a complete understanding of dependencies and risks. By using a modern approach like ITinvolve that allows IT knowledge workers to follow what they are responsible for or have an interest in, we can leverage the knowledge of others AND identify exactly who the right experts are and proactively engage them in a virtual collaboration to assess risk.

The result is that risks can be assessed more accurately but also more quickly, and without pulling people unnecessarily off of whatever else they are working on. This assessment can then be provided proactively to the CAB, and CAB members can approve or reject offline from meetings at a time of their convenience. If all CAB members approve, the change doesn’t even need to be discussed in the formal CAB meeting and can move straight to execution. This enables IT to focus CAB meetings on the really important, high-risk changes that haven’t been unanimously approved.
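The triage rule is simple enough to sketch in a few lines of Python (the change records and voting model here are hypothetical): only changes lacking unanimous offline approval make the CAB agenda.

    # Hypothetical change records with offline votes from each CAB member.
    changes = [
        {"id": "CR-101", "votes": {"alice": "approve", "bob": "approve"}},
        {"id": "CR-102", "votes": {"alice": "approve", "bob": "reject"}},
        {"id": "CR-103", "votes": {"alice": "approve"}},  # bob hasn't voted yet
    ]
    cab_members = {"alice", "bob"}

    def needs_cab_meeting(change: dict) -> bool:
        """A change hits the CAB agenda unless every member approved offline."""
        votes = change["votes"]
        return not (set(votes) == cab_members
                    and all(v == "approve" for v in votes.values()))

    agenda = [c["id"] for c in changes if needs_cab_meeting(c)]
    approved = [c["id"] for c in changes if not needs_cab_meeting(c)]
    print("Straight to execution:", approved)  # ['CR-101']
    print("CAB agenda:", agenda)               # ['CR-102', 'CR-103']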

To get there, the first step is simply to recognize and appreciate that you can learn a lot from others by sharing what you know and having everyone do the same. We often hear statements from our customers like “I’ve learned more about our environment using ITinvolve in the last three weeks than in the last five years I’ve worked here.” This is the reality: no matter how much of an expert we are, our knowledge of today’s complex IT landscape is limited. It’s only by working together and sharing what we know that we can deliver on our mission of helping IT become more agile while minimizing risk.

Matt Selheimer
VP of Marketing

Does your IT racecar enable agile business victories or are you stuck in the pits?

Monday, November 4th, 2013

In today’s business world, “fast eats slow.” And the IT department is now the primary driver (or inhibitor) for how fast a business can respond to the market and create new competitive advantages.

Is your business like a Formula 1 racecar, operating at top speed and agilely navigating the twists and turns of the track until you cross the finish line first? Or do you find that your racecar is frequently stuck in the pits for tire changes and other adjustments? Worse, have you had a high-profile blowout that dashed your company’s hopes of victory?

If you’re like most IT operations leaders I’ve spoken with in the last year:

  • Your environments have become more and more (not less) complex
  • You feel like you’re already operating at a frenetic pace
  • You worry that moving faster will lead to even less stable service delivery

If you’re an application development leader, you’ve probably gotten better at developing new releases faster by adopting agile principles or other related methodologies, but you feel that operations simply can’t keep up and is holding you back. And if you’re a CIO, you are probably frustrated that development and operations aren’t working well together and collaborating to deliver on what your business needs.

I’m not going to suggest that you forklift-replace how you develop software or run IT operations. That would be impractical and unreasonable. However, we’ve identified four areas that you can improve individually to help your business become more agile. And if you tackle all four of them, you will deliver a breakthrough in business agility.

These four areas exist at the boundaries between application development and IT operations. By focusing on improving collaboration and knowledge sharing in these four areas, you can foster a new hybrid DevOps function that will create new opportunities for victory in whatever industry you operate.

  1. Requirements Collaboration and Validation – As a project moves from business goals to business requirements, functional requirements, and operational requirements, the number and frequency of handoffs among the business, application development, and IT operations are very large. At each handoff, information can be lost or distorted. There’s a pretty well-known and humorous cartoon about a tire swing that illustrates this problem.

What’s needed is a method to engage the right business, development, and IT operations stakeholders to collaborate on and validate business, functional, and technical requirements. And this method should preserve those collaborations and their output so that others can view and access the information as they engage on the project or support the release once deployed.

  2. Environment Synchronization – Have you ever found out right before a “go live” date that some of the operational requirements have changed? Or have you made changes in production that weren’t effectively communicated back to those responsible for pre-production QA and testing? With the pace IT operates at today, this happens all the time in most IT organizations. The result is that launches are delayed because new requirements can’t be supported (e.g., “I didn’t know we were going to need a UNIX environment for this project”), or what’s tested in pre-production works fine only to fall over when deployed in production.

What’s needed is a method to synchronize configurations and operational requirements across the promote-to-production landscape, as well as the ability to proactively communicate any production changes that are relevant to development and testing efforts; a simple version of such a check is sketched below. Without this, we will continue to ship late and deliver unreliable applications, causing our business goals to go unmet and our business colleagues to remain frustrated by IT.
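As a minimal illustration (the environment names and settings are hypothetical), the synchronization check can start as a simple diff of documented environment configurations, run before every go-live:

    # Hypothetical environment configurations to reconcile before go-live.
    preprod = {"os": "RHEL 6.4", "jvm": "1.7.0_25", "app_server": "JBoss 7.1"}
    prod = {"os": "RHEL 6.2", "jvm": "1.7.0_25", "app_server": "JBoss 7.1",
            "av_agent": "9.3"}  # added in prod, never communicated back

    mismatches = {
        key: (preprod.get(key, "<absent>"), prod.get(key, "<absent>"))
        for key in preprod.keys() | prod.keys()
        if preprod.get(key) != prod.get(key)
    }
    for key, (pre, prd) in sorted(mismatches.items()):
        print(f"SYNC WARNING {key}: pre-prod={pre} prod={prd}")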

  3. Release Collaboration and Deployment – As you move closer to the release date, it’s critical to assess risks and timing. Unfortunately, risk assessments are often time-consuming, so they are either cut short and done only superficially, or they slow everything down, causing the business value to be delayed or the window of opportunity to be missed.

What’s needed is a method to engage business, development, and operations stakeholders to assess deployment risks and timing and to validate plans. And this method must incorporate what’s learned at each stage of the development lifecycle, in a way that is in line with people doing their daily jobs rather than some big administrative overhead effort.

  4. Post-deployment Resolution and Collaboration – Once a new application or service is rolled out, there are likely to be some issues. All too often, we don’t get the right resources engaged to work the issue, and that causes frustration on the part of the business because resolution times stretch out. What’s more, in an effort to get things fixed “quickly,” all-hands-on-deck firefighting pulls people off of new projects, and the same issues are escalated far too many times to senior resources without pushing post-deployment learning down to less senior staff. The hero culture so prevalent in IT comes back to burn the organization, because everyone gets caught up in this unplanned work instead of focusing on moving the business forward.

What’s needed is a method to identify and appropriately escalate issues to development or operations with just-in-time, in-context knowledge for rapid diagnosis, while also capturing experience to quickly solve recurring issues without pulling valuable resources off of new assignments.

A solution like ITinvolve, which uniquely combines knowledge capture, analysis, and social collaboration for IT, enables you to do each of these things. And if you tackle all four areas, you will:

  • Accelerate your business’ response to market opportunities and competitive threats
  • Align business requirements, IT projects, development teams, and IT operations personnel to enable continuous delivery of new application releases
  • Minimize operational and compliance risks from new application releases (including those that have legacy integration requirements)

If you’re ready to change the status quo, we’re ready to help you.

Matt Selheimer
VP, Marketing

A Growing Sense of Urgency in IT

Thursday, October 24th, 2013

Recently, I attended Gartner Symposium / ITxpo, which is the largest gathering of CIOs and IT leaders annually in the US (this year’s attendance was over 12,000). I also attended the annual Fusion event, jointly hosted by the itSMF and HDI, which attracts around 1,500 IT service support and delivery professionals and managers.

At Symposium, there was a strong and palpable sense of urgency that IT must adapt quickly to help their businesses exploit the “digital industrial economy.” This was a key focus not only in the keynote sessions, but also reflected in the individual track sessions – especially those with CIO interviews and end customer panels. It was also a topic that attendees were eager to discuss during conversations in the expo hall.

This energy is coming from the increasing pressure CIOs, VPs of App Dev, VPs of Infrastructure & Operations, and Enterprise Architects are feeling from their business colleagues. But it’s also coming from within, as CIOs and VPs in IT grow increasingly frustrated by their own organizations’ inability to adapt.  Everywhere you heard statements such as “How do we act more like a startup in IT?”  The IT leaders asking this question understand they have to transform how IT works today, or not only they themselves but their businesses could quickly become irrelevant in a rapidly changing economy.

For example, I talked to one CIO of a multi-billion dollar manufacturer who said, “My Application Development team uses one tool to manage requirements, my Project Managers use another tool for project scheduling, and my IT Operations personnel use a myriad of other tools to coordinate changes and day-to-day operations. No one is sharing information effectively with one another, and I can’t even draw a clear line of sight between our business priorities and the work that is going on in IT.”

Another Enterprise Architecture leader from a major financial services company said, “Every team has their own source of knowledge they use to do their job, and the same or similar knowledge is replicated all over the place. All of this means coordinating projects and collaborating across teams is confusing, time-consuming, and causes delays in responding to shifting business requirements.” I even spoke with a VP of Application Development and a VP of Infrastructure & Operations from the same company who said they had been best friends since childhood and spent time together socially; even they couldn’t get their respective teams to collaborate effectively.

I walked away from Symposium feeling like there is a strong sense among the US IT leadership community that ineffective knowledge sharing and collaboration is truly holding them back, and that not only IT Operations, but Application Development, and Enterprise Architecture teams must confront this issue head on in order to help their businesses succeed in the rapidly changing digital economy.

Fast forward to this week and the Fusion conference, which also featured a strong focus in the keynotes on the need to move faster, collaborate, and put more emphasis on the people in IT. Yes, there was also a strong emphasis on processes (this is after all the biggest ITIL-related conference of the year), but there was also a strong sense of the need to adapt to a model with “just enough” process and a greater emphasis on agility and flexibility.

What’s more, in conversations I had in the expo hall, there was a strong recognition of the need to improve collaboration and knowledge sharing in IT service support and operations. There was also a strong recognition that SharePoint isn’t the answer, but many of the attendees I spoke with also expressed a level of frustration that they had tried to convince their senior management of the need, but were shot down. In fact, the person responsible for transitioning new services into production at one of the largest insurers in the US told me that she was just going to wait until something big failed to convince management to spend resources to fix the issue. Incredible to hear, but, unfortunately, true.

Others I spoke with were more positive that they could change things, particularly those who had recently taken over service support and delivery after running another key function in IT such as data center operations or infrastructure engineering. In short, those who have been in service support and delivery for a while were resigned to an inability to affect change, while those newly in those roles were still optimistic.

Reflecting on these two experiences, it is clear to me that there is a common understanding of the need from CIOs down to practitioners that they must improve how IT collaborates, how IT shares knowledge, and, ultimately, how IT gets work done across teams. IT leaders, as evidenced by the conversations I had at Symposium, are actively looking for ideas and recommendations, but the practitioners exemplified by those at Fusion are frustrated that their past efforts have fallen on deaf ears and have in many ways accepted that they will have to “do the best with what I’ve got.”

In many cases, I believe those senior IT leaders may have been right to shoot down recommendations from practitioners, which often seemed like either process for process’s sake or its opposite, an everyone-for-themselves “wild west” scenario.  But we can’t let these communication problems get in the way of addressing the sense of urgency that everyone is feeling. IT leaders must actively work with practitioners to develop recommendations that will foster improved collaboration and knowledge-sharing across functions, and they must also work together to build the business case, justifying the effort by tying it back to how it will help the business adapt and compete in a rapidly changing economy.

The alternative is nothing less than the continued marginalization of centralized IT and the flow of IT dollars out to the business, or, worse, the business literally going out of business because IT can’t move faster. Let me know what you think.

Matt Selheimer
VP, Marketing

Are you operationally compliant?

Thursday, October 3rd, 2013

How ready are we for our next audit? Will the auditors find any surprises and how much effort will be required to remediate them? Do we have a good handle on which policies govern which resources so we can make decisions with this information at hand?

If you are like most IT leaders, these questions are always in the back of your mind and frequently in the front of your mind as well. Businesses are more dependent on IT than ever before, and in every industry and company there are a range of laws, regulations, standards, security procedures, and other internal policies that govern data about customers, company financials, employees, and more.

Growing IT complexity, including use of cloud-based services and 3rd party service providers, combined with a long-standing pattern of scattered and tribal knowledge about IT environments, and the policies which govern them, is making it more and more difficult for IT organizations to ensure effective policy compliance.

It used to be the case that IT could demonstrate compliance by audit, issue identification, and remediation. But those days are gone in a world where IT directly touches the customer and compliance issues directly impact customers, revenue, and reputation.

Manufacturers have discovered that quality and compliance done externally to operational processes, rather than inline, have failed to achieve the desired results. In fact, the rework to remediate compliance issues in manufacturing slowed time to market and increased cost. IT must adopt a similar approach as well. IT needs ongoing policy compliance that is delivered inline as part of daily operations activities rather than measured after the fact.

We believe that harnessing your systems and people knowledge, and combining it with visual analysis and collaboration, is the key to understanding which policies govern your IT resources and enabling operational policy compliance.

According to Gartner, “Few IT organizations have effective education techniques that reinforce the purpose of policies, and help employees become self-sufficient when using policies on a day-to-day basis.” (“IT Policies Checklist and Content Best Practices,” J. Mahoney, A. Rowsell-Jones, H. Colella, 19 June 2013)

With ITinvolve, you can empower your IT teams to ensure operational policy compliance inline with their daily tasks, so IT can move faster while also proactively meeting the compliance expectations of the business. Only ITinvolve brings together knowledge, analysis, and collaboration in one solution to:

  • Easily and clearly understand and visualize the dependencies between policies, IT infrastructure, and applications
  • Identify and proactively engage the right stakeholders to assess risk
  • Integrate security events and other compliance-related data sources to provide unprecedented transparency and visibility

Watch a short video so you can see firsthand how ITinvolve will help you:

  • Avoid compliance issues from IT changes
  • Ensure issue resolution takes policy impacts into account
  • Produce an ongoing, automated audit trail as part of daily operations

Don’t wait for your next audit to find out where you aren’t compliant and how many resources need to be pulled off of critical projects to remediate the issues. Get ahead of the game and ensure you are always compliant and never fear an audit again.

Matt Selheimer
VP, Marketing

Faster, More Effective Issue Resolution

Wednesday, September 25th, 2013

Do your business users expect always on, always performing IT services? Of course they do. That’s the nature of enterprise IT service delivery today.

But with complex IT environments, multi-tiered application dependencies, rapid changes, and strict policies, you likely have your fair share of service degradations, outages, and out-of-compliance issues that make this objective very difficult, even if you are the best-managed IT shop around.

In our experience, there are two critical strategies that can help you address this challenge:

1)   Work to avoid issues in the first place through collaborative change impact analysis (80% of service-impacting issues are typically caused by well-intentioned changes)

2)   Resolve issues faster by reducing the time and effort to identify potential root causes and fix them (80% of the mean time to restore service, or MTRS, is typically spent trying to understand what’s changed in the environment and identifying potential root causes)

A big reason why is that knowledge sharing and facilitated collaboration remain so limited in IT today. For example, Gartner has said: “In talking with Gartner clients who have fast-growing and/or complex environments, we see that it is becoming impossible for any person or group to completely understand how everything integrates together.” (Gartner, “A Two-pronged Strategy for Stabilizing IT Services,” G. Spafford, 27 February 2012)

That’s why we believe harnessing all of your people’s IT knowledge and combining it with visual analysis and collaboration is the key to resolving issues faster and enabling more effective and permanent resolutions to recurring issues. By empowering IT teams to understand a complex world where one person can no longer know it all, ITinvolve is the first to deliver on the promises made by so many other technologies that only focus on processes or automation. Check out this short video to see our approach in action.

Only ITinvolve Knowledge Collaborator brings together knowledge, analysis, and collaboration in one solution to:

  • Easily leverage prior experience
  • Rapidly understand the most likely root causes
  • Identify and proactively engage the right experts
  • Capture resolutions and make them easily accessible for future use

And that means, with ITinvolve, your IT teams will have the information they need at their fingertips to:

  • Resolve complex escalated issues faster
  • Avoid the same issues being escalated over and over
  • Eliminate all-hands-on-deck firefighting
  • Delight business leaders and users with how quickly service is restored

But don’t just take our word for it. Here’s what Don Ringelestein, director of technology for West Aurora School District 129, has to say:

“We needed a way to share knowledge across the organization so it’s more accessible to everyone, in order to provide a more consistent and high quality service experience. We are building a social knowledge system that enables us to tell a story around all of our devices – at the end of the day, ITinvolve saves us money because we don’t spend time anymore calling around, sifting through e-mails and asking about a particular device to get what we need to solve an issue.” You can read more of Don’s story here.

If you’re struggling to meet increasing demands for always on, always performing services, reach out to us. We’d be glad to help.

Matt Selheimer
VP, Marketing

More Infrastructure Changes with Less Risk

Thursday, August 22nd, 2013

Ask any Infrastructure & Operations leader if they’d like to handle more infrastructure changes with less risk to their business and you will get a resounding “Yes!” However, this has been an elusive, and often frustrating, goal for many. In fact, most IT organizations have so locked down their change process in order to avoid risk that the pace of change is little more than a crawl. Yet 80% of business outages are still caused by IT changes. (A CIO of a major airline actually told me recently that, for his company, it’s more like 98% of business outages that are caused by IT changes – ouch.)

Just last week, the New York Times experienced a high profile website and mobile application outage for three hours. At first there was speculation of a cyber attack (they had reported a denial of service attack some months earlier). But, how frustrating it must have been for their spokesperson and management to say the cause was actually — IT maintenance:

“The outage occurred within seconds of a scheduled maintenance update, which we believe was the cause,” Times spokeswoman Eileen Murphy said.

As every I&O leader knows all too well, even when changes are well-intentioned, things break. Our IT environments are becoming more and more complex, and the lines and relationships between this component and that one aren’t as simple as “the knee bone connects to the leg bone.” Often there are multi-degree relationships between components that are hidden from view, and critical knowledge that isn’t documented anywhere but resides only in the heads of experts who may be on vacation, have been promoted, have left, or have been let go long ago.

Without a new approach, Infrastructure & Operations organizations will continue to struggle with the pace of infrastructure changes and will generate frequent, unacceptable service interruptions leaving everyone on the business side with a bitter taste in their mouths.

That’s where ITinvolve comes in, because we have taken a fundamentally different approach that combines knowledge, analysis, visualization, and collaboration in one solution designed for IT — to accelerate changes while reducing risks. Check out this quick video to see it in action.

With ITinvolve, you will:

  • Quickly understand and visualize the impact of IT infrastructure changes
  • Engage all relevant stakeholders to assess the risk of those changes
  • Ensure exactly the right information is delivered to those who need it when they need it

The net result?

  • Faster change execution
  • Minimization of business risk
  • Increased change throughput
  • Reduction in unplanned work from IT changes
  • Improved IT performance, reliability, and security (by adopting patches and upgrades more quickly)
  • Improved change success rate

Just experiencing one of these benefits should be worthy of a conversation with one of our IT collaboration specialists. Contact us to get the discussion going. Certainly, it’s better than the status quo.

Matt Selheimer
VP, Marketing