Archive for the ‘Collaboration’ Category

Keeping Up in an On-Demand World

Thursday, January 29th, 2015

It’s a fact that business user expectations of IT continue to grow in today’s tech-heavy consumer culture. In a world where we can get access to new capabilities and services quickly in our personal lives, it’s no wonder that business leaders are seeking the same continuous delivery of new capabilities in their work lives.

Here are four tips that will help you adjust your culture and tooling for this era of on-demand IT.

Tip 1: Take notice of the level of collaboration between your company’s business unit managers and the IT department

Ask yourself, is either side pleased with the situation at present? I’ve seen companies invest in roles within IT to foster improved collaboration with the business (e.g. what ITIL calls Service Managers or what Gartner and others call Business Relationship Managers). This is a useful investment for IT organizations to make because it gives a focal point to work with the business, someone who can sit in executive meetings to understand what needs they have and problems they are trying to solve. In a lot of companies the CIO still tries to act as the “relationship manager” for every business unit and sometimes also the head of development tries to do so – these approaches just don’t scale effectively.

Tip 2: Do something every quarter to improve communication and collaboration between non-IT managers and the IT department

Standing still in this area means that communication and collaboration are likely eroding. Both the business and IT sides of the house are moving so fast that it takes proactive communication and collaboration to maintain alignment. I hear a lot of CIOs talk about the need for an "open line of communication" with other departments, and that's a good mindset, but it's not enough. We have to move beyond appeals to better communications and the need to align with the business. The question you should be asking is "what are some concrete actions I can take now to improve communication and collaboration between non-IT managers and IT?" One idea is the creation of relationship manager roles as mentioned above. Investing in good-quality IT relationship managers and aligning up front on project scope is critical.

But even with that in place, challenges for communication and collaboration will persist. For example, if you're relying on the relationship manager to translate and explain business needs to those in IT who need to know what the business is trying to achieve, its priorities, and so on, big communication gaps can open up: not everyone who needs to know gets the information, or business needs change so rapidly that people in IT end up working with outdated requirements. What's needed is an ongoing dialog involving not just the business and IT relationship managers, but also project managers, developers, and even those in operations who need to deploy and run the applications.

There’s a lot IT can learn here from enterprise collaboration projects in the business (with products like Jive) and apply that to how IT works with the business. Imagine if the people working on the project in IT could “follow” and collaborate on business requirements with the business like you follow someone on Twitter or have a friend on Facebook. Followers could get updated as things change and engage with the business if there are questions or concerns. Maybe the development manager draws a cut line for the release and the business knows about that in advance and can give feedback on features that need to be added or confirm which others can wait. Perhaps there’s a policy that governs an app but operations isn’t aware of it and is going to deploy it in such a way that they would violate the policy – instead the enterprise governance team can know about it and weigh in before the deployment happens.

Tip 3: Revisit the tools and approaches you use for IT collaboration work today. Be intentional about your go-forward tools strategy

The challenge I see here (a lot) is that IT is still using the same techniques it has always used for collaboration: meetings, emails, conference calls, SharePoint sites, spreadsheets. There is no substitute for meetings and face-to-face interactions, and even conference calls are important. The challenge, however, is how to capture and disseminate that information so those in the meeting can refer back to it while others who weren't in the meeting can still access it. And what about someone new joining the organization? How can they get up to speed faster without having to attend lots and lots of meetings?

IT needs a new way to think about how we capture knowledge and make it available to people in the context of the work they're doing, so they don't have to go hunting for it on SharePoint sites, send out lots of emails, or search knowledge bases, looking, in effect, for the needle in the proverbial haystack.

What we need in IT, and what we have been lacking, are cross-team workspaces: a place to bring together the right people with the right tools and information, defined around the context of the activity that needs to get done, whether that's a development project, an infrastructure upgrade, an incident that needs to be resolved, etc. The workspace should then help the team make the necessary decisions and document the actions that will be taken, while also notifying everyone who needs to know.

Tip 4: Accept that complexity is increasing and that your people, not just automation, are key to managing it

IT environment complexity is a major issue for many companies because their systems have now been linked together so that the user community can move from one system to the next easily and so that data is quickly passed between systems. So now when change comes in it can affect how multiple systems work together. As IT practitioners, we’ve been working so hard to support the business all these years and we now have a collection of lots of legacy stuff and new technologies and it’s all been woven together in a way to help the business as fast as possible.

There’s a lot we’d change if we could go back and do things over, but that’s just not practical, and so for the most part we need to work with the environments we have. The challenge is how do you understand all these integrations, relationships and dependencies, all the tribal knowledge that’s been built up in the IT organization over the years?

There have been several approaches to address this like Configuration Management Databases (CMDBs) and discovery tools, and they help, but they raise their own issues. First, there’s only so much that discovery tools can discover off the wire. They do a decent job of telling you how things are configured and relationships between them but they still miss a lot because they have to be programmed to find “patterns” and there’s no way they can discover things like policies and how those govern your assets.

The other big challenge for discovery tools is that they don't capture intent, i.e. why things are the way they are. That's tribal knowledge that lives in your people's heads. Someone at some point knew why SAP was configured that way or why a certain port was opened on that server or switch. The problem is that tribal knowledge isn't well documented; it gets lost as people forget it or leave.

The complexity problem is really a tribal knowledge problem. What we need is a living, breathing CMDB; think of it as a "social CMDB" that leverages discovery tools but then uses crowd-sourcing and peer review, like Wikipedia, to validate what's been discovered and fill in gaps on a continuous basis. Until we have this, IT is going to resist the pace of change the business wants, because we'll be concerned something might break that we weren't expecting.

This is another area where you can apply the cross-team workspace concept: not only capturing the tribal knowledge and continually validating the CMDB, but also pushing that information forward in the context of planning a change or resolving an incident. If people are following the things in the IT environment they care about, then when it comes time to work on a change, the right people can be brought together in a shared workspace (instead of guessing who to involve, as in traditional change process management) and armed with the right information and tools to provide their risk assessment. That way, when the change board reviews the planned change, they know who has been involved and what information they had access to, and they can feel a lot more confident in their decision and approve the change faster to keep the business moving forward.

In summary

The fundamental business-IT challenge in a lot of companies is that the business is simply frustrated with the pace at which IT moves. Fostering good relations with business counterparts and investing in relationship managers as mentioned above is a good start. But having the business engaged in a shared workspace for projects they care about, giving them more transparency into the project and decisions being made about cut lines for releases or the like, will give them a greater sense of ownership and appreciation for the work we do in IT and how it’s not just ‘there’s an app for that’ in an on-demand world.

Matt Selheimer
Chief Technical Evangelist and SVP Marketing

Originally published at The ITSM Review

Collaboration built for IT

Friday, January 9th, 2015

Let’s be real. Nobody wants more email, conference calls, spreadsheets, document-sharing sites, outdated Visio diagrams, or knowledge bases to search.

IT teams struggle to effectively collaborate because they don’t have a shared workspace to connect them with each other and the information they need.

That’s why we created the first application built for IT that engages the right people with the right information and tools to get IT work done more efficiently and effectively.

 

[Infographic: ITinvolve Workspaces]

 

It starts with Activities. ITinvolve creates cross-team, in-context workspaces for Projects (like software development, application deployments, and infrastructure upgrades); Process activities (like incidents, requests, and changes); What-if Scenarios (like impact analysis and data center moves); and Environment Analyses (quickly answering questions such as "where are my single points of failure?" and "which applications does a policy govern?").

Next, you need to get the right People involved. ITinvolve automatically identifies all relevant experts and stakeholders, engages them in the workspace, and keeps them up to date on the activities they are participating in.

Then, you need to display relevant Information and Analysis. ITinvolve ensures the right information from disparate tools and sources is available to everyone using the workspace. We federate data with third-party information sources, provide direct integration to access additional details as needed, and eliminate time wasted looking for the proverbial needle in the haystack.

Finally, you need a way to Facilitate making decisions and to document planned actions. ITinvolve guides teams through their decisions and self-documents collaborations in the workspace. For example, you can assign and track team tasks and milestones; proactively engage experts and stakeholders to assess risks, validate requirements, and conduct root-cause analysis; you can even post and collaborate around automation tool actions.

It’s time for a new way of collaborating in IT. Improve your organization’s agility with Cross-Team Workspaces.

 

A look back and a look forward

Monday, December 29th, 2014

‘tis that time of year to look back on all that’s been accomplished and look forward to what’s in store for the next year.

As we reflect on 2014, there are quite a few achievements we are proud of. On the recognition front, ITinvolve was selected as the best-in-class solution for change, configuration, and release management in an independent review by The ITSM Review. We were also recognized as an emerging vendor by Computer Reseller News and were short-listed as the “Most Promising Start Up” by The Cloud Awards. Not bad for a company that was founded just a few short years ago in 2011, and building upon our recognition as a “Cool Vendor” by Gartner and another best-in-class award for Knowledge Management in 2013.

2014 was also a great year for product innovation at ITinvolve. In April, we announced new offerings to improve DevOps and IT project collaboration with ITinvolve Agility Manager and to increase understanding of and take actions to resolve configuration drift with ITinvolve Drift Manager. Along with ITinvolve Workspaces and ITinvolve Service Manager, we offer the most robust set of enterprise-class software built on the market-leading PaaS Salesforce1 – a fact that we are very proud of.

This year also saw the IT management software marketplace evolve as organizations expanded pilot DevOps efforts and came to realize that SharePoint sites, spreadsheets, and emails just don’t cut it for enabling enterprise DevOps collaboration. In fact, at the end of November, Gartner called out ITinvolve and only two other solution providers as the only companies enabling the necessary “collaborative workspace to support I&O objectives.” And more broadly, the market category of Social IT, which we believe should be recast as Collaborative IT, has moved out of the “innovation trigger” phase (i.e. early adopters) and into the “peak of inflated expectations” (i.e. early mainstream) according to Gartner’s 2014 Hype Cycle for IT Operations Management.

So as we bid adieu to 2014, there is much to be proud of yet we are even more excited by what’s to come in 2015. If you are ready for a game-changing approach to managing IT that puts people at the center, then reach out and engage with us – we’re always interested in talking to transformational IT leaders.

Matt Selheimer
Chief Technical Evangelist and SVP Marketing

Knowledge is Power:

Monday, June 16th, 2014

A breakthrough approach for Change, Config, and Release

Today, we were thrilled to find out that ITinvolve has been awarded the best-in-class designation for Change, Configuration, and Release Management by the independent ITSM Review.

This award acknowledges what our customers already know and reflects the vision, innovation, and effort that ITinvolve’s R&D organization has put into building a breakthrough solution for a rather stagnant market. The award is the result of a months-long evaluation that included extended demonstrations and deep-dive Q&A with an independent expert, and we are simply glowing at what they had to say:

“ITinvolve has taken huge strides in the ITSM arena with Service Manager by embracing the adage “knowledge is power”.  We feel that the developments that ITinvolve Service Manager has made with the fundamentals of knowledge and collaboration, ensuring that all relevant information is available to the right people at the right time (and in a straightforward way), enables risk assessment capabilities that far outweigh those of other ITSM solutions. This provides increased value to its Change, Configuration and Release capabilities.”

“The way that these capabilities support and mold Change, Configuration and Release creates a product that gives control, intelligence and awareness back to the IT organisation.”

“ITinvolve Service Manager is a progressive and ambitious product. Uniquely combining knowledge capture, analysis, and social collaboration, Service Manager proactively delivers timely and relevant information whenever needed.”  

“…regardless of the size of your organisation, we strongly believe that you can’t go wrong with considering ITinvolve Service Manager as your ITSM tool for Change, Configuration and Release.”

Learn more about ITinvolve Service Manager and sign up for a free trial.

Matthew Selheimer
SVP, Marketing

 

Improving Configuration Management: Getting Control Over Drift

Monday, April 28th, 2014

Configuration drift poses a number of challenges to your IT organization and your business: for example, the risk of non-compliance with security policies, performance and availability issues, and failed deployments of new application releases.

To address drift, most IT organizations have now employed some combination of scripts (or automation tools) and a configuration management database (CMDB), and have defined a software configuration management approval process. Despite these efforts, we find that configuration drift still occurs frequently in large enterprises.

Why is this the case?

First, if you are like most IT organizations, you probably follow the 80/20 rule, with your administrators focusing 80% of their time on the configuration elements they consider most important to their roles; that leaves quite a gap where drift can still occur. What’s more, if you are using scripts and automation tools to enforce configurations, keep in mind that these approaches rely on an explicit formula: you have to specify exactly which configuration settings to enforce and when. That leaves things wide open for settings you haven’t gotten around to specifying to be changed, and for additional software to be installed that might cause problems.

For example, let’s say your security policy states that a certain range of TCP/IP ports should not be open on a certain class of servers. You might enforce this policy with an automation script that routinely verifies port status and closes any ports in the range that may have been opened through some other means. Sounds like you’ve got things covered, right? Well, what if a port in that range was opened as part of a change process to deploy a new application to one of those servers, and those working on the project knew nothing about the enforcement script? They deploy the new application, test that all is working well, and send out the email letting the user community know the new application has been launched: a great day for IT enabling the business! Then, overnight (or the next time the script is scheduled to run), the port is closed. Users come into work the next day unable to access the new application, calls start coming into your service desk, and an all-hands-on-deck meeting is hastily assembled. After some period of time, the port closure is identified as the issue and the port is reopened, only to be closed again the next time the script runs, until finally someone realizes the script is the underlying cause (probably because the person who wrote it is no longer there and didn’t document it other than via a notation in an audit report that a script was the chosen enforcement mechanism).
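The enforcement script in this story can be sketched in a few lines. This is a hypothetical illustration, not any particular product’s script: the port numbers, the names, and the idea of a documented-exceptions list are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the port-enforcement script described above.
# A real script would query the firewall or OS rather than take a list.

BLOCKED_RANGE = range(9000, 9100)  # ports the security policy says must stay closed
DOCUMENTED_EXCEPTIONS = set()      # exceptions approved via change management

def ports_to_close(open_ports):
    """Return the open ports the script would close on its next run."""
    return [p for p in open_ports
            if p in BLOCKED_RANGE and p not in DOCUMENTED_EXCEPTIONS]

# The deployment team opens port 9042 for the new application, unaware
# of this script. On its next run, the script silently undoes their work:
print(ports_to_close([22, 443, 9042]))   # -> [9042]

# Recording the exception as part of the change process prevents the outage:
DOCUMENTED_EXCEPTIONS.add(9042)
print(ports_to_close([22, 443, 9042]))   # -> []
```

The point of the sketch is that the script is only as safe as the shared knowledge around it: the same logic that protects the environment causes the outage when the exception is never documented.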

Consider another example, where an application has very low utilization most days except for spikes of activity at the end of each month (such as an application that accepts orders from a dealer network). Let’s say an engineer looking for available equipment to host a new application identifies the server running the dealer order system as a good candidate because of its strong specs and low average utilization. He installs the new app, and everything works great until the end of the month, when the dealer network comes alive with hundreds of orders every hour. Now, because two applications are vying for the same physical components, we start to see performance issues and scramble to move the new application to other hardware, taking it offline in the process and putting it on an available server with lesser specs, causing it to run slower than before and irritating the user community even further. In this scenario, your automation scripts would have done nothing to prevent this drift from the expected configuration (i.e. the dealer order system being the only application running on this box), because they would have no awareness that the new application even existed. What’s more, automation could actually have made things worse if you had employed a strategy of periodically wiping and rebuilding your machines (these are referred to as “phoenix servers,” another strategy some have tried to reduce drift), because your new app would have been erased from your data center entirely at the next rebuild.

So how can you get control over drift and avoid these sorts of issues?

First, the scripts and automations you have running need to be documented, including what they do, when they run, and who is responsible for them. With this information, you can make people proactively aware of any script and configuration conflicts as part of your change and release management process. This will help you avoid the first example, where the TCP/IP port was unexpectedly closed, because your team is aware of and can account for the fact that there needs to be an exception to your TCP/IP port range, not only updating the script to reflect this but also documenting the exception proactively for your auditors.
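As a sketch of what “documented” might mean in practice, consider recording each automation’s what, when, and who in a simple registry that change reviewers can query. The schema and the example entry below are illustrative assumptions, not a product format.

```python
# Hypothetical registry of the minimal metadata worth capturing for each
# automation, so conflicts can be surfaced during change review.

automation_registry = [
    {
        "name": "close_blocked_ports",
        "what": "Closes any open TCP port in 9000-9099 on app-tier servers",
        "schedule": "nightly at 02:00",
        "owner": "network security team",
        "affects": ["app-tier servers", "TCP ports 9000-9099"],
    },
]

def automations_affecting(resource):
    """List registered automations that touch a given resource, e.g. when
    reviewing a change that opens a port on the app tier."""
    return [a["name"] for a in automation_registry
            if resource in a["affects"]]

print(automations_affecting("app-tier servers"))  # -> ['close_blocked_ports']
```

A lookup like this during change review is what turns “someone wrote a script once” into a proactive warning before the deployment, rather than a post-outage discovery.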

Second, with accurate documentation about how your environment and key applications are configured, you can better understand why that dealer order system was running on equipment all by itself (because the tribal knowledge about the end of month peak loads was documented), and you can then also compare the current state against the expected state to identify drift issues and take action to address them as appropriate.  For example, you might trigger an incident and assign ownership to the relevant administrators who own the automations for that equipment and/or applications.
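The current-versus-expected comparison described here can be sketched as a simple diff between two documented states. The configuration items and values below are hypothetical examples, not a real schema.

```python
# Minimal drift check: compare a documented "expected" configuration
# against the currently observed state and report what is present but
# not expected. Item names and values are illustrative only.

expected = {"installed_apps": {"dealer-order-system"},
            "open_ports": {443}}

current = {"installed_apps": {"dealer-order-system", "new-analytics-app"},
           "open_ports": {443, 9042}}

def detect_drift(expected, current):
    """Return, per configuration item, anything present but not expected."""
    return {key: current[key] - expected[key]
            for key in expected
            if current[key] - expected[key]}

drift = detect_drift(expected, current)
# Each finding could then trigger an incident assigned to the owning admins.
print(drift)  # -> {'installed_apps': {'new-analytics-app'}, 'open_ports': {9042}}
```

In the dealer-order-system example, this kind of check is what would have flagged the newly installed application as drift before the end-of-month spike, instead of discovering it during the outage.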

ITinvolve’s Drift Manager can help you implement both capabilities and more.  Drift Manager helps you document scripts and automations as well as “gold standard” configuration settings leveraging information you already have (via importing or federation) while also capturing the undocumented tribal knowledge and validating it through social collaboration methods and peer review.  Drift Manager also helps you compare the current vs. expected state in real-time and then facilitates raising drift incidents when required.  What’s more, ITinvolve helps you “broadcast” upcoming configuration changes so all relevant experts are included in your configuration management process and can fully assess the risk and implications to avoid the kinds of issues discussed above.  Finally, it ensures your teams are aware of the policies that govern your resources so that, as configuration changes are being considered, the potential policy impacts are considered at the same time.

No matter your approach, configuration drift will happen.  The question is, do you know about it when it happens and can you get the right experts quickly engaged to address it without causing other issues?

Matt Selheimer
SVP, Marketing

Merging Creation With Operations

Monday, April 21st, 2014

How facilitated collaboration enables continuous delivery for successful DevOps

by John Balena

So much has been written lately about the challenge of improving IT agility in the enterprise. The best sources of insight on why this challenge is so difficult are the CIOs, application owners, ecommerce and release engineering executives, and VPs of I&O who are grappling with changing their organizations right now.

At a conference I attended recently, I met two Fortune 100 IT executives from the same company: one the head of development, the other the head of operations. Their story is emblematic of just how hard this is in the real world. As interesting background, the two leaders were childhood best friends, participated in each other’s weddings, and spend time together socially on an almost weekly basis, but by their own admission, even they couldn’t get effective collaboration and communication working between their two organizations.

The lesson from this example is that the DevOps collaboration and communication challenge cannot be solved by sheer will, desire, or executive fiat. Instead, you must break down the barriers that inhibit collaborative behavior and facilitate new ways of communicating and working together. The old standbys of email, instant messaging, SharePoint sites, and conference calls don’t cut it.

The challenge of two opposing forces: Dev and Ops

Imagine yourself helping your children put together a new jigsaw puzzle. Each time you turn your attention to a specific piece, the kids reorganize what you have already completed and they add new pieces, but in the wrong places. For sure, three pairs of hands can be better than one, but they can also create chaos, confusion and significantly elongate the completion of the puzzle.

The collaboration challenge in the DevOps movement is grounded in this analogy. How do you get multiple people working together across teams, locations, and time zones to build and get things deployed faster without chaos, confusion, and delay? How do you get these teams to speak the same language and collaborate together with a singular purpose when their priorities and motivations are so different?

Faced with this challenge, it’s easy to see why many organizations have stayed in their comfort zone of ‘waterfall’ releases and kept the number of releases per year small. The issue is that this method isn’t meeting the demands of the business, the market, and the competition. As a result, more and more business leaders are going around their IT organizations. Options like public cloud, SaaS, open source tools, skunk-works IT, and outsourcing are making it easier for them to control IT decisions and implementations within the business unit or department itself.

So let’s dive deeper to understand the two forces at the heart of the issue: development (focused on the creation or modification of applications to support a business need) and operations (delivering defined services with stability and quality). It appears these forces are working in opposition, but both groups are focused on doing what leadership asks of them.

Developers tend to think their job is done once the application is rapidly created, but it’s not: no actual value has been delivered to the business until the application is operational and in production. Operations, meanwhile, is severely disciplined when services experience performance and availability issues, and has come to learn that uncontrolled change is the biggest cause of these issues. As a result, operations teams often believe their number one job is to minimize change in order to better control impact on performance and availability. This makes operations a barrier to the rapid change required to give the business the speed and agility it needs.

Critical to enabling DevOps is an explicit recognition of this situation and the ability to link the discrete phases of the application development and operations lifecycle into a fast, continuous flow: from defining requirements, to architecting the design, to building the functionality, to testing the application, to deploying it to both pre-production and production environments, to managing all the underlying infrastructure change required for the application to operate efficiently and effectively in all environments.

Why current approaches don’t work

There are several challenges in achieving this ideal.

  1. Developers hate to document (can you blame them?), and, when they do, their communication is in a context they understand, not necessarily in the language that operations speaks. The view from operations is that the documentation they receive is incomplete, confusing, and/or misleading. With the rapid pace of development, this challenge is getting worse, with documentation becoming more and more transient as developers “reconfigure the puzzle” on the fly.
  2. Today’s operations teams typically take responsibility for production environments and their stability. That means there is usually a group wedged in between the two: the quality assurance (QA) team. QA’s job is to validate that the application works as expected, and they often require multiple environments for each release. This group is typically juggling multiple releases and is, in essence, working on and reconfiguring multiple puzzles at the same time. The challenge of keeping QA environments in sync with both in-process releases and production can be maddening (just talk to any QA leader and they’ll tell you first-hand). The documentation coming from development is inadequate, and the documentation coming from production is often no better, since most operations teams store much of their most current configuration information in personal files or simply in their heads.
  3. The ad hoc handoffs from development to operations and QA take time away from development’s primary mission: creating applications that advance the business. Some suggest developers should operate and support what they code in order to reduce handoffs and the risk of information distortion or loss. A fundamental risk with this approach is opportunity cost. Does a developer really understand the latest infrastructure technology and how to flex and scale it to fit the organization’s needs? Do you even want them to, or would you rather they be coding instead?
  4. Others have suggested that operations move upstream and own all environments from dev to QA to production, treating configuration and deployment scripts as code just as a developer would. This may sound like a good option, but it can create a constraint on your operations team and cause valuable intelligence to become hidden in scripts. A particular application deployment could require one or more software packages and potentially hundreds of different configuration settings. If all that information is embedded in a script, how will other team members know about it when they go to change the underlying infrastructure to apply a security patch, upgrade an OS version, or make any of the other changes made in IT every day?
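One way to avoid burying that intelligence in script logic is to keep the packages and settings as declarative data that any team member can inspect. This is a sketch of the design choice, not a recommendation of a specific tool, and every name below is illustrative only.

```python
# Instead of "install pkg-a; set heap=2g; open port 8443" buried inside
# shell logic, the same intent expressed as data everyone can read:

deployment_config = {
    "packages": ["pkg-a", "pkg-b"],
    "settings": {"heap_size": "2g", "tls_port": 8443},
}

def describe(config):
    """Summarize what this deployment depends on, so anyone planning an
    OS patch or infrastructure change can see it at a glance."""
    return (f"{len(config['packages'])} packages, "
            f"{len(config['settings'])} settings")

print(describe(deployment_config))  # -> 2 packages, 2 settings
```

A deployment script can still consume this data, but the dependencies are no longer hidden from the teams making other changes to the same infrastructure.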

Real DevOps transformation doesn’t mean that you give everyone new jobs, instead, it’s about creating an environment where teams can collaborate together with a common language and where information is immediately available at the point of execution and in a context unique to each team.

A better way forward?

In The Phoenix Project, DevOps thought leaders Kevin Behr, Gene Kim, and George Spafford promote the need to optimize the entire end-to-end flow of the discrete processes required for application delivery, applying the same principles manufacturers used to achieve agility in discrete manufacturing.

Manufacturing in the 1980s resembled IT operations today, employing rigid silos of people and automation for efficiency and low cost, but this became a huge barrier to the agility, stability and quality the market demanded. They learned if you optimize each workstation in a plant, you don’t optimize for the end-to-end process. They also learned that if quality and compliance processes were ancillary to the manufacturing process, it slowed things down, drove up costs and actually decreased quality and compliance.

Successful manufacturers brought a broader view and optimized end-to-end flow rather than operate in a particular silo. They also brought quality and compliance processes inline with the manufacturing process. By addressing quality and compliance early in the cycle and at the moment that an issue occurred, cycle times decreased significantly, costs plummeted and quality and compliance increased dramatically.

These same principles can be applied to IT resulting in:

  • faster time to market;
  • greater ability to react to competitive pressures;
  • deployments with fewer errors;
  • continuous compliance with policies; and
  • improved productivity.

DevOps is best realized when IT operates in a social, collaborative environment that ensures all groups work with a visual model in their own context, with the necessary information from upstream and downstream teams, and can collaborate with relevant experts at the moment clarifications are needed or issues arise.

Merging creation with operation, the core idea behind DevOps, requires cultural change and new methods in which cross-functional teams are in a state of continuous collaboration, delivering their piece of the puzzle at the right time, in the right way, and in context with the teams in other silos. Operating something that never existed before requires documentation, so that operations teams have the information they need to manage change with stability and quality.

With modern collaboration methods, self-documenting capabilities are now possible as development, release and operations teams do their respective jobs – including visualization of that documentation, with analytics, in the perspective and context each team needs to do its job effectively downstream. These capabilities can transform organizational culture and break down the barriers to collaboration that impede agility, stability and quality.

Is this simply nirvana, unachievable in the real world? No. Manufacturing achieved these same results by applying these principles, which is the fundamental point of The Phoenix Project.

The goal is not simply to write code, keep infrastructure up and running, release new applications, or maintain quality and compliance. Instead, the goal is for IT to take its discrete silos of people, tools, information and automation and create a continuous delivery environment through facilitated collaboration and communication. That will drive the cultural and operational transformation necessary for IT to respond to business needs with agility while ensuring operational stability and quality.

John Balena is senior vice president of worldwide sales and services at Houston-based ITinvolve. He formerly served as the line of business leader for the DevOps Line of Business at one of the “Big 4” IT management software vendors.

 

 

Create Your Own DevOps Movement

Monday, March 17th, 2014

Harnessing the power of collaboration to enable a DevOps-driven IT organization.
by Cass Bishop

I love tech tools. During my career I have worked for and consulted with many companies, and every time I begin a project I immediately look for tools or frameworks to help me complete things faster. For a guy obsessed with new tech tools, now is a great time to be in IT. Git, JIRA, Jenkins, Selenium, Puppet, Chef, Bladelogic, uDeploy, Docker, and Wily (just to name a few great tools) are providing IT with a big-box hardware store full of tools designed to help solve technical problems. These tools are variously pitched, sold, praised and cursed during DevOps initiatives, primarily because they are good enough for most needs but still leave some critical gaps.

With such a list, you can try to check off all the items in one of those “X things you need for DevOps” blogs published almost daily. “Continuous integration…check. Automated testing…check. Continuous delivery…check. Automated configuration management…check. Application monitoring…check. So now can I say DevOps…check?” You probably can’t check that box, and I would argue you never will with the above list of tools, because, unless your IT department fits in one room and goes for beers together every Thursday, you are missing the most important concept of DevOps: the need for continuous collaboration about your applications in all of their states, from development to retirement.

Most organizations I have worked with aren’t even close to this level of collaboration across development and operations. Their teams are often dispersed across the globe, working in different chains of command with different goals. How does a developer in Singapore collaborate with an operations team in Atlanta? Shouldn’t the incredible number of tools in our arsenal be enough to fix this? “We’ll give the operations team accounts in JIRA, Jenkins and Selenium, then give the developer access to Puppet, Wily, Splunk and the production VMs. They can send each other links and paths to information in each of the different tools, and they can collaborate in email, IM, conference calls, and a SharePoint site.” Sounds OK until you realize that each of those email threads or chats, filled with useful information, gets buried in employees’ Outlook folders or chat logs. And when was the last time you heard someone ask to attend yet another conference call or use yet another SharePoint site?

“Maybe we should have them save the chat logs and email threads in the Operations Wiki, the Development Confluence site, or that new SharePoint site?” With these kinds of approaches, you can find the threads based on string-based searches, but anyone reading them has no context about how all of the data points in the discussion relate to actual applications, servers or any other IT asset. In addition to the lack of context, your IT personnel now spend their days hunting for a needle in an ever-growing haystack of data generated by those amazing tools.

What if, as the now-familiar adage goes, there was an app for that? An application designed to bring all this disconnected data together, make sense of it, display it visually, and build social collaboration in.

With this kind of application, when your middleware admin needs to discuss a problem with the UAT messaging engine, she can now do so in context with the other experts in your organization. Her conversation is saved and directly related to the messaging engine. If the conversation leads to fixing an issue, the lesson learned can be turned into a knowledge entry specific to messaging engines. Now any IT employee can quickly find this knowledge and see who contributed to it the next time there is a messaging engine issue.

When developers want to collaborate with sys admins about higher memory requirements for their application due to a new feature, they can pull them into a discussion in the feature’s individual activity stream. The admins are alerted on their mobile devices that they have been added to the conversation; they contribute to the activity stream and can even add other participants, like the operations manager, who can weigh in on the need for devoting more memory to the correct VMs.

No tool or application can drop a DevOps culture into your organization; that must come from within. But there are now applications available that provide the data federation, visualization, and contextual collaboration capabilities necessary to help enable cultural change, so you can create your own DevOps movement in your organization.

 


Cass Bishop is Director of Consulting Services at Houston-based ITinvolve (www.itinvolve.com). He has been a middleware, automation, and DevOps practitioner for nearly twenty years and has worked on projects in some of the largest IT organizations in the US.

 

What’s Your Discipline?

Friday, March 14th, 2014

For those of us working in IT, the list of disciplines that have been defined over the years to manage an IT organization and the functions it’s responsible for is pretty impressive.

In response to this, some vendors have built different applications for every discipline, which creates or reinforces silos in processes and organizational entities. At ITinvolve, we’re interested in a different approach: one grounded in information and knowledge sharing, that reinforces collaboration across silos and arms IT knowledge workers with the information and analysis they need to do their jobs more effectively, without having to go hunting for information across applications.

Our approach takes the elements you manage in IT every day (requirements, releases, servers, network devices, firewalls, policies, and much more) and puts the information you need in the context of those elements and the relationships that bind them together.

In this way, the information you need can be accessed and managed from multiple perspectives. For example:

  • Show me all applications that are governed by our PCI policy
  • Which requirements made the cutline for the next release?
  • If I take this server offline to upgrade the OS, what will be impacted?
  • Who is responsible for this middleware component?
  • Let me see how many instances of that database version we have running in production
  • What’s in our service catalog?
  • Is there a known workaround for this issue?
  • What parameters are in this automation?
  • What are the fragile settings for this application?

This is a small sample of the powerful types of questions you can ask, with the answers freely available at your fingertips in ITinvolve.
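One way to picture the model behind questions like these is a graph of typed elements and relationships that can be queried from multiple perspectives. Here is a minimal sketch with invented element names (this is an illustration of the idea, not ITinvolve’s implementation):

```python
# A tiny in-memory graph of IT elements and their relationships
# (all names are hypothetical).
elements = {
    "PCI Policy":  {"type": "policy"},
    "Billing App": {"type": "application"},
    "Portal App":  {"type": "application"},
    "db-prod-01":  {"type": "server"},
}

relationships = [
    ("PCI Policy",  "governs", "Billing App"),
    ("Billing App", "runs_on", "db-prod-01"),
    ("Portal App",  "runs_on", "db-prod-01"),
]

def related(subject, verb):
    """All elements related to `subject` by `verb` (e.g. governed applications)."""
    return [o for s, v, o in relationships if s == subject and v == verb]

def impacted_by(server):
    """What is impacted if this server is taken offline for an OS upgrade?"""
    return [s for s, v, o in relationships if v == "runs_on" and o == server]

print(related("PCI Policy", "governs"))  # applications governed by the PCI policy
print(impacted_by("db-prod-01"))         # applications hit by taking the server down
```

The same small set of relationships answers both a compliance question and an impact question; each team simply queries the graph from its own perspective.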

While ITinvolve doesn’t cover every discipline in IT, we do support quite a few of the disciplines you will find in any good-sized IT organization.

Check them out:
Disciplines We Support

Matt Selheimer
VP, Marketing

Agility With Stability

Friday, February 28th, 2014

Earlier this week, I attended Gartner’s CIO Leadership Forum in Phoenix, Arizona. This event drew 600 CIOs from the US and Latin America as well as a few from “across the pond.” Last week, I attended CIOsynergy Atlanta which drew more than 150 CIOs, CTOs, and VPs of IT from across the Southeast US. At both events there was a strong desire and great interest in how IT organizations can achieve greater agility while ensuring the stability their businesses also demand.

The challenge of agility with stability expressed itself in different ways depending upon the industry and culture of the IT organization. For example, in Atlanta, I spoke with the head of mobility for a major US department store who is focused on enabling greater agility in the consumer mobile experience but is challenged by the integrations required with legacy systems and by PCI requirements. Another IT leader in Atlanta, working at a major hotel chain, said he felt like he had the PCI challenge under control but struggled to avoid unforeseen impacts from IT changes. One of the CIO panelists, who heads up IT for a multi-billion dollar heavy manufacturer, described her agility challenge in this way, “We need to do a much better job of documenting the spaghetti we’ve built up in IT; we need a living picture of all the relationships that tie our systems together.” It was this lack of documentation and understanding of dependencies that she felt was the critical challenge holding her back in transforming her IT organization to be more agile.

In Phoenix at the Gartner CIO Forum, I spoke with the CIO of a large regional university. He said that he had a very entrenched culture in his IT organization and was going to follow Gartner’s recommendation for “bi-modal” IT and set up a separate team chartered with driving agile development projects while ensuring the existing operations team knew how their day-to-day work in “running the business” was equally critical to the university. I also spoke with the CIO of a major electronics manufacturer. She had grown up within the IT organization and knew first-hand how entrenched behaviors and tribal knowledge were major risks to her evolving to a more agile organization.  The CIO of a major international financial services company put it this way, “I have 3,000 batch jobs and do not know exactly what they do, what applications they support and connect to, and who is responsible for them.”

I could go on with more examples, but this is a pretty good microcosm of the challenges facing the CIO today when trying to deliver greater agility while ensuring operational stability. What I take away from both events and the dozens of conversations I had is that today’s enterprise CIOs know they need to be more agile but are genuinely concerned about how that will disrupt service delivery. It seems to be a no-win situation – if you don’t move faster, IT is a bottleneck; and if you do move faster and break things, IT is unreliable. What’s a CIO to do?

At ITinvolve, we’ve been working on this problem for nearly three years now. Actually, these challenges aren’t really brand new and we’ve been thinking about them since before the company was founded. That’s what led us to create a new IT management software company – a company dedicated to getting to the heart of the matter and solving this challenge. We believe today’s CIO needs to provide their organization with an application that brings together and proactively shares the collective knowledge within the IT organization (both systems-based as well as tribal), offers robust and highly-visual analysis of upstream and downstream impacts (not constrained by hierarchical dependency maps), and facilitates collaboration among the relevant IT experts and business stakeholders.

With such an application, IT organizations can be more agile while avoiding unexpected outcomes that disrupt the stability and performance of services to the business. Most CIOs don’t think this is possible and are genuinely grappling with how to deliver the seemingly paradoxical agility with stability that the business demands. That is, until they meet ITinvolve and see how it’s possible to move faster, be more nimble, and still deliver reliable services to the business.

The secret, if there is one, is People Powered IT, and only ITinvolve has it. See how it works for yourself.

Matt Selheimer
VP, Marketing

5 Problems With the IT Industrial Revolution

Wednesday, January 22nd, 2014

Over the last several years there’s been lots of talk about the need for an ‘industrial revolution’ in IT. We’re actually pretty big fans of the metaphor here at ITinvolve.

I think it’s well accepted that IT needs to improve both its speed of service delivery and quality. These are classic benefits from any industrialization effort, and they both create ripple-effect benefits in other areas too (e.g. ability to improve customer service, increased competitiveness).

But despite all the talk and recommendations (e.g. adopt automation tools, get on board with DevOps), there are five common problems that stand in the way of the IT industrialization movement. A recent Forrester Consulting study commissioned by Chef gives us some very useful, empirical data to call these problems out for action.

#1 – First-time change success rates aren’t where they need to be. 40% of Fortune 1000 IT leaders say they have first-time change success rates below 80% or simply don’t know, and another 37% say their success rates are somewhere between 80% and 95%. You can’t move fast if you aren’t able to get it right the first time; it not only slows you down to troubleshoot and redo, it hurts your other goal of improving quality.

#2 – Infrastructure Change Frequency is still far too slow. 69% of Fortune 1000 IT leaders say it takes them more than a week to make infrastructure changes. With all the talk and adoption of cloud infrastructure-as-a-service, these numbers are just staggering. Whether you are making infrastructure changes to improve performance, reliability, security, or to support new service deliveries, we have to get these times down to daily or (even better) as needed. There are a lot of improvements to be made here.

#3 – Application Change Frequency is just as bad. 69% of Fortune 1000 IT leaders say it takes them more than a week to release application code into production. Notice that it doesn’t say “to develop, test, and release code into production.” We’re talking about just releasing code that has already been written and tested. 41% say it still takes them more than a month to release code into production. Hard to believe, but the data is clear.

#4 – IT breaks things far too often when making changes. 46% of Fortune 1000 leaders reported that more than 10% of their incidents were the result of changes that IT made. Talk about hurting end-user satisfaction and their perception of IT quality. What’s worse, though, is that 31% said they didn’t even know what percentage of their incidents are caused by changes made by IT!

#5 – The megatrends (virtualization, agile development, cloud, mobile) are intensifying the situation. As the report highlights, these trends “cause complexity to explode in a nonlinear fashion.”

So what can you do about this if you believe that “industrialization” and, therefore, automation is the answer (or at least a big part of the answer)? Well, first, you have to make sure your automation is intelligent – i.e. informed and accurate – because we all know that doing the wrong things faster will make things worse faster.

This is the problem we’re focused on at ITinvolve: helping IT operations and developers by giving them the knowledge and analysis they need, then facilitating collaboration to validate accuracy. Good automation must be driven by a model that fully comprehends the current state of configuration, the desired state, and the necessary changes and risks to get there. Only when armed with this information can automation engineers effectively build out the scripts, run books, etc., to deliver agility with stability and quality.
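The current-state/desired-state idea can be sketched in a few lines (all configuration keys and values here are invented for illustration): intelligent automation computes the difference between the two states and produces a reviewable plan, rather than blindly re-applying everything.

```python
# Minimal desired-state diff (hypothetical configuration values).
current = {"os_patch": "2023-10", "heap_size": "2048m", "tls": "1.1"}
desired = {"os_patch": "2024-01", "heap_size": "2048m", "tls": "1.2"}

def plan_changes(current, desired):
    """Return only the settings that have drifted, as (current, desired) pairs."""
    return {k: (current.get(k), v)
            for k, v in desired.items()
            if current.get(k) != v}

# The plan contains only the drift, ready for expert review before execution.
changes = plan_changes(current, desired)
print(changes)
```

Here `heap_size` never appears in the plan because it already matches; the automation acts only on what actually changed, which is what keeps fast automation from becoming fast breakage.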

Matt Selheimer
VP, Marketing

 
Published in APM Digest: