Those of us in the end-user computing (EUC) space deal with the consequences of legacy systems and processes every day. It’s an ongoing effort to figure out what’s out there, how it’s performing, who is using it, and what their end-user experience is like.
Ultimately this process can make understanding IT of the past much more fraught than planning for IT of the future. But what if we could move that needle to make space for digital innovation? That would require a more efficient way of managing and modernising legacy hardware and software. To do so, we need to achieve a very specific understanding of what value our current technology is bringing to end users and the business.
It’s at this point that most feel overwhelmed or sceptical. So, let’s get one thing straight: no IT environment is special. Nor should you want it to be. Similarities help you see what works and what doesn’t, and they can give your organisation a competitive edge by enabling it to deliver superior service compared with other shops.
Yes, IT Is Mostly All the Same
As a professional in the IT industry for over 25 years, I’ve seen first-hand how technology has shaped the high-velocity, information-saturated world of today, both at home and in the workplace. Because of these changes, many of the problems IT faces can appear insurmountable—the result of years of accumulating products and policies while juggling security and users’ expectations.
When I discuss pain points with clients, often people are resigned to accommodate ‘the way things are’. But this narrow view underestimates IT’s power to effect change, so I work to help people find a new vantage point—to see the forest for the trees, so to speak.
Collective intelligence benchmarking tools can give you detailed insight on this, but fully understanding your environment requires at least a basic awareness of what IT looks like at other organisations.
If you think about it, the base architecture and platforms most organisations use are very similar:
Servers and user devices are connected with a network
Datacentres house the central information, secure and protect it, and make it available to those authorised to consume or update it
Only a handful of mainstream operating systems exist; often just one or two dominate
Usually, central services are virtualised in some way, abstracting them from the hardware, to optimise assets, availability and management
The generalisation above applies to the vast majority of organisations of any size on the planet, so what is it that makes one so different to the next?
Two things: the first is an organisation’s mix of applications and the second is something I call the ‘mesh’. That is, all the customisations, dependencies, interactions and interfaces (both internal and external) that evolve over time. The mesh expands over many years, connecting and binding requirements together into a complex organism that becomes an organisation’s IT.
No wonder IT is perceived as elaborate, stressful and difficult to support, requiring very clever people to keep the cogs turning and lights on! Often, the people who implemented much of the mesh have retired or forgotten critical knowledge, forcing current teams to navigate legacy IT like a minefield.
This leaves IT pros with three options:
Charge ahead without a clear understanding of whether legacy systems are still in use and how they relate to the rest of the environment (the outcomes of this approach are regularly seen in the news)
Tiptoe around any change while supporting systems that may no longer be adding business value, dragging out the length of a project
Identify a way to gain visibility into the mesh to address it head-on while minimising any negative effects
Of course, visibility sounds great, but as we recently discussed (see: “Satellite vs Probe: Choosing Your Ideal Digital Experience Monitoring Tool”), many monitoring solutions gather data from the outside-in, meaning that they can’t tell you much about the actual health and usage of a machine from the end-user perspective. And understanding that perspective is key to uncovering the value of a legacy system to the user and organisation, which will inform how (or if) you choose to modernise it.
An End-User Computing Approach to Managing Legacy Systems
Building a complete and accurate picture of the environment with endpoint data is key to effectively managing all hardware and software, from the dustiest server to the shiniest SaaS application. In turn, this depth of knowledge will allow IT to act with greater agility and make more informed decisions about when and how to execute desktop transformation projects, such as an upgrade from Windows 7 to Windows 10.
In order to be complete and actionable, that picture has to include data on all usage, consumption, dependencies and interactions in your environment.
Let’s consider an old server running an ancient application:
Is it critical?
Who exactly uses it? From where? What is that user’s role?
How frequently is it actively used?
What are the application’s dependencies?
Is the application working optimally? Or is it crashing and faulting?
What is the root cause of the application’s performance problems?
What version of the application is running? Does it pose a security risk?
What is the productivity impact to users and the business?
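To make these questions concrete, here is a minimal sketch of how endpoint usage data could feed a simple triage decision. All names, fields and thresholds are hypothetical illustrations, not a SysTrack API:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AppUsageRecord:
    """Hypothetical endpoint telemetry for one legacy application."""
    name: str
    version: str
    last_used: date                  # most recent active use seen on any endpoint
    active_users: int                # distinct users in the reporting period
    crashes_per_week: float          # observed faults, averaged
    dependencies: list = field(default_factory=list)

def triage(record: AppUsageRecord, today: date) -> str:
    """Toy triage rule: retire idle apps, prioritise unstable ones."""
    idle = (today - record.last_used) > timedelta(days=180)
    if idle and record.active_users == 0:
        return "candidate to retire"
    if record.crashes_per_week > 1:
        return "investigate root cause"
    return "support and monitor"

rec = AppUsageRecord(
    name="InvoiceLegacy",
    version="2.3",
    last_used=date(2018, 1, 10),
    active_users=0,
    crashes_per_week=0.0,
    dependencies=["OldDB", "FileShare01"],
)
print(triage(rec, today=date(2019, 6, 1)))  # → candidate to retire
```

A real decision would weigh far more signals (user roles, dependency criticality, security posture), but the shape is the same: quantified usage data in, a defensible support/rationalise/retire recommendation out.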
The answers to these questions provide much more comprehensive insight, enabling effective management of this legacy operation. By using data to quantify and inform decisions, IT can support, rationalise or retire systems while understanding the potential impact on users and the business.
Using SysTrack to Transform from Legacy to New Technologies
SysTrack has a proven track record of enabling organisations to assess, quantify and accelerate the process of updating legacy systems.
The extremely granular, time-correlated analytics that SysTrack provides on and around applications are useful for a multitude of use cases besides the operational management scenario described above. Once support costs, risk and other factors are weighed, modernising legacy IT is often the next logical step. Examples of modernisation include desktop transformation projects such as VDI and OS upgrades. Post migration, SysTrack enables you to closely monitor for changes to ensure a positive impact on performance and end-user productivity.
Our partners and customers also use SysTrack’s toolset to model target architectures precisely based on a specific platform design, considering hardware, hypervisor, software rationalisation and consolidation, application components and dependencies, layering, gold images and more. This is what I used to do with SysTrack a few years ago when I headed up a transformation consulting practice within a UK-based virtualisation consultancy.
Software licensing can be a minefield to navigate, too. License models may be based on installed instances, actual usage, concurrent instances or CPU sockets, for example. By presenting all of this detailed information in a relevant and useful form, SysTrack informs decision-making with empirical data from real users, de-risking the process and providing a level of confidence and visibility previously unobtainable.
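To illustrate why usage data matters here, consider choosing between a per-install and a concurrent-use license. A rough sketch with entirely made-up figures (the applications, counts and prices below are illustrative, not real license terms):

```python
# Hypothetical usage data: install counts vs peak concurrent sessions,
# the kind of figures endpoint analytics can surface.
usage = {
    "AppA": {"installed": 500, "peak_concurrent": 120},
    "AppB": {"installed": 80,  "peak_concurrent": 75},
}
# Illustrative price points per unit for each license model.
prices = {"per_install": 40, "per_concurrent": 150}

def cheaper_model(app: str) -> str:
    """Pick the cheaper license model given observed usage."""
    u = usage[app]
    per_install_cost = u["installed"] * prices["per_install"]
    concurrent_cost = u["peak_concurrent"] * prices["per_concurrent"]
    return "concurrent" if concurrent_cost < per_install_cost else "per_install"

print(cheaper_model("AppA"))  # → concurrent   (120×150 = 18,000 < 500×40 = 20,000)
print(cheaper_model("AppB"))  # → per_install  (80×40 = 3,200 < 75×150 = 11,250)
```

Without measured concurrency data, AppA's 500 installs would suggest buying 500 licenses; the peak-usage figure tells a very different, cheaper story.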
This demo video offers a quick look at how SysTrack can help you manage applications in your environment:
Lakeside Software is a leader in cloud-based digital experience management. Our team of experts explores the latest Lakeside features, digital employee experience strategies, and industry trends to provide readers with the best information on end-user experience management, digital workplace optimization, IT asset rationalization, remote work management, proactive service desk operations, and other IT initiatives.