3 Ways to Leverage User Experience Monitoring You May Not Be Using Today

I’m a strong advocate of User Experience monitoring, and I spend a lot of time on the subject when talking with FireScope customers.  My biggest argument for using it is this: even if you’re monitoring your servers, applications, storage assets, network devices, virtual infrastructure, and so on through FireScope, and everything’s showing green lights and healthy performance curves, you have no guarantee that your customers aren’t experiencing problems or poor performance.  Only by directly testing the most common use cases and evaluating every step can you be sure that the service itself is operating normally.

But over the course of hundreds, if not thousands, of discussions, I’ve heard some of our customers come up with particularly interesting ways to use User Experience Monitoring in FireScope that I hadn’t quite thought of.  Today, let’s explore three.

Verifying SaaS Providers are Delivering on SLAs

There’s a growing trend toward shifting critical business applications from locally hosted and maintained deployments to cloud-based SaaS models.  And while it’s certainly less expensive to move your CRM, ERP, Helpdesk, and other applications to the cloud, your IT department will still be the first group called any time there’s an issue (even if the sales team bought the service without IT involvement).

Many of our customers put up flat screens in communal areas, such as right outside the helpdesk or in break rooms, that use our Slide Show capability to automatically rotate through dashboards displaying service health.  Employees can see service health at a glance, which reduces calls to the helpdesk.  It also gives IT’s organizational credibility a huge boost by providing transparency to the rest of the business.

I bring this up because one of our customers migrated their CRM to a SaaS provider.  The CIO thought this approach would effectively relieve him of responsibility, but he quickly found that his helpdesk was still getting calls every time a problem occurred at the provider.  His solution: a User Experience Check (UEC) in FireScope that logged into the CRM, ran a business-critical report, and created and deleted a test contact.  By adding the status of these checks to the communal dashboards, users could quickly see when the provider was suffering issues and refrained from calling the helpdesk about a problem that was already known.  But that wasn’t the end of the story.

At the end of their first year with the new provider, FireScope’s SLA reporting highlighted that the provider was failing to meet their SLA obligations.  Armed with this information, the CIO was able to get significant concessions from the provider.
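FireScope builds UECs through its console rather than code, but to make the idea concrete, here is a minimal Python sketch of the same sequence of steps using the requests library.  Everything here is hypothetical: the URLs, endpoints, and field names stand in for whatever the real CRM exposes, and the per-step timings are what you would feed into SLA trending.

```python
import time
import requests

BASE_URL = "https://crm.example-provider.com"  # hypothetical SaaS endpoint

def run_crm_check(username, password):
    """Walk the same steps as the UEC: log in, run a report,
    then create and delete a throwaway test contact."""
    session = requests.Session()
    timings = {}

    # Step 1: log into the CRM.
    start = time.monotonic()
    resp = session.post(f"{BASE_URL}/login",
                        data={"user": username, "pass": password})
    resp.raise_for_status()
    timings["login"] = time.monotonic() - start

    # Step 2: run a business-critical report.
    start = time.monotonic()
    resp = session.get(f"{BASE_URL}/reports/pipeline")
    resp.raise_for_status()
    timings["report"] = time.monotonic() - start

    # Step 3: create a test contact, then clean it up.
    start = time.monotonic()
    resp = session.post(f"{BASE_URL}/contacts",
                        json={"name": "UEC Test Contact"})
    resp.raise_for_status()
    contact_id = resp.json()["id"]
    session.delete(f"{BASE_URL}/contacts/{contact_id}").raise_for_status()
    timings["contact_roundtrip"] = time.monotonic() - start

    return timings  # per-step durations, ready for SLA reporting
```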


Multi-Site Testing of User Experiences

A large Financial Services firm was using FireScope to monitor, among other critical services, a loan processing platform used by offices across the US.  Prior to implementing FireScope, they routinely had panic drills: users in their Chicago office reporting being unable to log into the system, San Francisco users reporting performance degradation that made it virtually impossible to get work done, and the Cincinnati office reporting that report generation was timing out.

[Image: User Experience Group example in FireScope]

Because FireScope’s User Experience Checks are executed by the FireScope Edge device they are associated with, this organization was able to create a single UEC testing common use cases and then clone it for execution by the Edge devices residing in each of their offices.  Furthermore, by grouping all of these into a User Experience Group, they were able to compare how these UECs performed from each business location in an apples-to-apples way.  When an individual location showed abnormal performance on the checks, they knew to start inspecting the local network and WAN connections.  When all locations showed abnormal performance, they knew to start looking at the corporate datacenter.
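That triage rule is simple enough to sketch in a few lines of Python.  The snippet below is illustrative only: it assumes per-location check durations have already been collected by some means, and the threshold and location names are invented rather than FireScope output.

```python
def triage(durations, baseline, tolerance=2.0):
    """Classify a round of multi-site UEC results.

    durations -- mapping of location name to check duration in seconds
    baseline  -- expected duration under normal conditions
    tolerance -- multiple of baseline considered abnormal
    """
    slow = {loc for loc, d in durations.items() if d > baseline * tolerance}
    if not slow:
        return "all clear"
    if slow == set(durations):
        # Every office is degraded: suspect the shared corporate datacenter.
        return "investigate datacenter"
    # Only some offices are degraded: suspect their local network/WAN links.
    return f"investigate local network/WAN at: {', '.join(sorted(slow))}"

print(triage({"Chicago": 9.8, "San Francisco": 2.1, "Cincinnati": 2.3},
             baseline=2.0))
# -> investigate local network/WAN at: Chicago
```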

Over the course of the next month, they identified numerous issues with broadband providers that were impacting individual business units, as well as three occasions where bad code was introduced that affected the entire business.  In terms of the time previously spent just narrowing issues down, FireScope paid for itself.


Integrating User Experience Monitoring with DevOps/Agile

[Image: Defining macros in User Experience Checks]

FireScope User Experience Checks have a Macro capability that simplifies configuration of multiple, similar tests.  Basically, you can define a macro such as [Host] with a given value, such as www, and each step can then use the macro in any field, such as the URL [Host].corporateapplication.com.  This was particularly useful for one FireScope customer who was adopting an Agile development methodology for their most critical customer-facing application.
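The substitution itself is easy to picture.  Here is a minimal Python sketch of square-bracket macro expansion; this illustrates the concept, not FireScope’s actual implementation.

```python
import re

def expand_macros(template, macros):
    """Replace [Name] tokens in a UEC field with their configured values."""
    return re.sub(r"\[(\w+)\]", lambda m: macros[m.group(1)], template)

macros = {"Host": "www"}
url = expand_macros("[Host].corporateapplication.com", macros)
print(url)  # -> www.corporateapplication.com
```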

Prior to FireScope, instrumentation of this critical application seriously lagged behind implementation.  They had a three-week update cycle, but it took two weeks to update their monitoring tools every time they pushed a new release to production.  And because of the effort involved, only the production version of the application was ever monitored.

The cloning and macro capabilities I discussed earlier enabled them to build out a User Experience Check in the final stages of development, offering early visibility into each new build.  As the build migrated through load testing, security sign-off, QA, Staging, and finally Production, the same UEC followed it by simply changing the values of the macros.  This helped them spot issues earlier in the development process, and it flagged problems that cropped up during the handoff between stages.  Furthermore, by maintaining historic performance trends, they were able to do some interesting analysis comparing different iterations of the application to quantify the actual improvements achieved by code changes.
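To make that concrete, here is a small, hypothetical continuation of the macro sketch above: one set of check steps promoted through the pipeline just by swapping the [Host] value.  The hostnames and paths are invented for illustration.

```python
import re

def expand_macros(template, macros):
    """Same substitution helper as in the earlier sketch."""
    return re.sub(r"\[(\w+)\]", lambda m: macros[m.group(1)], template)

# One check definition; only the [Host] value changes per environment.
check_steps = [
    "[Host].corporateapplication.com/login",
    "[Host].corporateapplication.com/reports/daily",
]

# Hypothetical hostnames for each stage of the pipeline.
environments = {"QA": "qa", "Staging": "staging", "Production": "www"}

for env, host in environments.items():
    urls = [expand_macros(step, {"Host": host}) for step in check_steps]
    print(f"{env}: {urls}")
# Production: ['www.corporateapplication.com/login',
#              'www.corporateapplication.com/reports/daily']
```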


Now, imagine what you could do…