Tales of a CCaaS Migration That Looked Right Until We Turned It On


Welcome guest contributor Jeremy Markey! For more than two decades, Jeremy Markey has worked in contact centers, workforce management, service operations, and transformation. He has spent much of that time helping organizations improve service, simplify the operating model, and make smarter decisions about technology, including evaluating, selecting, implementing, and operating CCaaS platforms from the customer side.

When you buy a CCaaS platform, it feels like you’re just buying software. But every time I’ve done it, I’ve realized I was actually buying something much bigger: new processes, new constraints, and a vendor partnership that could make my life dramatically better… or painfully worse. 

What you’ll find in this article is equal parts mess, untangling, and lessons learned. These lessons came from long days and longer nights over a multi-decade career in the CX/CS industry, through dozens of vendor migrations, system rebuilds, and RFPs.

One thing we should always try to remember: we’re not just buying new software. We’re buying what we think our organization can survive, support, and sustain. I’ll be discussing specific vendors here, but you’ll notice before too long that names have been changed to protect the innocent.

Setting the stage

Our organization had a great long-term relationship with our existing contact center provider, let’s call them Bridge Four Connect. But the solution we had was dated, didn’t support the latest features, wasn’t cloud-based, and required a lot of in-house expertise to maintain.

Before I joined the company, the decision was made to migrate to a modern platform. During the RFP process, the existing vendor was the favorite and had a brand-new cloud-based CCaaS platform. But a new contender in the space, we’ll call them Gondor Communications, wowed everyone with their next-generation CCaaS platform. The current capabilities of both platforms seemed comparable, but Gondor Communications had the most exciting roadmap. And because they were a new player in the space, we were promised a big say in how that roadmap would play out.

By the time I joined the company, the migration was already behind and the heat was rising. But it wasn’t my problem, at least not yet. Those who started this transition were to see it to the end. Then another delay was needed, which meant extending our contract with Bridge Four Connect, so I was asked to step in.

As I started peeling back the onion layers, the predominant theme was that we thought we were buying better technology. Instead, we were buying operational constraints we didn’t understand, technology dependencies we hadn’t fully vetted, and tradeoffs to our workflow we couldn’t accept.

What broke first

Once I was involved in getting the project back on track, I found that the business units had completely different, and sometimes contradictory, expectations of the system. Because each unit wanted different things, our systems integrator was effectively building several versions of the same system, when realistically we only had time to build one.

As we worked through these pains, we were finally ready for our first round of UAT (user acceptance testing), and I was excited. The product finally worked, the screens looked great, and we had something we were proud of. But that moment didn’t last. Our users voiced several issues, a few of them showstoppers. The top one was a change to workflow. With Bridge Four Connect, our supervisors could call an agent who was in an unavailable state, and the agent’s phone would ring. So if the agent was on break and had stepped away, or was doing back office work with their headset off, they’d hear the ring, answer it, and talk to their supervisor.

Unfortunately, our integration with Gondor Communications would not support that functionality. We brought the issue to them, and the systems integrator was confident they could deliver a like-for-like solution. Months later, the answer was “Just send them an IM instead,” which we unfortunately had to accept, along with multiple other significant changes to our workflow we hadn’t signed up for.

With all that done, we were finally ready to start migrating teams. All the prep was done, UAT was accepted, synthetic workloads all came back clean. The first team migrated without a hitch. We had a winner on our hands! 

Then we migrated the second team. At first, everything was humming along as planned. But as volume picked up, voice quality issues started popping up. The delay between someone speaking and the other side hearing it kept widening, some callers sounded like they were underwater, and before long not a single call was passing. Thankfully, we had a rollback plan, so we rolled back.

What I just described happened several times over several weeks, and we replaced a ton of networking hardware. Our new vendor, Gondor Communications, was convinced the problem was our aging internal infrastructure. But once that was all upgraded and the same problem persisted, we’d had enough. We escalated, got one of their senior executives on a technical call, who then got a senior executive from our LEC (local exchange carrier) on the line.

We learned all of our data was being passed over a 5G hop. With our old (mostly) on-prem solution, voice came in over dedicated voice lines instead of data lines, so this was never a problem. But when we started pushing all the voice traffic over that 5G hop, it simply didn’t have the bandwidth to keep up. One thing I still don’t understand today is how this passed the synthetic testing Gondor Communications was so proud of!
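To put rough numbers on why an undersized link buckles: a single G.711 voice call carries a 64 kbps payload, and with RTP/UDP/IP framing the commonly cited figure is roughly 87 kbps per direction. A minimal back-of-envelope sketch follows; the link size, per-call figure, and the share of the link reserved for voice are all illustrative assumptions, not values from our environment:

```python
# Back-of-envelope VoIP capacity check (illustrative assumptions).
# G.711 payload is 64 kbps; with RTP/UDP/IP framing the commonly
# cited per-call figure is roughly 87 kbps in each direction.

def max_concurrent_calls(link_kbps: float,
                         per_call_kbps: float = 87.0,
                         voice_share: float = 0.33) -> int:
    """How many calls fit if we reserve `voice_share` of the link for voice."""
    return int((link_kbps * voice_share) // per_call_kbps)

# A hypothetical 50 Mbps link with a third reserved for voice:
print(max_concurrent_calls(50_000))  # 189 concurrent calls
```

Running a check like this against every hop in the path, at peak concurrency rather than test-lab volume, is exactly the question our synthetic testing never answered.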

Once our data was re-routed over lines that met our bandwidth requirements, we were able to transition every agent to the new system in a few weeks. And I’d love to tell you it was happily ever after from there on out. It wasn’t.

What took so long to untangle

After rollout, we noticed our customer effort score (CES) started to take a hit. Normally when we see a dip, it can easily be tied to a few dispositions, or maybe our speed of answer getting lengthy and frustrating customers. But neither was true. Escalations weren’t up, and we didn’t have any product issues. We were left scratching our heads and chasing our tails. What was going on?

The whole time, we had missed an increasing background noise: complaints that emails weren’t getting answered. It was in the data; we just didn’t pick up on it. One thing I’ve done for a very long time is listen to calls and read chats and emails. It’s not part of my job requirements, and nobody expects it of me, but I learn a lot more from front line interactions than from reports alone. And I listened to a call where one of our agents was (rightfully) defending us against a customer who was livid we weren’t answering emails. And, fortunately for what came next, the customer said some things that crossed a line.

I took it upon myself to dig in and prove the customer wrong. My plan was to work with my boss, get the customer on the line, and have at it. It was my way of defending the agent who had unjustly taken all this flak. But when I reached out to IT, they said they’d been noticing a backlog of emails not migrating from our email provider to our CRM. When we looked, it was thousands of emails.

The process we built meant that an email would come into our email provider and automatically get picked up by our CRM. Then our agents only had to interact with emails inside the CRM. But if the email never made it to the CRM…

With this knowledge, IT escalated the issue with our email provider. Several long troubleshooting calls later, it turned out it wasn’t our email provider’s problem at all. The API integration set up with Gondor Communications was causing us to exceed our API call limit with our CRM. Once that happened, every call after it within the same business day was denied. While we were still chasing down several other data integrity issues, this was the smoking gun that impacted everything.
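The defensive pattern we ultimately needed can be sketched like this: track API usage against the daily cap and queue overflow work instead of letting the CRM silently deny it. Everything here is hypothetical for illustration; the class name, the `push_to_crm` callable, and the limit values are assumptions, and real CRM quotas and reset windows vary by vendor and plan:

```python
from collections import deque

class DailyRateLimitedSync:
    """Queue email-to-CRM sync calls and stop before a daily API cap.

    The cap, margin, and `push_to_crm` callable are illustrative
    assumptions, not a real vendor's API.
    """

    def __init__(self, push_to_crm, daily_limit=10_000, safety_margin=500):
        self.push = push_to_crm
        # Leave headroom so other integrations sharing the quota still work.
        self.budget = daily_limit - safety_margin
        self.calls_today = 0
        self.backlog = deque()

    def submit(self, email):
        self.backlog.append(email)
        self.drain()

    def drain(self):
        # Push queued emails until the day's budget is spent; anything
        # left stays queued instead of being silently denied.
        while self.backlog and self.calls_today < self.budget:
            self.push(self.backlog.popleft())
            self.calls_today += 1

    def reset_day(self):
        self.calls_today = 0
        self.drain()
```

The design point is that the backlog is visible and monitorable, so "emails stuck between systems" becomes an alert instead of a mystery that surfaces through angry customers.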

We had to completely rebuild the integration between our CCaaS provider and our CRM from scratch. And we had to do it without a systems integrator, budget, or extra time; it had to be done alongside our day jobs. To IT’s and our Admin team’s credit, they got it done in record time!

Once we cut over to the rebuilt integration, our CES results recovered over a few weeks, and the noise around unanswered emails went away. The takeaway: multiple overlapping problems made diagnosis slow, political, and sometimes expensive, particularly while we all had day jobs to get done at the same time.

What I learned along the way

Do not buy the roadmap

What is coming doesn’t matter when buying a product. Buy a product for what it can do right now; for all you know, this is the best it will ever be. To be fair to Gondor Communications, once they solved their stability issues they did get back to adding a lot of great features. Features we no longer had time to implement, as we were busy migrating to yet another CCaaS provider, we’ll call them Rocinante Contact Center Systems. Spoiler alert: we applied the lessons learned, and it was an overall fantastic migration that came in on time and on budget!

Ensure your build team has hands-on time

No matter who the vendor is or how well other organizations rate them, whatever they demo will either be in a clean room under near-perfect circumstances, or from another customer whose operations are different enough not to be applicable. The proof is in the pudding: you have to get your build team hands-on time. Boil the RFP down to two or three finalists, then have those finalists set up test environments and let your build teams go at it for a few weeks. Have them build as much as they can toward your requirements in that time, and score the results. In fact, make this a requirement in your RFP.

New systems change the work itself, not just the screens

Once we got into UAT, we found we were not even close to ready for the changes needed. While I only outlined one workflow change here, in reality we changed almost the entire workflow for our agents and supervisors. It took months of training and retraining, and my team and others lost a lot of bandwidth getting folks up to speed. We also had to rebuild our entire reporting suite, as none of what we had before would work with the new system. That took more time and more training. Bottom line: the cost of change was significantly more than we had budgeted for.

But with the migration to Rocinante Contact Center Systems, we had accounted for it. We brought the training department along for the ride, UAT was part of the evaluation/taste test process for selection and continued throughout, and we partnered with the other business units within the company from day one so we were all in lockstep throughout the transition.

Integrations and edge cases are often the part that breaks everything

Integration work isn’t sexy, and it won’t enhance the customer experience on its own. But it’s the kind of problem that throws everything off right away, and if it’s not handled, the negative effects can drag on for years.

Had we done a taste test AND included those who would be part of the UAT and other internal business units from the beginning, we could have avoided all this pain. We also would have learned neither of the two finalists were good business fits for us. Hindsight being what it is, we should have extended with Bridge Four Connect for another year on the old platform and continued diligently looking for the right partner.

Finding the right partner

Pick the partner who helps you understand the whole change, not just the software

  • Document what parts of your current workflow will have to change to make the platform work as designed. A good document names specific workflow changes, where they show up, and who is affected first.
  • Conduct a review of the future-state operating model, not just the future-state screens. You should be able to point to what changes in ownership, decision-making, handoffs, and daily work.
  • Determine if the vendor has a structured plan for helping you absorb the change, not just stand up the tool. Expect to see steps, owners, and timing.

Pick the partner who is strongest for your real workflows, not their best demo flow

  • Conduct a hands-on taste test with your build team in a test environment and see how quickly they can build your real workflows. A good result is that your team can build and test meaningful pieces of real work in a reasonable amount of time.
  • Have the people who will actually live in the platform test it, not just the people buying it. You want feedback from end users that is specific, practical, and tied to the real work.
  • Score the system on how it handles your real work, not how pretty the demo is. A good scorecard favors fit, speed, flexibility, and usability in your environment.
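One way to make that scorecard concrete is a simple weighted average across the criteria above. The weights, criteria names, and the 1–5 rating scale here are illustrative assumptions, not a standard:

```python
# Hypothetical vendor scorecard; weights and criteria are illustrative.
WEIGHTS = {"fit": 0.35, "build_speed": 0.25, "flexibility": 0.2, "usability": 0.2}

def score(vendor_ratings: dict) -> float:
    """Combine 1-5 ratings per criterion into a weighted score out of 5."""
    return round(sum(WEIGHTS[k] * vendor_ratings[k] for k in WEIGHTS), 2)

# Example: strong flexibility, but build speed lagged during the taste test.
print(score({"fit": 4, "build_speed": 3, "flexibility": 5, "usability": 4}))  # 3.95
```

Whatever weights you pick, agree on them before the taste test starts, so the prettier demo can’t quietly rewrite the rubric afterward.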

Pick the partner who can explain what gets harder, not just what gets easier

  • Ask the vendor to list what gets harder if you choose their platform. A good answer is candid and specific, not “there really aren’t many tradeoffs.” Then have a review completed of the tradeoffs you are accepting by choosing this platform. The review should cover those tradeoffs in plain English.
  • Identify where the platform has limits in reporting, integrations, workflow flexibility, administration, and digital capability. You want named boundaries, not polished language.
  • Have the vendor provide what workarounds are common in their customer base. A good answer includes examples of where customers usually have to adapt.

Pick the partner who treats readiness, training, and adoption as part of the deal

  • Complete an inventory of what training is needed for agents, supervisors, admins, and support teams. The inventory must break training down by audience and by what each group must do differently.
  • Conduct a readiness check with IT, operations, training, reporting, and admins before final selection. A good result is that each group can name its role, risks, and open needs.

Pick the partner who has a credible post-go-live support model

  • The vendor should have an escalation matrix; expect them to provide specific examples along with it. Review the matrix with business owners, IT, admins, etc., and get sign-off that the matrix and examples meet business needs.
  • Ask for examples of how the vendor stayed engaged with customers after a rough launch. A good answer sounds like a real support story, not a brochure.
  • Review the vendor’s handoff model from sales to implementation to support to your technical account manager (TAM). A good model shows who owns what and where issues go when something stalls.

Pick the partner who will still look good when something breaks

  • Determine what happens when the vendor thinks the issue is your network, your CRM, or your process. You want a clear description of how they stay engaged until the issue is truly understood.
  • Conduct reference checks focused on bad days, not good ones. A good reference tells you the vendor was responsive, honest, and helpful under pressure.

Pick the partner your organization can survive, support, and sustain

  • Conduct a fit check on current-state product capability against your real must-haves. A good result is that the true must-haves work now without heroic effort.
  • Build an inventory of what internal support burden your team will carry after launch. A good list is realistic about admin load, reporting work, integration support, and user support.
  • Determine whether your organization has the time, skills, and bandwidth to support what you are buying. This answer comes from your side as much as the vendor’s, and it should be uncomfortable if the answer is no.

In conclusion

A migration like this will be hard. That’s exactly why we have to remember we’re not just buying a platform, we’re buying the future we’ll have to live with and a partner we’ll depend on when that future gets messy.

Because the real test of a system doesn’t happen on the day we sign the contract. It comes later, when something breaks and we find out who’s still standing next to us. Choose wisely. 

Ready to work with a partner who stands with you the whole way?

Contact Vertical to start the conversation.


About the Author