Migrating to the Cloud? Follow These Steps to Encourage Success


Enterprise cloud adoption increased dramatically during the COVID-19 pandemic — now, it’s the rule rather than the exception. In fact, 9 in 10 companies currently use the cloud in some capacity, according to a recent report from O’Reilly.

Although digital transformation initiatives were already well underway in many industries, the global health crisis introduced two new factors that forced almost all organisations to move operations online. First, that’s where their customers went. Amid stay-at-home mandates and store closings, customers had to rely almost solely on digital services to shop, receive support, partake in personalised experiences, and otherwise interact with companies.

Second, the near-universal shift to remote work made the continued use of on-premises hardware and computing resources highly impractical. To ensure newly distributed teams could work together effectively, migrating to the cloud was the only option for many companies. And although current adoption statistics are a testament to the private sector’s success in this endeavour, most companies encountered some obstacles on their journey to the cloud.

Barriers to success in cloud adoption

There are several different types of cloud platforms and a variety of cloud service models. To keep things simple, I tend to think of cloud resources in terms of two components: the back end and the front end. The former is the infrastructure layer. Beyond the physical servers and data centres that every cloud provider runs on, the infrastructure layer encompasses everything related to information architecture, including data access and security, data storage systems, computational resources, availability, and service-level agreements. The front end is the presentation layer or application interface, including the end-user profile, authentication, authorisation, use cases, user experiences, developer experiences, workflows, and so on.

Not long ago, companies would typically migrate to the cloud in long, drawn-out stages, taking plenty of time to design and implement the back end and then doing the same with the front end. In my experience working with enterprise customers, the pandemic changed that. What used to be a gradual process is now a rapid undertaking with aggressive timelines, and front-end and back-end systems are frequently implemented in tandem, with end users brought in earlier to participate in more frequent iterations.

Moreover, the pandemic introduced new cost considerations associated with building, maintaining, and operating these front-end and back-end systems. Organisations are searching for cost savings wherever possible, and though a cloud migration can result in a lower total cost of ownership over the long run, it does require an upfront investment. For organisations facing labour and capital constraints, that upfront cost can be a deciding factor.

Aggressive timelines and cost considerations aren’t roadblocks in themselves, but they can certainly create challenges during cloud deployments. What are some other obstacles to a successful cloud migration?

Attempting to ‘lift and shift’ architecture

When trying to meet cloud migration deadlines, organisations often provision their cloud resources as exact replicas of their on-premises setups without considering native cloud services that could take on much of the maintenance and performance overhead. Without reworking components of their workflows to take advantage of those cloud-native services, companies end up carrying all of their inefficiencies into the cloud. Instead, organisations should view migration as an opportunity to adopt a better architecture, one that can save on costs, improve performance, and deliver a better experience for end users.
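To make the difference concrete, here is a minimal sketch in Python of what a cloud-native alternative to a lifted-and-shifted cluster might look like, using Dask and Coiled. It assumes a configured Coiled account and access to object storage; the bucket path, column names, and worker count are purely illustrative.

```python
# A minimal sketch of a cloud-native alternative to replicating a fixed,
# always-on on-premises cluster. Assumes Dask and Coiled are installed and a
# Coiled account is configured; bucket path, columns, and sizes are illustrative.
import coiled
import dask.dataframe as dd
from dask.distributed import Client

# Instead of provisioning a permanent replica of the on-prem cluster, request
# an on-demand cluster that exists only for the duration of the job.
cluster = coiled.Cluster(n_workers=20)
client = Client(cluster)

# Read data directly from object storage rather than a lifted-and-shifted file
# server, and run the same computation the on-prem workflow performed.
events = dd.read_parquet("s3://example-analytics-bucket/events/*.parquet")
daily_totals = events.groupby("event_date")["amount"].sum().compute()

# Tear the cluster down when the work is done; there is no idle replica to pay for.
client.close()
cluster.close()
```

The point isn’t the specific services; it’s that the replacement architecture only exists, and only incurs cost, while it’s doing useful work.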

Focusing on infrastructure rather than user needs

When data leaders move to the cloud, it’s easy to get caught up in the features and capabilities of various cloud services without thinking about the day-to-day workflow of data scientists and data engineers. Rather than optimising for developer productivity and quick iterations, leaders commonly focus on building a robust and scalable back-end system. Additionally, data professionals want to get the cloud architecture perfect before bringing users into the environment. But the longer the cloud environment goes untested by end users, the less useful it will be for them. My recommendation is to bring a minimal amount of data, development environments, and automation tooling into the initial cloud environment, then introduce users and iterate based on their needs.

Failing to make production data accessible in the cloud

Data professionals often enable many different cloud-native services to help users perform distributed computations, build and store container images, create data pipelines, and more. However, until at least some of an organisation’s production data is available in the cloud environment, those services aren’t immediately useful. Company leaders should work with their data engineering and data science teams to figure out which data subsets would be most useful to have in the cloud, migrate that data, and let the teams get hands-on with the cloud services. Otherwise, leaders might find that almost all production workloads stay on-premises due to data gravity.
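As a rough illustration of that first step, the sketch below copies one representative slice of a production table into cloud object storage so teams can start experimenting against real data. The connection string, table, and bucket path are hypothetical, and it assumes pandas, SQLAlchemy, and an S3 filesystem library such as s3fs are available with cloud credentials already configured.

```python
# A minimal sketch of staging a useful subset of production data in the cloud.
# The connection string, table, and bucket path are hypothetical; assumes pandas,
# SQLAlchemy, and s3fs are installed and cloud credentials are configured.
import pandas as pd
from sqlalchemy import create_engine

# Pull only the slice the data science team actually asked for (say, the last
# 90 days of orders), not the entire warehouse.
engine = create_engine("postgresql://analytics_readonly@onprem-db/warehouse")
recent_orders = pd.read_sql(
    "SELECT * FROM orders WHERE order_date >= CURRENT_DATE - INTERVAL '90 days'",
    engine,
)

# Land it in object storage in a columnar format so cloud-native tools can use
# it immediately, then let users get hands-on and iterate from there.
recent_orders.to_parquet(
    "s3://example-cloud-landing/orders/recent_orders.parquet",
    index=False,
)
```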

A smoother cloud transition

Although obstacles abound, there are plenty of steps that data leaders can take to ensure their cloud deployment is as smooth as possible. Furthermore, taking these steps will help maximise the long-term return on investment of cloud adoption:

1. Centralise new data and computational resources.

Many organisations make too many or too few computational and data analytics resources available, and the resulting solutions end up decentralised and poorly documented. As a result, adoption across the enterprise is slow, users do most of their work in silos or on laptops, and onboarding new data engineers and data scientists is a messy process. Leaders can avoid this scenario by identifying the core data sets and computational needs for the most common use cases and workflows and centralising the solutions for those. Centralising resources won’t solve every problem, but it will allow companies to focus on the biggest challenges and bottlenecks and help most people move forward.
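In practice, “centralising the solutions” can be as simple as one shared, documented module that replaces the per-laptop copies of paths and defaults. The sketch below is purely hypothetical: the module name, dataset locations, and sizes are illustrative, not a real internal package, but it captures the idea.

```python
# A hypothetical sketch of centralising the paths and defaults that otherwise
# live in personal notebooks and on laptops. Nothing here is a real internal
# package; names, locations, and sizes are illustrative only.
# For example, saved as analytics_platform/resources.py and shared by everyone.

CORE_DATASETS = {
    # One documented, canonical location per core data set, instead of ad-hoc copies.
    "orders": "s3://example-cloud-landing/orders/",
    "customers": "s3://example-cloud-landing/customers/",
}

DEFAULT_CLUSTER = {
    # Shared defaults for the most common workloads; teams with larger needs can
    # override these rather than inventing their own setup from scratch.
    "n_workers": 10,
    "worker_memory": "16 GiB",
}


def dataset_path(name: str) -> str:
    """Return the canonical cloud location for a core data set."""
    return CORE_DATASETS[name]
```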

2. Involve users early.

Oftentimes, months or even years of infrastructure management and deployment work happens before users are told that the cloud environment is ready for use. Unfortunately, that generally leads to cloud environments that simply aren’t that useful. To overcome this waste of resources, data leaders should design for the end-user experience, workflow, and use cases; onboard end users as soon as possible in the process; and then iterate with them to solve the biggest challenges in priority order. They should avoid delaying production usage in the name of designing the perfect architecture or the ideal workflow. Instead, leaders can involve key stakeholders and representative users as early as possible to get real-world feedback on where improvements should be made.

3. Focus on workflows first.

Rather than aiming for a completely robust, scalable, and redundant system on the first iteration, companies should determine the core data sets (or subsets) and the smallest viable set of tools that will let data engineers and data scientists perform, say, 80% of their work. They can then gather feedback, identify the next set of solutions, and shorten feedback loops with each iteration. If a company deals with production data sets and workloads, it shouldn’t take shortcuts on standard levels of security, performance, scalability, or other capabilities. Data leaders can purchase an off-the-shelf solution or partner with a provider in order to avoid gaps in capability.
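A first iteration along those lines might be no more than the staged data subset plus a single familiar tool. The sketch below, using pandas, is one hedged example of that smallest viable workflow; the paths and column names are hypothetical, and it assumes cloud credentials and an S3 filesystem library are already set up.

```python
# A first-iteration workflow sketch: the staged core data subset plus the
# smallest viable toolset (here, just pandas). Paths and column names are
# hypothetical; assumes s3fs and cloud credentials are already configured.
import pandas as pd

# Start from the centralised, already-staged subset rather than the full warehouse.
orders = pd.read_parquet("s3://example-cloud-landing/orders/recent_orders.parquet")

# A representative slice of the everyday work: clean, aggregate, and report.
orders["order_date"] = pd.to_datetime(orders["order_date"])
weekly_revenue = orders.set_index("order_date").resample("W")["amount"].sum()

# Publish the result where the next iteration's feedback can be gathered, and
# grow the toolset only as real gaps appear.
weekly_revenue.to_csv("s3://example-cloud-landing/reports/weekly_revenue.csv")
```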

No going back

Cloud technology used to be a differentiator — but now, it’s a staple. The only way for companies to gain a competitive edge is by equipping their data teams with the tools they need to do their best work. Even the most expensive, secure, and scalable solution out there won’t get used unless it actually empowers end users.

Kristopher Overholt works with scalable data science workflows and enterprise architecture as a senior sales engineer at Coiled, whose mission is to make scalable computing accessible to everyone.
