As cloud deployment continues to grow in response to business needs for flexibility, cost savings, innovation, and digital transformation, organizations face new challenges and opportunities that affect how they operate.
Presenting at the recent CIO Future of Cloud event, Dave McCarthy, Research Vice President, Cloud Infrastructure Services at IDC, shared IDC’s worldwide cloud predictions for 2022, focusing on four forecasts that he said will be important for companies over the next one to three years.
What follows are edited excerpts from this presentation. For more IDC insights as well as survey diagrams, watch the video embedded below.
On application modernization:
By 2024, the majority of legacy applications will receive some modernization investment, with cloud services used to extend functionality or replace inefficient code in 65% of those applications.
So what does this mean? To me, it means that applications will take different forms as they go through modernization. Like many things in life, we want to treat this as absolute, as if everything will be modernized. But in reality, when you look at how companies think about it, there is a spectrum: some applications they consider ripe for a complete overhaul, while others may only take a smaller step along the way ….
[T]here are a few [legacy applications] that may never go through a full modernization process. That does not mean, however, that they cannot take advantage of some of the newer technologies. So what you will see are companies augmenting legacy applications with things like machine learning and AI services: take the app you already have, use its data to become smarter, and make faster decisions without touching that code base. In other cases, you may see someone bring a new user interface or mobile application design to it, extending existing functionality without re-platforming the entire backend.
Of course, those who do go down the path of modernization are looking at the various tools available: things like container-based code and more API-driven automation, because of the benefits they provide. Things like the ability to react faster, make more granular application updates, or, quite honestly, just develop new features faster.
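The incremental path McCarthy describes resembles the well-known strangler-fig pattern: new, container-friendly code replaces individual capabilities behind a stable interface, while the legacy path keeps working. A minimal sketch in Python, with an entirely hypothetical pricing feature; every function and flag name below is invented for illustration:

```python
# Sketch of a strangler-fig facade: route calls to either the legacy code
# path or a modernized replacement, so a single capability can be updated
# granularly without rewriting the whole application.
# All names here are hypothetical, for illustration only.

def legacy_price_quote(items):
    # Untouched monolith logic: a flat 20% markup.
    return sum(items) * 1.20

def modern_price_quote(items):
    # New, independently deployable logic: adds a volume discount.
    subtotal = sum(items)
    discount = 0.05 if len(items) > 10 else 0.0
    return subtotal * 1.20 * (1 - discount)

# A feature flag lets the team shift traffic one capability at a time
# and roll back instantly if needed.
FEATURE_FLAGS = {"pricing_v2": True}

def price_quote(items):
    if FEATURE_FLAGS.get("pricing_v2"):
        return modern_price_quote(items)
    return legacy_price_quote(items)
```

Flipping `pricing_v2` off sends traffic back to the legacy path, which is what makes these granular updates low-risk compared with a wholesale rewrite.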
And as companies look to build their flexibility, we will continue to see an increasing amount of application modernization in all parts of the business.
On dedicated cloud services:
By 2025, in response to performance, security, and compliance requirements, 60% of organizations will operate dedicated cloud services, either on premises or at a service provider’s facility.
Now, the concept of a dedicated cloud is largely tied to thinking about cloud from a hybrid point of view, but even more from an edge computing point of view. Certainly, I think there were people who thought that anything and everything was on its way to the public cloud. But if you look at how cloud providers are approaching this now, they have taken a different approach. I think they have realized that there are certain workloads, or certain business requirements, where the public cloud just isn’t as efficient or has some limitations.
For example, much of what you hear about edge computing is the need to reduce latency. The round trip from where your data originates to the cloud, to make a decision and come back, can be prohibitive, especially in real-time situations. Think about a production environment … those milliseconds matter; they can mean the difference between a safety incident and a product defect.
The other case you see a lot is wanting more control over where data lives. In Europe we have all heard of GDPR as a regulation; we have some similar ones in the United States. And the reality is that more of these will come, where sovereignty over where the data lives is important.
And then you begin to see this manifest in the context of business continuity. What happens if the public cloud, or the network between you and the public cloud, is suddenly unavailable? You need some way to keep that application running. If you are a retailer, for example, and you have an outage in a backend system, you still need to process transactions. You still need to track your inventory.
Dedicated cloud solutions exist to handle these cases.
On data in the multicloud:
Looking for consistency across distributed data, 75% of organizations will implement multicloud data logistics tools by 2024, abstracting policies for data capture, migration, security, and protection.

Now, that’s what this multicloud story is all about: most companies end up in this place, whether they intend to or not. And as this complexity grows, they need to re-evaluate not only their policies, how they want to deal with things like data retention or how they want to apply security to data, but also how to do it consistently across multiple clouds.
When you start in this kind of environment, you may be able to deal with it in some manual way. But over time, the scale involved and simply the potential for human error mean that people will continue to invest in more automated tools that can help ensure consistency, managing what people talk about in this concept of DataOps, which is really rooted in processes and procedures.
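The abstraction McCarthy describes is often implemented as policy-as-code: declare one abstract policy and mechanically translate it into each provider's native settings, so retention and protection rules stay consistent everywhere. A minimal sketch, with entirely hypothetical provider names and setting keys (no real cloud APIs are used):

```python
# Policy-as-code sketch: one abstract data policy, translated per provider.
# Provider names and setting keys below are invented placeholders.

ABSTRACT_POLICY = {
    "retention_days": 365,
    "encrypt_at_rest": True,
    "replicate": True,
}

# Per-provider translation tables mapping abstract keys to each cloud's
# (hypothetical) native setting names.
PROVIDER_KEYS = {
    "cloud_a": {
        "retention_days": "lifecycle_expiry_days",
        "encrypt_at_rest": "sse_enabled",
        "replicate": "cross_region_replication",
    },
    "cloud_b": {
        "retention_days": "ttl_days",
        "encrypt_at_rest": "cmek_required",
        "replicate": "geo_redundant",
    },
}

def render_policy(provider):
    """Translate the single abstract policy into one provider's settings."""
    keys = PROVIDER_KEYS[provider]
    return {keys[name]: value for name, value in ABSTRACT_POLICY.items()}
```

Because every cloud's configuration is generated from the same source of truth, a change to the abstract policy propagates everywhere at once, which is exactly the consistency that manual per-cloud administration cannot guarantee.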
On cloud economics:
By 2023, 80% of organizations using cloud services will establish a dedicated FinOps function to automate policy-driven observability and optimization of cloud resources to maximize value.
So, this is one of the unexpected side effects of mass cloud adoption. The ease of spinning up resources removed much of the friction on the getting-started side, but it introduced a new problem. It introduced the unexpected bill …
And part of the problem is that, in many companies, there has not always been one person responsible for understanding all of this. That is because many factors go into cloud costs. Some of them are architectural: there is a difference between moving monolithic workloads to the cloud versus taking advantage of some of those application modernization techniques to get to container-based or serverless features. [Another factor] is operations. How closely do you monitor and right-size the instances you need against the loads you have? How quickly do you adjust them when needed? And do you automate spinning resources up and down when they are underutilized? All of that operational efficiency is usually owned by operations teams.
[A third factor is] commercial terms, which cover much of the cost in the cloud. Do you take advantage of reserved or spot instances? Or things like volume discounts on contracts …?
So the challenge is not only that there are these potentially uncontrolled costs, but that there has not necessarily been one place to go. That is what this idea of FinOps really is, whether it’s one person in the organization or a group of people who are assigned that responsibility. Because after all, if you have observability into this environment, you can go back to those three areas and ask: what levers can we pull? What can we do to make sure we are cost-efficient with cloud resources, and how do we think about this as our solutions scale?
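The operational and commercial levers above reduce to simple arithmetic that a FinOps function would run routinely. A back-of-the-envelope sketch, using made-up instance counts, hourly rates, and discounts rather than any real cloud's prices:

```python
# Illustrative FinOps math for two of the cost levers discussed above:
# operations (right-sizing plus autoscaling) and commercial terms
# (a reservation discount). All numbers are invented for illustration.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, instances, utilization=1.0):
    # Utilization below 1.0 models autoscaling shutting instances off
    # for the hours they are not actually needed.
    return hourly_rate * instances * HOURS_PER_MONTH * utilization

# Baseline: 10 oversized on-demand instances running around the clock.
baseline = monthly_cost(0.40, 10)

# Operations lever: right-size to a cheaper instance type and autoscale
# down to roughly 60% of hours.
rightsized = monthly_cost(0.20, 10, utilization=0.6)

# Commercial lever: apply a hypothetical 40% reservation discount to the
# right-sized footprint.
reserved = rightsized * (1 - 0.40)
```

With these illustrative numbers the bill drops from roughly $2,900 a month to roughly $500, which is why the same workload can carry very different price tags depending on who is watching the levers.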
This article originally appeared in the CIO’s Center Stage newsletter. Subscribe today!