Companies are realizing that the cost of cloud computing could outweigh the benefits, leading many to migrate to localized servers, finds Satyen K. Bordoloi.


Faced with a financial crunch, Elon Musk did what only he could: pulled the plug on a backup cloud server of Twitter in Sacramento to save $100 million. Since then, I’ve noticed my Twitter getting glitchy, but for Musk, the savings justified it. Something not entirely different is happening across the world when it comes to cloud computing.

As per this BBC report, US-based 37signals, which has millions of users on its project management and productivity software, moved its data storage and computing from a third-party cloud service provider in-house. Despite the costs of buying hardware to host on location and hiring people to run and maintain it, 37signals saved $1 million. As the report illustrates, they are not the only company doing so.

Not scared of getting his hands dirty, Elon Musk ripped out a backup data centre of Twitter to save $100 million. (Image Credit: Wikipedia)

FLOATING CLOUDS THROUGH THE AGES:

While the term cloud computing is new, its origin can be traced back to the mainframe era of the 1960s, when large corporations began sharing computing power. The subsequent decades saw the evolution of time-sharing systems, data centres, and managed services, laying the groundwork for today’s cloud-based solutions.

The advent of the internet in the late 1990s accelerated this trend. Companies like Amazon pioneered Infrastructure as a Service (IaaS) – IT infrastructure like computing, storage, and network resources on a pay-as-you-go basis over the Internet – in the early 2000s. Platform as a Service (PaaS) – a complete development and deployment environment in the cloud; and Software as a Service (SaaS) – application software hosted on the cloud and used over the internet via a web browser or mobile – followed.

Amazon, Microsoft, and Google became some of the biggest players in these spheres. The increasing complexity of computing systems, networks, and technologies means that the ‘cloud’ can now cater to every need of any company, allowing for scalability, flexibility, and cost-effectiveness, which has led to widespread adoption. But there’s a spanner in the works for PaaS, IaaS, and SaaS.

While cloud computing on public data farms has its advantages, a clear SWOT analysis is needed to see if cloud computing is actually proving to be expensive to your company. (Image Credit: Wikipedia)

HEADWINDS FOR THE CLOUD:

Using cloud computing resources is like renting the computing power and expertise you need, located on the service provider’s premises rather than your own. Why buy a car when you have the convenience of Uber, right? This works only if renting is cheaper than buying. If you spend a couple of hundred rupees a day on travel, Uber makes sense. But if you’re spending, say, 2,000 or 3,000 a day travelling around, the cost of a year’s rides could buy you a car, and you would save daily thereafter.

That is what companies like 37signals are realizing. Yes, you have the advantage of faster internet speeds, sitting back, relaxing, and focusing on your business while someone else handles your computing woes. But it also means that your business is affected when there is an outage on the service provider’s side. And when a company’s computing and software requirements aren’t large, it could be like bringing a nuclear bomb to hunt sparrows.

Then there is the lock-in with the service provider. If you are a company with multiple applications running on AWS and you realize a new application would work better on Microsoft’s Azure, vendor lock-in agreements may prevent the move. Services could also encounter unexpected slowdowns. All public cloud services are shared resources: when they are under-used, you get more bandwidth, but when the system is strained, whom does the provider give precedence to? And with proprietary software and processes, keeping them on the cloud could expose them to prying eyes.

With time, cloud pricing has become more transparent, and today it is possible to break cloud computing costs down to their smallest components. Even a simple back-of-the-envelope calculation can tell a company whether bringing data and computing back home is feasible.
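That back-of-the-envelope calculation can be sketched in a few lines. All the figures below are illustrative assumptions, not real vendor pricing, and the helper name is made up for this example:

```python
# Rough cloud-vs-on-premises break-even sketch.
# Every number here is an illustrative assumption, not a real quote.

def breakeven_months(monthly_cloud_cost: float,
                     hardware_capex: float,
                     monthly_onprem_opex: float) -> float:
    """Months until owned hardware pays for itself versus renting the cloud."""
    monthly_saving = monthly_cloud_cost - monthly_onprem_opex
    if monthly_saving <= 0:
        return float("inf")  # cloud is cheaper; there is no break-even point
    return hardware_capex / monthly_saving

# Hypothetical company: $60k/month cloud bill, $500k of servers up front,
# $25k/month for power, bandwidth, and staff to run them.
months = breakeven_months(60_000, 500_000, 25_000)
print(f"Break-even after {months:.1f} months")  # ~14.3 months
```

If the break-even horizon is shorter than the hardware’s useful life, repatriation starts to look attractive; if it is longer, or infinite, the cloud wins on cost.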

Thus, security concerns, unexpected costs, performance issues, compatibility problems, and service downtime have led to ‘repatriation.’

Cloud repatriation could be a messy affair but it could save a company millions. (Image Credit: Lexica Art)

CLOUD REPATRIATION – BRINGING THE DATA BACK HOME:

The process of setting up cloud computing is called cloud migration; the reverse, i.e., bringing your data home to an on-premises infrastructure or a private cloud, much like what 37signals is doing, is aptly called cloud repatriation.

Cloud repatriators move their data, applications, software, and workloads from a shared, public cloud environment back to an on-premises or local data centre. The benefits have been alluded to: cost reduction; enhanced security and control, as organizations gain greater oversight of their data and infrastructure; performance improvement; better compliance, with companies able to meet specific data-residency or regulatory requirements; and disaster recovery, as on-premises infrastructure can serve as a backup for recovery after, say, a ransomware attack.

While this has its merits, repatriation is not a one-size-fits-all solution and should be approached carefully.

Though cloud repatriation can reduce recurring costs, the initial expense of hardware, software, and the skilled personnel to run them can be substantial. Secondly, migrating applications and data from a cloud environment to an on-premises one can be technically challenging. Third is the expensive and time-consuming process of moving large volumes of data, which is vulnerable in transit. Then there is the manpower problem: where do you find all the people required to run everything in-house? And let’s not forget that relying only on on-premises architecture could create its own form of vendor lock-in.

Image Credit: Vecteezy

CLOUD COMPUTING IN THE AGE OF AI:

From PaaS, IaaS, and SaaS to now AIaaS – AI as a Service – the marriage of cloud computing with AI is driving unprecedented innovations. Cloud platforms provide AI companies with the ideal infrastructure for AI workloads, offering the scalability, computing power, and storage required for training and deploying complex models. Cloud repatriation among AI companies is rare, usually undertaken only by those who find patterns in their workloads that don’t use cloud resources effectively, so that moving off the cloud would cut costs, or by those who want to protect their data or software pipelines.

However, a new hybrid model of operation seems to be gaining traction.

The rapid growth of services in the internet age has been made possible by cloud computing.
(Image Credit: Lexica Art)

HYBRID STRATEGY:

Some companies adopt a hybrid approach, combining cloud and on-premises resources to optimize performance and cost. A hybrid cloud strategy combines the benefits of both public and private clouds, offering companies a flexible and adaptable approach. It allows organizations to balance performance, cost, security, and control.

Under a hybrid strategy, workloads can be strategically distributed across both private and public clouds based on factors like performance, cost, security, and regulatory compliance. While sensitive data and processes can reside on-premises, some other workloads can be put into a public cloud.
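That distribution logic can be illustrated with a minimal sketch. The workload model and placement rules below are simplified assumptions invented for this example; real placement engines weigh many more factors:

```python
# Minimal sketch of a hybrid-cloud placement policy. The Workload
# fields and the routing rules are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive: bool   # handles regulated or proprietary data
    bursty: bool      # demand spikes that favour elastic capacity

def place(w: Workload) -> str:
    """Route a workload to the private or public side of a hybrid cloud."""
    if w.sensitive:
        return "private"   # keep regulated data on-premises
    if w.bursty:
        return "public"    # rent elasticity only when demand spikes
    return "private"       # steady workloads amortize owned hardware

jobs = [Workload("payroll", sensitive=True, bursty=False),
        Workload("web-frontend", sensitive=False, bursty=True),
        Workload("nightly-batch", sensitive=False, bursty=False)]
print({j.name: place(j) for j in jobs})
# {'payroll': 'private', 'web-frontend': 'public', 'nightly-batch': 'private'}
```

The point of the sketch is the shape of the decision, not the specific rules: each organization’s policy will encode its own compliance, cost, and performance constraints.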

To return to the Uber analogy, a hybrid cloud strategy is like having a car in the garage yet choosing to take an Uber when it makes sense.

However, successfully implementing a hybrid cloud strategy is not easy. It requires accurate workload assessment, seamless infrastructure integration, robust security and compliance measures, and a constant eye on cost management. It demands not just careful one-time planning and execution, but continuous awareness of the workload dynamics between your cloud and on-premises architecture.

A couple of months after unplugging the backup Twitter servers in Sacramento himself, Elon Musk admitted it was a mistake. Twitter remained glitchy for months; in many parts, it still is. What he managed, however, was to show that cloud repatriation, even a knee-jerk one, is possible. One need not do it like Musk did, but if it saves costs, do it one must.


Satyen is an award-winning scriptwriter and journalist based in Mumbai. He loves to let his pen roam the intersection of artificial intelligence, consciousness, and quantum mechanics. His written words have appeared in many Indian and foreign publications.

© Copyright Sify Technologies Ltd, 1998-2022. All rights reserved