Every day, somewhere in the world, servers are reaching the end of their life. IT departments are confronted with the decision of whether to replace them or migrate to cloud services. If you are in this position, try to envision how the world will look in five or ten years to see how the decisions you are making today will fit in. Anything that involves a capital expense, such as real estate or IT infrastructure, has long-term implications. The equipment you buy today will probably still be in service five years from now and, for better or worse, you will probably be stuck with it.
About the author
David Friend is the CEO and founder of Wasabi Technologies.
Short-term thinking about long-term problems usually ends badly. Instead of getting ahead of inevitable changes, many organizations wait until the pain becomes unbearable, at which point it may be too late. For example, if the world is inevitably moving to electric vehicles, do you really want to be investing in more internal combustion engines? If the world is moving from working in an office to working at home, do you really want to be investing in on-premises infrastructure? If you think about how the world will likely look five or ten years from now, decisions like whether to replace existing infrastructure or migrate to the cloud become much simpler.
I live in New England and the landscape here is dotted with hundreds of brick factory buildings from the late 19th and early 20th centuries. Nearly all of them at one time had their own electric generating plants out back to power the motors in the factories. Today, almost nobody would think of building their own electric plants. I believe the same will be true of data center infrastructure. In most cases, it will be best to leave computer hardware, networking, and data storage to companies that specialize in it and do it more efficiently. Unless your business is selling infrastructure as a service, you probably should be out of that business.
The migration to the cloud is in full swing, and for good reasons, as I will explain.
Focus on core business
As a business leader, I see many companies that spend a lot of time and energy on activities that have nothing to do with the products that they sell. Before the advent of the cloud, organizations had no choice but to run an in-house IT infrastructure. Now, in many cases, these IT organizations and the infrastructures that they built have become institutionalized and resistant to change.
The use of data is core to nearly every business today. If you are in the pharma business, drug discovery is all about data. If you’re in the logistics business, it’s your data and software that tell your drivers where to go and what products to pick from the shelves. If you’re a department store, it’s your data that tells you what to order and when. What does any of that have to do with racking and stacking servers, buying ever-more storage, and worrying about cooling and electricity? Nothing. There’s little or no point to it, it’s a distraction, and ten years from now few organizations will be worrying about such things.
Focus your intellectual energies on making and selling whatever products your company sells and offload the rest.
Capital versus operational expense
On-premises servers are a capital expense (or CapEx). When you buy a server, the tax authorities will require that you depreciate it over time rather than take the purchase price as a current business expense. This means that the money you shell out for the servers cannot be used to offset the current year’s profits, and hence lower your tax bill. The cloud has upended this model by turning infrastructure into an operational expense (or OpEx) that can offset income.
The fact that cloud storage is an OpEx rather than a CapEx is at the heart of its appeal for many businesses, and nearly every company has separate budgets for OpEx and CapEx. Money that is tied up in IT infrastructure cannot be deployed in building new factories, investing in new products, or expanding into new markets. In contrast, a cloud OpEx spend doesn’t tie up your organization’s precious cash and allows you to invest in things that matter most to the business.
Additionally, there are the accounting and tax headaches that come with on-prem hardware. Since a CapEx purchase will be used over the course of many years, it must be amortized or depreciated over its lifetime on your balance sheet. In contrast, OpEx spending is fully tax deductible and subtracted from revenue when calculating your profit and loss. The treatment of CapEx on the company’s balance sheet can be complicated, requiring a great deal of accounting expense, and can even impact a company’s creditworthiness and borrowing power.
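The difference in first-year tax treatment can be sketched with some simple arithmetic. The figures below are entirely hypothetical (a $100,000 spend, straight-line depreciation over five years); actual depreciation schedules depend on jurisdiction and accounting policy.

```python
# Hypothetical illustration of first-year deductible amounts for the same
# spend treated as CapEx (straight-line depreciation) versus OpEx.
# All numbers are illustrative, not tax advice.

def capex_first_year_deduction(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: only one year's share offsets income."""
    return cost / useful_life_years

def opex_first_year_deduction(cost: float) -> float:
    """Operational expense: the full amount offsets current-year income."""
    return cost

SPEND = 100_000.0  # hypothetical infrastructure spend

capex = capex_first_year_deduction(SPEND, useful_life_years=5)
opex = opex_first_year_deduction(SPEND)

print(f"CapEx first-year deduction: ${capex:,.0f}")  # $20,000
print(f"OpEx first-year deduction:  ${opex:,.0f}")   # $100,000
```

Under these assumptions, only a fifth of the server purchase offsets the current year’s profits, while an equivalent cloud subscription would be deductible in full.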
When figuring the real cost of on-prem infrastructure, many people ignore the fact that five years from now, most of it will have to be replaced. If your organization is storing a lot of data, migrating that data to new equipment can be a huge job taking many months of effort and is fraught with risk. My point is that it’s easy to make a decision today that will come back to bite you five years from now. The problem is complicated by the fact that you will probably be adding capacity every year, so there will never be a way to make a clean cut-over to the cloud. These follow-on investments, combined with the risk of obsolescence during a server’s lifetime and the need to replace the server at a future date, all compound the CapEx burden.
Flexibility and remote working
Concerns regarding flexible access to workplace servers have become particularly prevalent in 2020, as many offices have been forced to migrate en masse to remote working patterns. On-prem servers are often more difficult to configure for flexible access, owing to networking limitations and security concerns. As a result, many on-prem solutions only allow users to access data and services from the local network – that is, at their workplace or via a complicated and often expensive VPN.
Cloud vendors are in the business of remote access. Their networks, security, and hardware infrastructure are designed for remote access from the ground up. Cloud service providers spend an enormous amount of money and time designing networks that can be accessed everywhere. It makes little sense for each individual organization to try to do this themselves. If people are going to work remotely, doesn’t it make more sense to start with a technology platform that was designed for remote work in the first place?
Moving workloads to the edge
When you go to your local convenience store and look at those horrible little sandwiches that come in plastic boxes, you can bet that they are made in a centralized factory somewhere. If you don’t like your sandwich the way it comes from the factory, tough luck. If you want a sandwich that is freshly made just the way you want it, you should go to the local deli where the chef is right behind the counter. This represents the difference between traditional centralized IT resources and distributed resources at the edge, close to the user.
If I go to my local hospital for an x-ray, the image will be generated locally and the doctor who uses that image will likely be local as well. Does it make sense to ship that data halfway across the country for processing or storage? Cloud service providers are forever opening new locations precisely so that processing and online storage can be located close to the users. Individual organizations, even large multinational enterprises, can’t afford to be everywhere, so everything gets shipped back to a central facility. In the long run, this won’t work.
Again, let’s use the electric power grid as an analogy. We started with every plant making its own electricity. Then the grid developed, and people got out of the business of making their own electricity. Now power generation is again becoming highly distributed, even down to the solar panels on the roof of my house or the windmills that dot the landscape. The difference is that all these sources and consumers of electricity are networked. The solar panels on my roof may be helping to power somebody else’s hair dryer down the street.
Since the dawn of the on-prem data center, the biggest disruption has been the development of the internet. Things can now be distributed and interconnected at the same time. My x-ray can be stored and used locally, but if a consulting physician halfway around the world needs to see it, that’s still possible.
If moving compute and data storage resources to the edge is inevitable, how will a decision to replace the servers in your centralized data center fit into that future? Will this be one of those decisions that you or your successor will regret ten years from now?
Peaks and valleys
The law of large numbers can be a pretty compelling way to gain efficiencies. If I decided for some reason to simultaneously turn on my toaster, hair dryer, clothes dryer, electric stove, and hot tub, I would blow the circuit breaker for my house. But the power company would hardly notice. If you look at the usage patterns for IT infrastructure, you’ll see that there are dramatic peaks and valleys. Unfortunately, you have to provision for the peaks. By necessity, either capacity utilization will be low or there will be times when your users are getting poor performance.
Public cloud providers like Amazon, Microsoft, or my company, Wasabi, have thousands of customers and not all of them are experiencing peak loads at the same time. Therefore, the overall system can run at a higher average capacity utilization. This spreading of the load across a large number of servers is one reason that the cloud is significantly less expensive than on-prem storage or compute.
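The pooling effect can be demonstrated with a toy simulation. The numbers below are invented for illustration: each customer carries a small steady load plus one large spike at a random hour, and we compare the capacity needed if everyone provisions for their own peak against the capacity a shared provider needs for the combined peak.

```python
# Toy simulation of peak-load pooling. All loads are in arbitrary units
# and the scenario is hypothetical.
import random

random.seed(42)

CUSTOMERS = 1000
HOURS = 24

# Each customer: a steady baseline plus a usage spike at one random hour.
loads = []
for _ in range(CUSTOMERS):
    baseline = random.uniform(1, 3)
    spike_hour = random.randrange(HOURS)
    hourly = [baseline + (20 if h == spike_hour else 0) for h in range(HOURS)]
    loads.append(hourly)

# Provisioning in isolation: every customer sizes for their own peak.
isolated_capacity = sum(max(hourly) for hourly in loads)

# Provisioning in aggregate: a shared provider sizes for the combined peak.
aggregate_by_hour = [sum(c[h] for c in loads) for h in range(HOURS)]
shared_capacity = max(aggregate_by_hour)

print(f"Sum of individual peaks: {isolated_capacity:,.0f}")
print(f"Shared peak:             {shared_capacity:,.0f}")
print(f"Capacity saved by pooling: {1 - shared_capacity / isolated_capacity:.0%}")
```

Because the spikes rarely coincide, the shared peak comes out far below the sum of the individual peaks, which is exactly why a provider serving thousands of customers can run at much higher average utilization.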
Moving to the cloud is not without risk, both technical and financial. Stories abound about customers who receive their first cloud storage bills and are shocked to find that all the little “extras” have doubled or tripled their storage costs. I am sure the same goes for compute. The problem stems from the fact that on-prem storage is not metered the way public clouds usually meter usage. For example, while you may know precisely how much data you are storing (just look at “Properties” in a file explorer), most people have no way of knowing how often they touch that data.
The hyperscalers all add dozens of microcharges to your storage bill: egress fees (for taking your data out over the internet), charges for API calls such as PUT, GET, LIST, and DELETE operations, and other access charges. Some of the vendors challenging the hyperscalers have taken a different approach. Wasabi, for example, charges only for storage – there are no egress or API charges. Packet, a subsidiary of Equinix, rents you dedicated compute resources, and you pay the same whether you are using 1% or 100% of the compute capacity. My own opinion is that the “pay by the drink” model used by Amazon and the others is too complicated and its costs are too hard to predict.
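To see how the extras can dominate a bill, here is a rough back-of-the-envelope sketch. Every rate and usage figure below is hypothetical (chosen only to resemble the shape of metered pricing), not any vendor’s actual price list.

```python
# Hypothetical monthly bill comparison between a metered "pay by the drink"
# model and a flat storage-only model. All rates and usage figures are
# illustrative assumptions, not real vendor pricing.

GB_PER_TB = 1_000  # decimal terabytes, as storage is commonly billed

stored_tb = 100
egress_tb = 30                 # data read back out over the internet
api_requests = 50_000_000      # GET/PUT/LIST calls during the month

# Metered model: storage plus per-GB egress plus per-request charges.
metered_bill = (
    stored_tb * GB_PER_TB * 0.023            # $/GB-month storage (hypothetical)
    + egress_tb * GB_PER_TB * 0.09           # $/GB egress (hypothetical)
    + api_requests / 1_000 * 0.0004          # $ per 1,000 requests (hypothetical)
)

# Flat model: a single storage rate, no egress or API charges.
flat_bill = stored_tb * GB_PER_TB * 0.0069   # $/GB-month (hypothetical)

print(f"Metered bill: ${metered_bill:,.0f}")
print(f"Flat bill:    ${flat_bill:,.0f}")
```

With these made-up numbers, egress alone exceeds the base storage charge, which is how a customer who only budgeted for stored bytes ends up with a bill several times larger than expected.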
The skill set problem
The people who know how to run a non-stop IT organization are getting harder to find and more expensive as the public cloud competes aggressively for these highly skilled people. For an IT professional, it may be more appealing to work for a company where IT is the business, rather than work for, say, a manufacturing company where IT is simply in a support role.
As more and more of the world’s IT infrastructure moves to the cloud, individual organizations are going to find it increasingly difficult and costly to hire and retain the right people. If your organization loses a key person who was responsible for the architecture of your on-premises IT infrastructure, you may end up with systems that nobody really understands.
The security myth
A common consideration when choosing between cloud and on-prem infrastructure is security. Many people intuitively think that the public cloud is less secure; studies have shown just the opposite. If an online retailer, for example, has a security breach and inadvertently exposes customer information, it is certainly embarrassing and may cost them some fines, but basically their business of selling stuff goes on. If a public cloud vendor has a serious breach, it could mean the end of the company.
So, cloud vendors place a very high priority on security and generally employ far more security experts than you would likely find in any one individual company. Even though you may have your own firewalls and intrusion detection software, most corporate networks are far more vulnerable to attack than are the leading cloud providers, and the statistics bear this out.
All trustworthy cloud providers benefit strongly from economies of scale in their security: their data centers are under 24/7 surveillance, staffed at all hours by operations teams and security professionals, and continually updated with state-of-the-art cybersecurity technology.
This means that, when properly configured, cloud environments tend to provide security that surpasses an on-premises solution at an equivalent budget.
Is there a long-term role for on-premises hardware?
Yes. There are always going to be edge cases, and the most obvious ones exist because of extreme performance requirements or compliance issues. For example, video editing requires extremely fast storage, very high-powered GPUs, and practically zero latency between the two. Image recognition may also require dedicated hardware since the volumes of data are so large. In the end, it will be a hybrid world.
The final answer
My view is that the world is on its way to outsourcing most of its infrastructure to the cloud. Today the public cloud is dominated by the three hyperscalers, but the market will fragment over time, with many more vendors competing in various segments of the market. The proprietary APIs promoted by the hyperscalers, such as Amazon’s S3 storage API, will become de facto standards for new entrants. IT organizations will no longer be worrying about replacing or upgrading on-prem hardware; instead, they will turn their attention to figuring out what combination of public cloud vendors can do a particular job with the best performance and lowest cost.