“No vendor lock-in!” is a popular sales slogan, but it turns out it’s not why customers buy. I’ve written about this before and won’t belabour the point here.
The slogan is not only somewhat inaccurate (the minute you commit to any enterprise software, you’re locked in, because the cost of changing course is never zero); worse, it overlooks enterprise priorities.
Yes, it’s great to have the option to change course more easily if a given technology choice doesn’t work out. But enterprises have embraced open source and cloud not because of what they’re trying to avoid, but because of what they’re trying to get: convenience, performance, flexibility.
Don’t believe me? Just look at serverless adoption.
Serverless shackles all the way down
If you’re a particular brand of open source warrior, serverless is your kryptonite. Years ago, then-CEO of CoreOS Alex Polvi called serverless “one of the worst forms of proprietary lock-in we’ve ever seen in the history of humanity.” OK, then.
The reason, he went on, is because with serverless the “code [is] tied not just to hardware ... but to a data centre,” meaning “you can’t even get the hardware yourself. And that hardware is now custom fabbed for the cloud providers with dark fiber that runs all around the world, just for them.”
In other words, the code in serverless is so inextricably linked to a cloud provider’s technology and operations, that the freedom to port the software elsewhere is effectively meaningless.
Sure, there are ways to minimise “lock-in” with serverless. Wisen Tanasa of Thoughtworks offers a few suggestions, like choosing a cross-vendor programming language and picking good architecture patterns to minimise the costs of migrating unit tests.
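Those suggestions boil down to isolating business logic from provider specifics, in the spirit of the ports-and-adapters (hexagonal) pattern. A minimal sketch of what that separation might look like in Python, assuming an AWS Lambda deployment (the function names and the pricing logic here are invented for illustration, not taken from Tanasa’s article):

```python
def quote_premium(age: int, coverage: int) -> float:
    """Pure business logic: no cloud SDK imports, trivially portable.
    (Illustrative insurance-pricing rule, invented for this example.)"""
    base_rate = 0.05 if age < 40 else 0.08
    return round(coverage * base_rate, 2)


def lambda_handler(event, context):
    """Thin AWS-specific adapter: translates the provider's event shape
    into plain arguments for the portable core. Migrating to another
    cloud means rewriting only this layer, not the logic above."""
    body = event["body"]
    premium = quote_premium(body["age"], body["coverage"])
    return {"statusCode": 200, "body": {"premium": premium}}
```

Porting to Azure Functions or Google Cloud Functions would then mean swapping the thin adapter for one matching that provider’s invocation signature, while `quote_premium` and its tests move over unchanged.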
But as important as it is to be looking for the exit when choosing a new platform or technology, it’s even more important to consider why you’re choosing it in the first place. This is where serverless shines.
Leading Edge Forum researcher Simon Wardley shares an example of a global insurance company that adopted serverless: “During the time it took for the vendors to come back with quotes to the RFP, we had the system built in serverless, in production, and reducing cost per transaction from $20 to eventually 8 cents,” said one of their executives.
“You can have a serverless team focused and taking on reducing costs of transactions, speed of claims, and the tasks at hand, or you can have a team focused on the intricacies of container distribution. We’re an insurance company. We care about customer outcomes, not infrastructure clusters.”
Outcomes, not technology. As companies think about what they want to accomplish, concerns about serverless lock-in recede and concerns about losing out to faster-moving competitors take precedence.
Serverless for you and for you and for you…
Datadog recently released data showing that companies are getting smarter about serverless: adoption of Amazon Web Services’ (AWS) function-as-a-service (FaaS) offering, Lambda, was up 3.5 times in early 2021 versus two years earlier. Nor is it just AWS.
As the report notes, during the past 12 months, the share of companies running Microsoft Azure Functions rose to 36 per cent from 20 per cent. On Google Cloud, nearly 25 per cent of organisations now use Google’s Cloud Functions. Fast adoption, much?
Though serverless has been enabled by the clouds, serverless functions aren’t simply a big cloud game. As Vercel CEO (and Next.js founder) Guillermo Rauch details in the Datadog report, “Two years ago, Next.js introduced first-class support for serverless functions, which helps power dynamic server-side rendering (SSR) and API routes.
Since then, we’ve seen incredible growth in serverless adoption among Vercel users, with invocations going from 262 million a month to 7.4 billion a month, a 28x increase.”
From such examples, and many others (including ever shorter function invocation times, which suggest enterprises are becoming more proficient with functions), it’s clear that serverless computing has taken off. Vendors will continue to press the “no lock-in” marketing button, but customers don’t seem moved. They may care about lock-in, but they care much more about accelerating their time to customer value.
In enterprise computing, as in life, there are always trade-offs. The price of a perfectly lock-in-free existence is lowest-common-denominator code that runs generically across hardware and cloud platforms. That may be a safe existence. It turns out, however, not to be a particularly productive one.