Until relatively recently, physical servers were the backbone of every data centre. While on-premise servers have by no means disappeared, more and more infrastructure components are moving into the cloud every year, and hype around serverless computing is growing. But what is it?
Serverless computing is a form of software architecture whereby the responsibility for time and resource-intensive infrastructure management processes such as patching, securing and scaling servers is outsourced to cloud providers. This means resources spent on building, scaling and securing can now be more profitably invested in the applications themselves. Though the name suggests otherwise, serverless computing cannot do without servers entirely – physical infrastructure is needed, but only on the cloud provider side.
The cloud provider, whether it be AWS, Google, Azure or another, is also responsible for the dynamic allocation of resources to individual applications and tasks. This brings several advantages.
Greater ability to innovate
Container orchestration and mesh management using serverless methods fundamentally change the daily business of internal infrastructure teams, redefining their objectives. These teams become the beneficiaries of an infrastructure that is optimally tuned for them, and they no longer have to build, scale and secure it themselves.
With the responsibility for infrastructure management in the hands of cloud providers, serverless computing frees up organisations’ time and money. They can channel resources into ensuring their applications function better, creating the opportunity to innovate faster and more easily.
Serverless also promises considerable advantages in terms of scaling. Because little configuration preparation is required, it is far more agile than conventional scaling based on cloud servers. AWS Lambda, for example, has proven itself in scaling any number of short-running tasks – whether there are hundreds of events or several billion.
This can be explained with the hypothetical use case of a broadcaster’s sports app providing football fans with the latest match scores and real-time statistics on top players, such as the number of goals scored to date in their career. Assume this service provider originally developed its applications on-premise and, given its monolithic server structure, now has a classic scaling dilemma. As part of its modernisation efforts, the provider could rewrite its code and extend it with Lambda. For every single move on the playing field – a short, event-oriented task – a data object is created, forwarded to a Lambda function and processed there. The app developers can focus on building the best app possible without also having to manage servers.
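A minimal sketch of such an event-driven handler might look as follows. This is illustrative only – the event fields and aggregation logic are assumptions for the hypothetical sports app, not part of any real broadcaster’s system:

```python
import json

def lambda_handler(event, context):
    """Process a single match event (e.g. a goal) pushed by the app backend.

    Each short-lived invocation handles exactly one data object; AWS scales
    the number of concurrent invocations automatically, whether there are
    hundreds of events or billions.
    """
    # API Gateway wraps the payload in a "body" field; direct invocations don't.
    match_event = json.loads(event["body"]) if "body" in event else event

    # Hypothetical aggregation: update a player's career goal tally.
    player = match_event.get("player", "unknown")
    career_goals = match_event.get("career_goals", 0)
    if match_event.get("type") == "goal":
        career_goals += 1

    return {
        "statusCode": 200,
        "body": json.dumps({"player": player, "career_goals": career_goals}),
    }
```

Because each invocation is stateless and handles one event, the function needs no capacity planning: a quiet mid-week fixture and a cup final generate the same code path, just different invocation counts.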
More efficient access validation
Serverless is invaluable for users needing to process high volumes of small tasks. A classic demonstration is granting customers access to a specific resource. Take video games on smartphones: to display high scores to users, the game needs access to data on the backend servers, which involves requesting a number of different short-term tasks in the app’s backend. In the specific case of Pokémon Go, the developers couldn’t have anticipated just how successful the game would become. Nevertheless, they were able to scale and handle high data loads very well because they had designed the backend to scale quickly from the beginning. Serverless technologies like AWS Lambda make this possible.
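The access-validation pattern described above can be sketched as one small handler per request. The in-memory token table and score store below are simplified stand-ins for a real identity service and database – they are assumptions for illustration, not anyone’s actual backend:

```python
import json

# Hypothetical stand-ins for a real token service and score store.
VALID_TOKENS = {"tok-123": "player-1"}
HIGH_SCORES = {"player-1": 9001}

def get_high_score(event, context):
    """Validate the caller's token, then return that player's high score.

    Each invocation is one short-lived task; thousands can run in parallel
    without any server capacity planning on the developer's side.
    """
    token = event.get("headers", {}).get("Authorization")
    player = VALID_TOKENS.get(token)
    if player is None:
        # Reject unknown or missing tokens before touching any data.
        return {"statusCode": 401, "body": json.dumps({"error": "invalid token"})}

    return {
        "statusCode": 200,
        "body": json.dumps({"player": player,
                            "high_score": HIGH_SCORES.get(player, 0)}),
    }
```

The key property is that the validation and lookup together form one tiny, stateless unit of work – exactly the shape of task that serverless platforms scale from hundreds to millions of invocations without redesign.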
Serverless computing is still in its infancy, however, and there are some teething problems and disadvantages to consider before investing in this approach.
Increased system structure complexity
As much as it simplifies infrastructure issues, serverless computing means more complexity elsewhere. Its suitability for many short-lived tasks automatically leads to a significantly higher number of them. Each task may be greatly simplified in itself, but the increased volume of transitions and interfaces makes the overall system structure more complex to monitor. Examples include increased configuration effort, additional deployment scripts and tooling artefacts. As a result, developing the software becomes more efficient, but continuously checking and monitoring the applications becomes more complicated.
Potential impacts on performance
Another drawback is that the performance of individual requests is much less predictable with serverless, making precise performance forecasts more difficult. One request may take three milliseconds and another one hundred times that. However, as an application scales, its performance becomes more predictable: the more data that is cached and the more often it is accessed, the better the performance.
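The caching effect can be illustrated with a toy memoised lookup. The function, its latency and the data are purely illustrative assumptions – the point is only that repeat requests skip the slow path, which is what makes a warm, busy system more predictable:

```python
import time
from functools import lru_cache

backend_hits = []  # records how often the slow "backend" is actually called

@lru_cache(maxsize=None)
def fetch_player_stats(player_id: str) -> dict:
    """Simulate a slow backend read; repeated calls are served from cache."""
    backend_hits.append(player_id)
    time.sleep(0.01)  # stand-in for network/database latency on a cold request
    return {"player": player_id, "goals": 42}

# A cold request pays the full latency; warm requests return near-instantly.
fetch_player_stats("p1")  # cold: hits the "backend"
fetch_player_stats("p1")  # warm: cached, no backend hit
```

At low traffic most requests are cold and latency swings widely; at scale most requests hit warm caches, which is why performance forecasts firm up as usage grows.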
Variable upfront costs
Upfront costs can vary considerably, with investments in serverless computing potentially reaching anywhere from three- to seven-figure sums. Investment can prove extremely cost-effective in the long term though, so when making the case for the technology, organisations should evaluate each provider individually to maximise ROI. Questions to ask include:
– To what extent can configurations, guidelines, technologies etc. be modified as the company continues to grow?
– Are integrations with other providers possible?
– Can new, innovative services be easily added?
From a greater ability to innovate to instant scalability, serverless computing offers huge advantages. There are also downsides to consider fully, particularly as serverless is still in its infancy. Spending time researching each vendor option and undertaking a thorough cost-benefit analysis before investing is therefore highly recommended to make the most of this groundbreaking technology.