
Serverless Best Practices in software development

Serverless function-based compute platforms like AWS Lambda are built for scale. They automatically provision computing resources as needed and are designed to handle tens of thousands of requests per second. This makes them a great fit for modern web applications and APIs.

But “serverless” doesn’t mean you don’t have to think about servers or architecture anymore, or that you can completely ignore best practices used in software development. It just means you don’t have to worry about infrastructure management and scaling, so your focus can shift to building new features and delivering value faster, which is the ultimate goal of serverless.

Here are some best practices we’ve learned at Serverless over the past few years while building serverless applications on AWS Lambda.

What is serverless?

Serverless computing is a cloud-computing execution model in which the cloud provider runs the servers and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity, making it a form of utility computing. The model was popularized by Amazon Web Services (AWS) with the launch of AWS Lambda in 2014.

Serverless best practices

Start locally

This is the best practice for working with serverless code from day one. If your function runs correctly locally, it will likely deploy to AWS Lambda and run successfully in production. Start locally, building your function with the same language runtime and the same SDK you would use on Lambda and AWS. Lambda’s role is simply to execute your code on demand, so focus on writing good code first and only then explore how to deploy it on Lambda.
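For instance, a handler written as a plain function can be exercised locally with a hand-built event before it is ever deployed. The event shape below mimics an API Gateway proxy request; the field names and greeting logic are purely illustrative:

```python
# handler.py — a plain Lambda-style handler that runs identically
# locally and on AWS Lambda.
import json

def handler(event, context):
    """Echo a greeting; no Lambda-specific machinery required."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

if __name__ == "__main__":
    # Invoke locally with a hand-built event before ever deploying.
    print(handler({"queryStringParameters": {"name": "local"}}, None))
```

Because the handler is just a function, it can be unit-tested with ordinary tooling long before Lambda enters the picture.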

Use 1 function per route

This will help in debugging and code maintenance. If you want to change the execution path of your API, you can do it by changing a single file instead of making changes to multiple files and routes.
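As a sketch of this layout, each route gets its own dedicated handler function (the routes and field names below are hypothetical), so changing one route never touches the others:

```python
import json

def get_user(event, context):
    """Handler for GET /users/{id} — one dedicated function for this route."""
    user_id = event["pathParameters"]["id"]
    return {"statusCode": 200, "body": json.dumps({"id": user_id})}

def create_user(event, context):
    """Handler for POST /users — a separate function, debugged and
    deployed independently of get_user."""
    payload = json.loads(event["body"])
    return {"statusCode": 201, "body": json.dumps({"created": payload["name"]})}
```

In a typical deployment, each function would live in its own file and be wired to its route in the service configuration, keeping every execution path isolated.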

Use error handling middleware

Anything can go wrong in your API request and you should be prepared for it. Your API might get a request that is not valid or an internal error can occur during the processing of the request. You should be able to handle these errors gracefully and inform the client about what happened and the possible next steps for them.
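One lightweight way to do this in Python is a decorator that wraps every handler, turning uncaught exceptions into clean, structured responses. This is a minimal sketch (the error mapping shown is an assumption, not a fixed convention):

```python
import functools
import json

def with_error_handling(func):
    """Middleware: convert uncaught errors into clean API responses."""
    @functools.wraps(func)
    def wrapper(event, context):
        try:
            return func(event, context)
        except KeyError as exc:
            # A missing field in the request is the client's problem: 400.
            return {"statusCode": 400,
                    "body": json.dumps({"error": f"missing field: {exc.args[0]}"})}
        except Exception:
            # Anything else is our problem: 500, with no internals leaked.
            return {"statusCode": 500,
                    "body": json.dumps({"error": "internal server error"})}
    return wrapper

@with_error_handling
def handler(event, context):
    return {"statusCode": 200, "body": json.dumps({"echo": event["message"]})}
```

The client always receives a well-formed response describing what happened, instead of a raw stack trace or an opaque gateway error.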

Log everything

Logging helps in debugging and in monitoring application performance. It also reveals API usage patterns, which can inform user segmentation, product improvements, and more.
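On Lambda, anything written through the standard logging module ends up in CloudWatch Logs, so a few lines at the top of each handler go a long way. A minimal sketch:

```python
import json
import logging

# Lambda's runtime forwards standard logging output to CloudWatch Logs.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Log the incoming request so usage patterns are captured.
    logger.info("request received: %s", json.dumps(event))
    result = {"statusCode": 200, "body": json.dumps({"ok": True})}
    # Log the outcome so failures and latencies can be traced later.
    logger.info("responding with status %s", result["statusCode"])
    return result
```

Structured, consistent log lines like these are what make the usage analysis mentioned above possible later on.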

Use AWS Lambda Layers

You can use AWS Lambda layers for common package dependencies across all functions so that you don’t have to include them in every function separately. This also reduces deployment package size which in turn decreases deployment time. A Lambda layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package.
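As a rough sketch, packaging shared code into a Python layer looks like this. Lambda unpacks a Python layer so that its `python/` folder lands on the import path (`/opt/python`) at runtime; the layer and module names below are illustrative, and the final publish step needs AWS credentials, so it is shown commented out:

```shell
# Build a layer directory with the conventional "python/" prefix.
mkdir -p layer/python

# Shared libraries would be installed into the layer instead of into
# every function, e.g.:
#   pip install requests -t layer/python
# Here we use a stand-in module so the sketch runs without network access:
echo "SHARED = True" > layer/python/shared_config.py

# Create the ZIP archive that gets published as the layer.
cd layer && python3 -m zipfile -c ../my-layer.zip python/ && cd ..

# Publish the layer (requires AWS credentials; shown for reference):
# aws lambda publish-layer-version --layer-name shared-deps \
#   --zip-file fileb://my-layer.zip --compatible-runtimes python3.12
```

Functions that attach this layer can then simply `import shared_config` without bundling it in their own deployment packages.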

Manage code, not configurations

The serverless programming model requires a different approach to configuration management. Rather than managing configurations across all your services, you should manage code. You can use Lambda layers to do this. Layers allow you to separate concerns and reuse code across all the functions. As a best practice, use Lambda layers to manage shared dependencies like libraries, frameworks, SDKs, or runtimes. This approach also has the benefit of reducing deployment package sizes (and thus deployment times) because only changes in your function code need to be packaged and deployed.

You can also implement configuration management through environment variables and configuration files such as .yml files that are included with your function code at deploy time.
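A minimal sketch of the environment-variable approach, assuming hypothetical `TABLE_NAME` and `LOG_LEVEL` variables set per deployment stage:

```python
import os

# Configuration is read from the environment at cold start, with safe
# defaults for local runs. The variable names here are hypothetical.
TABLE_NAME = os.environ.get("TABLE_NAME", "users-dev")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def describe_config():
    """Return the effective configuration for this deployment stage."""
    return {"table": TABLE_NAME, "log_level": LOG_LEVEL}
```

Each stage (dev, staging, production) sets different values at deploy time, and the function code itself never changes.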

As an anti-pattern, don’t put any configurable parameters in your function code.

Perform load testing

This is very important to ensure that the capacity of your application is in line with what you need, especially during high-volume periods. You can load test manually by having a group of people run through your application and perform different actions at the same time. A better approach, however, is to use a load testing tool that can simulate thousands of users sending requests to your application simultaneously. One such tool is Apache JMeter.
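For simple cases, even a short script can generate concurrent traffic and report latencies. The sketch below drives a local stub function so it runs anywhere; against a real deployment you would replace the stub with an HTTP request to your endpoint:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def target(_request_id):
    """Stand-in for one request; replace with a real HTTP call to load
    test an actual endpoint."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated request latency
    return time.perf_counter() - start

def run_load_test(num_requests=50, concurrency=10):
    """Fire num_requests calls with the given concurrency and summarize."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(target, range(num_requests)))
    return {
        "requests": len(latencies),
        "avg_ms": 1000 * sum(latencies) / len(latencies),
        "max_ms": 1000 * max(latencies),
    }
```

Watching how the latency summary changes as you raise the concurrency is a quick way to find where your capacity runs out before real traffic does.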

Serverless best security practices

Deploy API gateways for security

API gateways are a standard feature of modern software architecture, and they have several important functions, including the handling of authentication and authorisation. API gateways provide a single point of entry for a variety of services and allow you to hide direct access to other downstream services. This can make it easier to work with third-party APIs and also provide some added security. If you are using an API gateway to connect with other services, be sure to use HTTPS protocols throughout so that you don’t accidentally expose sensitive data at any point in the process.

In addition to providing simple API endpoints, API gateways can also handle more advanced features like analytics, rate limiting, and monitoring. As your application grows, these features can become increasingly important as they help you optimize your app for performance and scale.

Properly Handling Secrets

To secure your serverless applications, you must focus on managing the secrets that your functions use. The first step is to avoid hardcoding secrets into the code itself. Hardcoded secrets are a security concern because they are visible to anyone who can view the source code. AWS provides Secrets Manager, a service you can access from any Lambda function, and it makes it easy to rotate secrets without redeploying your application.
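A common pattern, sketched below, is to resolve secrets at runtime: prefer an environment variable during local development and fall back to Secrets Manager when deployed. The secret name is illustrative, and the deployed path requires AWS credentials and the boto3 SDK:

```python
import os

def get_secret(name):
    """Fetch a secret: environment first (local dev), then AWS Secrets
    Manager (deployed). Never hardcode the value in source."""
    value = os.environ.get(name)
    if value is not None:
        return value
    # Deployed path: requires AWS credentials and boto3.
    import boto3  # imported lazily so local runs don't need it
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=name)["SecretString"]
```

Because the secret is looked up by name rather than embedded in code, rotating it never requires a redeploy.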

In addition to using a secret manager, you should also ensure that you specify minimum permission policies for the IAM roles associated with each function in your application. These policies describe which resources each function can interact with and which actions they are allowed to take on those resources. Properly configured minimum permission policies make it harder for an attacker to leverage one function to gain access to other resources or privileged permissions within your account. You can learn more about IAM best practices in the AWS Security Best Practices whitepaper.

Both of these best practices help reduce the blast radius of a compromised function by limiting what it is capable of doing within your AWS account. This means that even if an attacker does manage to get access to one of your Lambda functions, the damage they’re able to cause will be limited.

Limiting Permissive IAM Policies

When using a serverless stack, most of the permissions for AWS resources need to be set in an IAM role. In general, these roles should be given the least amount of permission needed to function properly. However, this can be difficult to accomplish with serverless functions because the code is not necessarily known at the time the role is created. Therefore, it is common practice to give a serverless function’s role full access to AWS resources.

This is a very dangerous practice and should be avoided! When you grant a role access to all AWS resources, you are inviting disaster. You must assume that in the future someone will accidentally deploy malicious code on your serverless stack. This can happen in many ways:

  • Developers could accidentally deploy experimental code they wrote across your AWS accounts.
  • An intern could copy and paste code from a StackOverflow example that has malicious intent (this happened).
  • A disgruntled engineer could deploy code that intentionally deletes your production database.
  • A compromised CI/CD pipeline could upload malicious code without anyone noticing.
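Instead, scope each role to the exact actions and resources the function needs. For example, a least-privilege policy for a function that only reads and writes one DynamoDB table might look like this (the region, account ID, and table name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/users"
    }
  ]
}
```

Even if malicious code does land in this function, it can touch nothing beyond that single table.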

Restricting Execution Times

It’s not safe to assume that your functions will always finish on time once deployed. An invocation might take longer than expected, you might see a sudden spike in traffic, or something else could cause an unforeseen delay. These delays can lead to timeout errors and failed requests, which are far from ideal. To protect yourself, configure the maximum amount of time each function is allowed to run; if execution exceeds that limit, Lambda terminates the invocation automatically. This gives you at least one safeguard against invocations running far longer than intended.
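Inside a function, you can also cooperate with the timeout rather than being killed mid-work. On Lambda, `context.get_remaining_time_in_millis()` reports how much of the configured timeout is left; the sketch below uses it to stop early with headroom to return cleanly (the `FakeContext` class is only a stand-in for local runs):

```python
class FakeContext:
    """Local stand-in for Lambda's context object."""
    def __init__(self, remaining_ms):
        self._remaining_ms = remaining_ms

    def get_remaining_time_in_millis(self):
        return self._remaining_ms

def handler(event, context):
    items = event.get("items", [])
    processed = []
    for item in items:
        # Stop early, leaving ~1s of headroom to return cleanly,
        # instead of being terminated mid-write by the timeout.
        if context.get_remaining_time_in_millis() < 1000:
            break
        processed.append(item)
    return {"processed": len(processed), "total": len(items)}
```

Returning a partial result (and, say, re-queuing the remainder) is usually far better than a hard timeout that leaves work half-done.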

Consistent Conventions

Consistency is the key to building a product that is easy to use, understand, and maintain. Proper conventions around naming, configuration, and directory structure will help you build a serverless architecture that scales with your codebase. Here are some of the conventions you should consider:

Naming Conventions

One of the first things to consider when creating a new service is how you are going to name everything. In addition to naming particular service functions, components, events, etc., make sure to have a process for naming environment variables. My recommendation is to use ALL_CAPS, with underscores between words, for environment variables. This is consistent with the other services and frameworks you use, and it makes it easy to search your codebase for instances of a variable because it won’t be confused with any other word in your application code.

Directory Structure

Serverless apps are highly scalable, so it makes sense that they can grow quickly as well. To keep your codebase maintainable as it grows, I recommend having at least one more level of directories than AWS has defaults for.

Takeaway

Serverless computing is a powerful way to build web applications and APIs with high availability, fault tolerance, and low maintenance costs. It requires an architectural shift, but the payoff is a faster time to market and a new way to focus on your product while leveraging existing computing services. The potential pitfalls of serverless architectures should be addressed carefully to ensure your cloud project will deliver what you need.

Hopefully, this article will help you “think outside the box” and approach a serverless application with the right mindset. There is no silver bullet here, so make sure to combine your learnings with your specific context and projects to find what works best for you and your team. In the end, it’s not about doing things a certain way, but rather delivering business value faster, cheaper, and more efficiently—which should be every developer’s main priority anyway.