I had intended to give a presentation on this topic at the ISC² Secure Summit in London in September. For private reasons this didn’t work out, so I thought I’d share my thoughts in this blog post. This is by no means an exhaustive treatment of the topic, but it might be useful to get you started.
All examples below are based on AWS for two reasons: it is the market leader and it’s what I know best. Microsoft (Azure Functions) and Google (Cloud Functions) also provide the building blocks for Serverless, so the recommendations below should apply to those as well.
What is Serverless?
First of all, let’s briefly introduce what Serverless means. The most succinct definition I came across is this one from Simon Wardley on Twitter:

Let’s break this up a bit. In talking about Serverless you will read or hear the following acronyms a lot:
- Function as a Service (FaaS): this is the option (AWS Lambda) to deploy and execute functions without thinking about the infrastructure these run on.
- Backend as a Service (BaaS): these are utility services that functions can use to create complete applications, such as databases (DynamoDB), storage (S3) or authentication options (AWS Cognito).
Functions are stateless and mostly event-driven while backend services allow the developer to focus on the business logic and reuse utility services. Maybe the most relevant part of all that is that it’s a code execution environment only. These functions are short-lived and the provider takes care of scaling as it is required.
If you want more details from a developer’s point of view I recommend Martin Fowler’s article Serverless Architectures.
In summary: Serverless is an architectural style, not a specific technology, which can be used in two flavors:
- Building event driven applications that rely heavily on backend services, for example to provide authentication or storage. This approach fits very well to microservices: each function is a specific microservice with a specific responsibility.
- Functions as “glue code” to connect different applications.
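To make the FaaS flavor concrete, here is a minimal sketch of what such a function looks like in Python: a handler the platform invokes once per event, with no state surviving between invocations. The event shape below is a simplified, hypothetical illustration, not the full AWS S3 notification format.

```python
import json

# A minimal AWS Lambda-style handler: the platform invokes it per event,
# and no state survives between invocations. The event shape is a
# simplified illustration, not the real S3 notification schema.
def handler(event, context):
    # Extract the object key from a simplified S3-style event.
    key = event["Records"][0]["s3"]["object"]["key"]
    # Business logic only: the provider handles servers, scaling and retries.
    return {"statusCode": 200, "body": json.dumps({"processed": key})}
```

Everything around the handler (provisioning, scaling, the server it runs on) is the provider’s problem, which is precisely the appeal.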
A mockup for a serverless application could look something like this:

Why does this matter?
You might think that this is just the next buzzword and hype, not really relevant. I disagree, for the following reasons:
- Developers will love it: they have fewer things to care about (servers and infrastructure are somebody else’s problem). From a development perspective their biggest gains tend to come from application velocity.
- The CFO loves it too: velocity is crucial for digital transformation, so anything that allows for faster implementation is good. More importantly, it’s a true pay-for-what-you-use model, and example calculations show tremendous savings.
- Security folks, however, probably don’t love it: a traditional security approach won’t work anymore. If we insist on the usual segregation of development and security we slow everyone down.
What changes
Standing against developers and the CFO is typically an uphill battle. So we had better adapt and look into Serverless security specifically. It’s still our company’s data we want to protect.
We need to understand that our risks remain largely the same, but mitigations need reimagining.
Traditional security only partly applies
In traditional environments a lot of security concern is about hardening systems, watching for vulnerabilities and patching them. In a serverless environment we don’t even know which systems are used, and the provider may swap them at any moment without us noticing. All of that is no longer our job; it’s done by the Serverless cloud provider. This is in fact an additional advantage of serverless for most organisations.
A Serverless application doesn’t have a clear perimeter. Therefore typical network security approaches like firewalls and IPS systems are largely not relevant. Also most WAF systems don’t work well with Serverless.
The way Serverless applications are glued together makes static application security scanning difficult at best.
Because functions are short-lived and not bound to a specific server, traditional security monitoring of servers won’t work anymore.
Serverless creates new issues
On top of the fact that many of the things we’re used to don’t apply this architectural style creates its own issues.
From the picture above you can see that serverless applications increase system complexity rather than decrease it. The potentially huge number of functions leads to an increase in attack surface.
Applications turn from a single monolith into a complex beast of communicating services. This means there’s a lot more data communication involved: what used to be inter-process communication within the monolith goes outside on the network.
Autoscaling and the ephemeral nature of functions makes security visibility more complex (which deviations from normal should we be worried about?).
Finally, DDoS attacks can take on a totally new form: while the platform will to a large extent autoscale to absorb the increase in demand, this also leads to a steep increase in cost. DDoS becomes a “denial of wallet” attack.
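A rough back-of-the-envelope sketch shows why the wallet is the target. The per-request and per-GB-second prices below are placeholders in the ballpark of published FaaS pricing, not authoritative figures; check your provider’s current price list.

```python
# Illustrative "denial of wallet" arithmetic. Prices are assumed
# placeholders, not current AWS Lambda list prices.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed

def flood_cost(requests, duration_s=1.0, memory_gb=0.5):
    """Rough cost of `requests` invocations at the given duration and memory."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 100 million hostile requests, each running for one second:
cost = flood_cost(100_000_000)
```

Under these assumptions a sustained flood racks up hundreds of dollars per hundred million requests; the application stays up, but the bill does the suffering. Billing alarms and per-function concurrency limits are the usual countermeasures.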
What should we do to adapt?
Not all is bad and dangerous here. Adapting to this new environment can in fact improve our organisation’s overall security and lead the way to a zero trust model.
I’m convinced we will only succeed if we move left and up:

I like to think about security in three dimensions: people, process and technology. All of these are relevant for securing serverless environments:
People
Because serverless is also about velocity, we need to integrate security into development and deployment. This requires a few changes, the most important one in my opinion being to overcome requestor/approver relationships. Too many security organisations don’t work with the development teams but require them to request approvals, often late in the game.
We need to bridge this gap between Security teams and Development teams by providing security solutions during development and, if possible, working within these teams: “show, don’t tell”. This also requires learning enough about the programming languages and frameworks developers use to be able to understand their approach and concerns.
Process
Know the environment
You need to know your serverless provider and to what extent you can trust them. Clarify upfront which parts of the shared responsibility model are yours and make sure you address those properly.
Since serverless is utility-based a crucial step is to know which of these utility services are being used (3rd party services and platform services). You still need to manage their vulnerabilities. The same applies to all libraries developers use: know what these are and manage their vulnerabilities.
Know the CI/CD pipeline and how to secure it
If there is no CI/CD pipeline, establishing one is the first thing to do. You want to know what gets into production and be able to perform security checks during check-in and testing.
Insist on as much automation as possible using infrastructure as code: the beauty of an automated process, besides velocity, is that it provides an improved asset overview, faster rollback if needed and a full audit trail.
You also want to make sure you identify any unused functions during this process. These are often left around for a long time and increase the attack surface.
Technology
Know Application Security
Application security is even more important than in the past. If you haven’t yet, I strongly recommend you familiarize yourself at least with the OWASP Top 10. All of the OWASP Top 10 still apply to us, including SQL, NoSQL and other forms of injection attacks.
Serverless leads to new data injection vectors: we need to check not only user input but all event data input. Put simply: each function needs to treat any input as hostile and check it accordingly.
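A sketch of what “treat any input as hostile” can look like in practice: validate types, lengths and formats, and allow-list the fields you pass on. The field names (`user_id`, `comment`) and limits here are hypothetical examples, not a standard schema.

```python
import re

# Hypothetical expected fields and limits; adapt to your own events.
ALLOWED_USER_ID = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")
MAX_COMMENT_LEN = 2048

def validate_event(event):
    """Return a sanitized copy of the event or raise ValueError."""
    if not isinstance(event, dict):
        raise ValueError("event must be a JSON object")
    user_id = event.get("user_id", "")
    if not isinstance(user_id, str) or not ALLOWED_USER_ID.match(user_id):
        raise ValueError("invalid user_id")
    comment = event.get("comment", "")
    if not isinstance(comment, str) or len(comment) > MAX_COMMENT_LEN:
        raise ValueError("invalid comment")
    # Allow-list the fields we pass on; drop everything else.
    return {"user_id": user_id, "comment": comment}
```

The same check runs no matter whether the event came from an end user, a queue, a storage trigger or another function.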
Know the services configuration
We need to work with the developers to understand what a secure configuration of all services looks like. Once that is clear, it can be integrated into the deployment pipeline to ensure the configuration stays that way.
Almost all functions and services will need access to data storage, e.g. S3 buckets. You want to tightly control this access with a strict principle of least privilege.
Setting up these controls once isn’t sufficient. We need to ensure they remain as agreed. AWS Config rules can be used to check for any unintended changes.
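The idea behind such a check can be sketched as a pure function in the spirit of a Config rule: compare the observed configuration against the agreed baseline and report drift. The dict shape below is a simplification for illustration, not the actual AWS Config schema.

```python
# Agreed-upon secure baseline for a bucket (illustrative settings).
EXPECTED = {
    "block_public_access": True,
    "default_encryption": "aws:kms",
    "versioning": True,
}

def find_drift(actual_config):
    """Return the settings that deviate from the agreed baseline."""
    return {
        key: actual_config.get(key)
        for key, expected in EXPECTED.items()
        if actual_config.get(key) != expected
    }
```

An empty result means the configuration still matches what was agreed; anything else should trigger an alert or an automated rollback.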
Know where your credentials are
Leaked AWS credentials remain a major issue and can potentially impact any organisation that uses AWS. Make sure your AWS credentials don’t leak (you know not to store them in GitHub repos, don’t you?) and revoke them immediately if that happens.
All these functions need credentials and the data needs encryption. Hence it’s important to centrally store and manage application secrets.
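A common pattern, sketched here under assumptions: fetch the secret from a central store (in a real function this would be, say, AWS Secrets Manager via boto3), cache it in memory for the container’s lifetime, and never hard-code it. The fetcher is injected so the sketch stays runnable without cloud credentials.

```python
import time

# In-memory cache shared across invocations of the same container.
_cache = {}

def get_secret(name, fetcher, ttl_s=300):
    """Return the secret value, refreshing from the store after ttl_s seconds.

    `fetcher` stands in for a real secrets-store call; injecting it keeps
    this sketch runnable without AWS credentials.
    """
    entry = _cache.get(name)
    now = time.time()
    if entry is None or now - entry[1] > ttl_s:
        entry = (fetcher(name), now)
        _cache[name] = entry
    return entry[0]
```

Caching with a TTL keeps per-invocation latency and cost down while still picking up rotated secrets within a bounded window.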
Know how to authenticate users
Broken authentication is one of the biggest risks in any web application. As with encryption, developers should avoid rolling their own solution and rather use standard services such as AWS Cognito or Auth0. While using these services can make authentication and authorization for your users easier, keep in mind to watch these services for any vulnerabilities that are reported.
Know your IAM roles
We are still responsible for securing our users’ data both at rest as well as in-transit. API Gateway is always publicly accessible, so we need to take the necessary precautions to secure access to our internal APIs, preferably with IAM roles.
Functions are often given too much permission because they’re not given individual IAM policies tailored to their needs; a compromised function can therefore do more harm than it otherwise might.
When using these roles, follow the principle of least privilege: make sure that each function has its own custom role and that it runs with the least amount of privileges required to perform its task properly. Unfortunately, some serverless frameworks, in an attempt to make things easy, assume a single role for all functions.
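A per-function policy can be generated rather than hand-written, which makes the tailoring cheap. The ARN and action list below are illustrative assumptions; each function should get only the actions and resources it actually touches.

```python
# Build an IAM policy document scoped to one function's actual needs.
# The actions and resource ARN are illustrative examples.
def least_privilege_policy(actions, resource_arn):
    """Return an IAM policy document allowing only `actions` on one resource."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),
            "Resource": resource_arn,
        }],
    }

# A hypothetical thumbnailer function only ever reads its input bucket:
policy = least_privilege_policy(
    ["s3:GetObject"], "arn:aws:s3:::uploads-bucket/*"
)
```

Generating these documents in the deployment pipeline keeps one-role-for-everything from creeping back in.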
Logging/Monitoring
Since serverless doesn’t rely on known servers whose logs you can collect and analyze, it’s of utmost importance to ensure that all functions log with enough context for you to observe security issues.
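One way to get that context is structured logging: emit JSON with the function name and request id on every line so events from thousands of short-lived containers can be correlated centrally. In a real handler these values would come from the Lambda context object; here they are attached explicitly as a sketch.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object with correlation context."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            # In a real function these would come from the Lambda context.
            "function": getattr(record, "function", None),
            "request_id": getattr(record, "request_id", None),
        })
```

With every function logging in the same structured shape, a central log pipeline can alert on deviations regardless of which ephemeral container produced them.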