Serverless Framework - sample CRUD app using Node.js and Express.js

Artur Bartosik
7 min read · Nov 3, 2020


In the first post in this series, we explained what the Serverless Framework is for and what problems it solves. In the second article, we set up the environment and launched hello-world functions in AWS. It’s time for probably the last article in this series, in which we will analyze a more interesting and slightly more complex example of a stack built with the Serverless Framework. Perhaps in the future I will try to supplement this series with an article showing a project migration from AWS to GCP. At the moment, let’s face it, the Serverless Framework is best suited for working with AWS; for Google Cloud and other vendors, the list of resources we can create and manage with it is much shorter. Nevertheless, such an exercise could prove interesting.

Let’s come back to the topic of the article. Our stack will implement simple CRUD functionality: a REST API capable of saving, editing, and deleting books from the database. We will use AWS Lambda, API Gateway, DynamoDB, and an S3 bucket. I chose Node.js as the runtime and the Express.js framework. Thanks to using Express.js as an HTTP router, we can take a different approach to building the application than the classic one with one Lambda per endpoint. More on that later.
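
To make the "one Lambda, many endpoints" idea concrete, here is a hypothetical sketch (not the repository's actual code) of the routing that Express.js performs for us inside the single function: dispatch on the HTTP method instead of deploying one Lambda per endpoint.

```javascript
// Hypothetical sketch of method-based routing inside a single Lambda.
// In the real project Express.js does this dispatching; the handler names
// and messages below are illustrative only.
const handlers = {
  GET: (id) => (id ? `read book ${id}` : 'list books'),
  POST: () => 'create book',
  PUT: (id) => `update book ${id}`,
  DELETE: (id) => `delete book ${id}`,
};

function route(method, id) {
  const handler = handlers[method];
  if (!handler) throw new Error(`Unsupported method: ${method}`);
  return handler(id);
}

console.log(route('GET'));      // list books
console.log(route('PUT', '1')); // update book 1
```

One function therefore serves every CRUD route, which is exactly what the `/books` and `/books/{params}` events later in the configuration wire up.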

Launching and deploying Serverless Framework project

At the outset, I would like to emphasize that to run the project you will need to install the Serverless Framework and configure credentials for the AWS CLI. For a description of this process, see the previous article in the series. You can download the project code that I prepared from my GitHub repository.

After downloading the sources, all we need to do is get the Node.js dependencies and then run the deploy using the Serverless Framework.

npm install
sls deploy

A moment of patience is needed for the deployment to complete. The package is built first, and then the CloudFormation template, based on the serverless.yml configuration file, is created and executed. When this process is over, our stack is up and running. You can preview what resources have been created in the CloudFormation panel.

As we can see, the CloudFormation template created the required resources for us: a DynamoDB table, an S3 bucket, an API Gateway, and most importantly a Lambda with our function code.

You can now test the application by calling the publicly exposed endpoints via API Gateway. AWS has created a temporary domain for this purpose. You can find it in the API Gateway tab or in the sls deploy command output.

As I mentioned, the function code is a simple Express.js-based application that provides an API for managing books. The available functionality is listed below:

curl -X GET 'https://<domain>/books'
curl -X GET 'https://<domain>/books/<id>'
# Returns all books saved in DynamoDB or a book with the given id

curl -X POST 'https://<domain>/books' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "bookId": "1",
    "bookName": "The Hobbit",
    "author": "Tolkien"
  }'
# Saves the book to DynamoDB

curl -X PUT 'https://<domain>/books/<id>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "bookName": "The Lord of the Rings",
    "author": "Tolkien"
  }'
# Updates the book with the given id

curl -X DELETE 'https://<domain>/books/<id>'
# Deletes the book from DynamoDB

Tips for working with Serverless Framework

Deployment may take up to several minutes (depending on the complexity of the project). However, we do not have to execute a full deployment every time we change the code; we only need to push a new package with the code to Lambda. To do this, just run the command below:

sls deploy function -f books
# books is the function name

If we only change the Lambda configuration, this command will also work. However, if we change the configuration of other components in the serverless.yml file, it will be necessary to perform a full deployment so that the CloudFormation stack also updates our resources in AWS.

There is one more interesting feature that speeds up work with the Serverless Framework: the ability to run (emulate) our Lambda functions on a local computer, thanks to the serverless-offline plugin. Just execute the command below and our functions will run on a local Lambda emulator and be available on localhost thanks to an API Gateway emulator. Your function will also have access to other cloud resources, such as DynamoDB, located in AWS. Remember, however, that when you run the function locally, it uses the IAM role attached to the AWS CLI, not the one created through CloudFormation.

sls offline

Once again, I remind you to clean up your resources in the cloud at the end of your work or experiments. You don’t have to do it manually, however. Just run the command below and our stack will be rolled back and removed. Remember that the Lambda created here is publicly available on the internet; you would not want someone to mount a Denial of Wallet attack on your infrastructure.

sls remove

The structure of the serverless.yml file

Now let’s take a closer look, step by step, at the main file in which we configure our Serverless Framework project. In my example project, I included only a part of the possible configuration. The Serverless Framework provides a much larger number of components to configure (we’re talking about AWS here; for other vendors, this list is much shorter).

service: serverless-books

plugins:
  - serverless-offline

custom:
  tableName: books
  tableKey: bookId
  useChildProcesses: true

In the first field, we name the service. This name will be inherited, for example, by the Lambda function, the IAM role, and the S3 deployment bucket. Below it we have the list of plugins; here it is only the previously mentioned serverless-offline. You can find other interesting plugins in the official plugin repository.
Under the custom field, just as an example, I showed that we can define constant parameters that we can refer to elsewhere in our configuration files.

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  tracing:
    lambda: true
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: 'arn:aws:dynamodb:eu-west-1:*:*'
  environment:
    TABLE_NAME: ${self:custom.tableName}
    BUCKET_NAME: 'serverless-books-dev-bucket-name'

In this section, we configure the most important information: the cloud provider, the runtime, and the region. Optionally, we can add tags for resources and enable the AWS X-Ray service, which lets us trace and debug our application.
We also configure the IAM role for the Lambda function here. We have the option of creating a new role, as in this case, or attaching an existing one. In the environment section, we can define values that are passed directly to the code.
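
Values from the environment section arrive in the function code as ordinary environment variables. The snippet below is a hypothetical illustration; it sets the variables manually only to simulate what Lambda does for us at runtime.

```javascript
// Hypothetical illustration: Lambda exposes the `environment` values from
// serverless.yml on process.env. We set them here by hand to simulate
// what AWS does at runtime.
process.env.TABLE_NAME = 'books';
process.env.BUCKET_NAME = 'serverless-books-dev-bucket-name';

// The function code can then read its configuration like this:
const config = {
  tableName: process.env.TABLE_NAME,
  bucketName: process.env.BUCKET_NAME,
};

console.log(config.tableName); // books
```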

A little extra at this point. As you can see, I defined a constant here called BUCKET_NAME. See the resources/s3.yml file for the definition of this bucket. It is the bucket used by the application. When you add a book to DynamoDB and then save the file <book_id>.jpg to this bucket, querying the books/<book_id> endpoint will return JSON with an additional url field. The application inserts a presigned URL generated for the given image in S3 into this field. Remember, however, that the name of your bucket will be different. You have to replace it here!
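
The response-building step can be sketched as below. This is a hypothetical illustration, not the repository's code: the real presigned URL comes from the AWS SDK (s3.getSignedUrl), so a stub link stands in here to show the merging logic in isolation.

```javascript
// Hypothetical sketch: if a cover image for the book exists in S3, the app
// adds a `url` field with a presigned link to the JSON response.
// `presignedUrl` would come from s3.getSignedUrl in the real application.
function buildBookResponse(book, presignedUrl) {
  return presignedUrl ? { ...book, url: presignedUrl } : { ...book };
}

const book = { bookId: '1', bookName: 'The Hobbit', author: 'Tolkien' };

const withImage = buildBookResponse(book, 'https://<domain>/1.jpg?X-Amz-Signature=stub');
console.log('url' in withImage);               // true
console.log('url' in buildBookResponse(book)); // false
```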

functions:
  books:
    handler: functions/books.handler
    memorySize: 512
    timeout: 10
    events:
      - httpApi:
          method: "*"
          path: /books
      - httpApi:
          method: "*"
          path: /books/{params}

resources:
  - ${file(resources/dynamoDb.yml)}
  - ${file(resources/s3.yml)}

Under the functions section, we have the configuration of our Lambda. This is only a basic configuration; you can find more configuration parameters in the documentation.

Under the events field, we define what triggers our function. In our case, Lambda is exposed via API Gateway using HTTP API. HTTP API is by design a simpler, faster, and cheaper Gateway option than REST API. If you want to verify this yourself, I left a commented-out REST API configuration in the code.
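
For comparison, the classic REST API variant replaces the httpApi event with the http event. The sketch below follows the shape documented by the Serverless Framework; it is illustrative, not copied from the repository.

```yaml
# REST API (API Gateway v1) event instead of httpApi -- illustrative sketch
events:
  - http:
      method: any
      path: /books
  - http:
      method: any
      path: /books/{params}
```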

At this point, I need to describe the approach I used in the application. As you have noticed, our application consists of one function. If you’ve dealt with serverless before, you may have heard that a single function should perform one functionality. In our case, however, we have one function that carries out all CRUD operations. This is due to the use of Express.js, which routes HTTP requests inside the function: based on the HTTP method and path, it determines which handler to execute. I decided to take this approach to show you that you shouldn’t always stick to the 1 Lambda = 1 functionality rule. For simple operations, in my opinion, it is fine to place several functionalities within a single Lambda (e.g. those operating on the same domain object). This has its obvious downsides: we scale only one Lambda, and only one Lambda can be configured. For example, in the 1 Lambda = 1 operation approach, knowing that the system will have more reads than writes, we could allocate more memory to the Lambda triggered by a GET request.
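
The 1 Lambda = 1 operation alternative could tune each function separately, for example giving the read path more memory. This is a hypothetical sketch, not part of the example project; the function and handler names are invented for illustration.

```yaml
# Hypothetical 1 Lambda = 1 operation layout -- each function tuned separately
functions:
  getBooks:
    handler: functions/getBooks.handler
    memorySize: 1024        # reads dominate, so give them more memory
    events:
      - httpApi:
          method: GET
          path: /books
  createBook:
    handler: functions/createBook.handler
    memorySize: 256
    events:
      - httpApi:
          method: POST
          path: /books
```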

Under the resources field, there are references to separate files that contain the definitions of resources we want to create; in this case, DynamoDB and S3. We could just as well define each Lambda in a separate file.


The Serverless Framework is a really interesting and useful tool, worth looking at if you are interested in working with Lambda and serverless in general. I hope the project will be further developed and improved; it would be nice to see more options for other cloud vendors in the future. This is what I wish you and myself.



Artur Bartosik

DevOps & Serverless Enthusiast. AWS, GCP, and K8S certified. Home page: