Going Serverless In AWS

  • Serverless Overview
  • Computing with Lambda
  • Containers Overview
  • Containers in ECS or EKS
  • Fargate
  • Amazon EventBridge

Serverless Overview

Serverless does not mean there are no computers running our code; computers are still essential. In serverless computing, however, we focus on the code and leave the management of the compute infrastructure behind. It's an extension of cloud computing: just as we are used to managing operating systems and databases without caring about the physical hardware, in serverless computing we offload the operating system to AWS as well and focus only on writing code.

Benefits of going Serverless:

  • Ease of use: There isn't much for us to do besides bring our code; AWS handles almost everything else.
  • Event based: Serverless compute is event driven, meaning it does not come online until an event kicks it off, and it can respond to the event as it happens.
  • Pay as you go: You pay only for the resources you use and the time your code runs. For example, if the code runs for 10 minutes, you pay for only 10 minutes of runtime.
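To make the pay-as-you-go model concrete, here is a rough cost estimate for a Lambda-style workload. The per-request and per-GB-second rates below are illustrative assumptions, not official figures; check the current AWS pricing page before relying on them.

```python
# Illustrative serverless cost estimate. Both rates are ASSUMED values
# for the sake of the example; real AWS pricing varies by region and tier.
PRICE_PER_REQUEST = 0.20 / 1_000_000     # assumed: $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667       # assumed compute rate per GB-second

def lambda_monthly_cost(invocations, duration_s, memory_mb):
    """Estimate monthly cost from invocation count, duration, and memory."""
    gb_seconds = invocations * duration_s * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 1M invocations of a 128 MB function running 200 ms each:
cost = lambda_monthly_cost(1_000_000, 0.2, 128)
```

With these assumed rates, the whole month of compute comes to well under a dollar, which is the point of only paying for actual runtime.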

Computing with Lambda

Lambda: AWS Lambda is a serverless compute service that lets you run code without provisioning or managing the underlying servers. It feels like running code without computers. In simple terms, we write the code we want Lambda to run, and AWS provisions the compute resources required to run it, executes it, and shuts the resources down when they are no longer needed.

How to build a Lambda function: there are a few points to remember when building one: choose a supported runtime (such as Python, Node.js, or Java), write a handler function that Lambda invokes for each event, configure the memory and timeout, attach an IAM execution role that grants the code its permissions, and connect a trigger (such as API Gateway, S3, or EventBridge) that fires the function.
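A minimal Python handler might look like the sketch below. The function and field names are illustrative; the only real requirement is a function that accepts an event dict and a context object, which Lambda calls on every invocation.

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes: `event` carries the trigger's payload
    as a dict, `context` carries runtime metadata (request ID, time left)."""
    name = event.get("name", "world")  # hypothetical field for illustration
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The return shape shown here is the one API Gateway proxy integrations expect; other triggers are free to return any JSON-serialisable value.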

Containers Overview

If we look at a very basic definition, containers hold things. We can bundle our code, applications, packages, and configuration files into a container, and then we can pick that container up and move it anywhere we want. We can run it in dev, we can run it in prod, and every time it will run the same way, because the contents of the container do not change between environments. It's a way to standardise applications.

With containers we do not have to create a full operating system over and over again, duplicating OS pieces that do not matter to our application. We include only the application and its dependencies, such as libraries and configuration files, effectively creating our own micro-environment that consists of just the files the application needs.

There are a few important terms we need to know when working with containers:

Dockerfile: Text document that contains all the commands or instructions that will be used to build an image.

Image: An immutable file that contains the code, libraries, dependencies, and configuration files needed to run an application.

Registry: The image we build is saved to a registry, which stores Docker images for distribution. Registries can be public or private; it's like GitHub for your images.

Container: Up to this point we do not have a container yet; all we have is a template. When we download the image and run it, on an EC2 instance, on our workstation, or on premises in a data centre, it becomes a container. It's easy to call everything a container, but the flow is: first we write a Dockerfile, from that Dockerfile we build an image, we upload the image to a registry, and from the registry we download the image and run it as a container.

I hope this explains the container flow, but the best way to learn is by doing it in AWS, Azure, IBM, or GCP cloud.

How to read a docker file:

Docker builds images automatically by reading the instructions from a Dockerfile — a text file that contains all commands, in order, needed to build a given image. A Dockerfile adheres to a specific format and set of instructions, which you can find in the Dockerfile reference.

A Docker image consists of read-only layers each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer. Consider this Dockerfile:

# syntax=docker/dockerfile:1
FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py

Each instruction creates one layer:

  • FROM creates a layer from the ubuntu:18.04 Docker image.
  • COPY adds files from your Docker client’s current directory.
  • RUN builds your application with make.
  • CMD specifies what command to run within the container.

The image you build from the Dockerfile does not do much on its own; you have to run it as a container on a host to see what it does.

Running Containers in ECS and EKS

Limitations of containers: With containers, the main task is finding hosts to run them on, but if you have thousands of containers you need a correspondingly large fleet of EC2 instances to host them, and managing that fleet yourself becomes the problem. To overcome this, AWS offers ECS (Elastic Container Service).


  • Management of containers at scale: ECS can manage hundreds or thousands of containers. It places them appropriately and keeps them online. As your application grows, ECS automatically scales the underlying infrastructure to cope with the increasing load and places the containers inside it without you having to worry about it.
  • ELB integration: Containers are automatically registered with (and deregistered from) load balancers as they come online and go offline.
  • Role integration: From a security point of view, you can attach individual IAM roles to containers, making security a breeze.
  • Ease of use: Extremely easy to set up, and it scales to handle any workload.

Limitations of ECS: It is a service built by AWS for AWS, which is a limitation if you need cross-platform portability.
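The ECS workflow described above can be sketched with boto3. Every name here (task family, image URI, cluster) is a placeholder, and the actual API calls are left commented out because they require AWS credentials and a real cluster.

```python
# Sketch of defining a container for ECS. All names and the image URI are
# hypothetical; this only builds the request payload locally.
task_definition = {
    "family": "web-app",  # placeholder task-definition family name
    "containerDefinitions": [{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # placeholder
        "memory": 512,                          # hard memory limit in MiB
        "portMappings": [{"containerPort": 80}],
    }],
}

# With credentials and a cluster in place, you would register and run it:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**task_definition)
# ecs.run_task(cluster="my-cluster", taskDefinition="web-app", count=1)
```

ECS then handles the placement, scaling, and load-balancer registration discussed in the bullets above.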


Kubernetes is an open-source container management and orchestration platform. It was originally built by Google but has been open source for a while now, which means you can run Kubernetes almost anywhere: on any cloud provider, and even on your laptop.

  • Open source alternative.
  • Can be used on-premise and in the cloud.
  • There is a service available from AWS known as Elastic Kubernetes Service (EKS).
Difference between ECS and EKS:

When to use ECS: A proprietary AWS container-management solution. Best used when all your needs are within AWS and you are looking for something simple.

When to use EKS: The AWS-managed version of the open-source Kubernetes container-management solution. Best used when you want Kubernetes compatibility or may run workloads outside AWS; more work is needed to configure it and integrate it with AWS.

AWS Fargate:

The Fargate service lets us run containers without managing servers. It's a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).

  • AWS owns and manages infrastructure.
  • Fargate requires the use of ECS or EKS.
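Switching a task to Fargate is mostly a matter of the launch type and networking settings in the `run_task` request. The sketch below builds such a request with placeholder names (cluster, task definition, subnet ID); the boto3 call itself is commented out since it needs real AWS resources.

```python
# Sketch of launching an ECS task on Fargate. Cluster, task, and subnet
# values are placeholders for illustration only.
run_task_params = {
    "cluster": "my-cluster",            # placeholder cluster name
    "taskDefinition": "web-app",        # placeholder task definition
    "launchType": "FARGATE",            # no EC2 instances for you to manage
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],   # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
}

# With credentials configured:
# import boto3
# boto3.client("ecs").run_task(**run_task_params)
```

Note that Fargate tasks must use `awsvpc` networking, which is why the subnet configuration appears here even though an EC2 launch type could omit it.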
EC2 launch type:

1. You are responsible for the underlying operating system.
2. It follows the EC2 pricing model, which lets you use cost-saving options such as Reserved Instances and Spot Instances.
3. Better for long-running applications that need servers to be on at all times.
4. Multiple containers can share the same host.

Fargate launch type:

1. No operating-system access.
2. Pay based on the resources allocated and the time they run.
3. Mainly excels at short-running tasks.
4. Each task runs in its own isolated environment, unlike the EC2 launch type, where multiple containers share the same host.

Fargate vs Lambda

Which one to choose if you are going Serverless

Fargate: Select Fargate when you have more consistent workloads, usually to standardise an application. It allows Docker use across the organisation and gives developers a greater level of control.

Lambda: Great for shorter, unpredictable workloads. Perfect for applications that can be expressed as a single function.

Amazon EventBridge

Amazon EventBridge (formerly known as CloudWatch Events) is a serverless event bus. It allows you to pass events from a source to an endpoint. Essentially, it's the glue that holds your serverless applications together.

Basically, when something happens at the source, EventBridge tells the endpoint that an event has occurred so an action can be taken, based on whichever serverless service you are using. For example, an API call in AWS can alert a Lambda function, or a variety of other endpoints, that something has happened.
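Publishing your own application events onto the bus looks like the sketch below. The source and detail-type names are made up for illustration, and the `put_events` call is commented out because it needs AWS credentials.

```python
import json

# Sketch of a custom EventBridge event. "myapp.orders" and "OrderPlaced"
# are hypothetical names chosen for this example.
event_entry = {
    "Source": "myapp.orders",
    "DetailType": "OrderPlaced",
    "Detail": json.dumps({"orderId": "1234", "amount": 42}),  # payload must be a JSON string
    "EventBusName": "default",
}

# With credentials configured, you would publish it like this:
# import boto3
# boto3.client("events").put_events(Entries=[event_entry])
```

A rule on the bus can then match on `Source` and `DetailType` and route the event to a Lambda function or any other supported target.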

