Why I think that small companies/startups should use serverless architecture.

Mohamed wael Ben ismail
8 min read · Sep 25, 2021


Serverless vs server-based architectures.

This article is divided into four parts.

• We will first respond to the big question “What is serverless architecture?”
• Then, we will tackle the pros and cons of a serverless architecture compared to a server-based one.
• Next, a conclusion on why I think that small companies/startups should use a serverless architecture.
• Finally, a quick tutorial on how to implement a serverless app easily and efficiently.

What is serverless architecture?

In short, serverless architecture is an architecture in which you don’t manage any servers: the cloud vendor is responsible for maintaining, scaling, and securing them.
It follows a ‘pay-as-you-go’ plan, which means you are only charged for what you use.
Your only responsibility is to focus on your code.

Why serverless infrastructure? (pros)

Starting from the definition above, we can easily see the following:

I) No server management is necessary

Although ‘serverless’ computing does take place on servers, developers never have to deal with the servers. They are managed by the vendor. This can reduce the investment necessary in DevOps — there’s no need to hire server and hardware specialists, which lowers expenses, and it also frees up developers to create and expand their applications without being constrained by server capacity.

II) No server space to worry about.

With a ‘pay-as-you-go’ plan, your code only runs when backend functions are needed by the serverless application, and the code automatically scales up as needed. Provisioning is dynamic, precise, and real-time. Some services are so exact that they break their charges down into 100-millisecond increments. In contrast, in a traditional server-based architecture, developers have to project in advance how much server capacity they will need and then purchase that capacity, whether they end up using it or not. Here is the pricing provided by AWS for their serverless service, AWS Lambda.

Are you aware of that? ($0.20 per 1M requests and $0.00001667 for every GB-second.)

That means that if you allocated 512 MB of memory to your function, executed it 3 million times in one month, and it ran for 1 second each time, your charges would be calculated as follows:

Monthly compute charges

The monthly compute price is $0.00001667 per GB-s and the free tier provides 400,000 GB-s.

Total compute (seconds) = 3M * (1s) = 3,000,000 seconds

Total compute (GB-s) = 3,000,000 s * (512 MB / 1,024) = 1,500,000 GB-s

Total compute - Free tier compute = Monthly billable compute (GB-s)

1,500,000 GB-s - 400,000 free-tier GB-s = 1,100,000 GB-s

Monthly compute charges = 1,100,000 GB-s * $0.00001667 = $18.34
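The arithmetic above can be reproduced with a short script. The prices are the AWS Lambda figures quoted in this article; check the current pricing page before relying on them:

```python
# Reproduce the AWS Lambda monthly-cost estimate from the article.
PRICE_PER_GB_SECOND = 0.00001667   # USD, as quoted above
FREE_TIER_GB_SECONDS = 400_000     # monthly free tier, as quoted above

memory_gb = 512 / 1024             # 512 MB expressed in GB
invocations = 3_000_000            # 3M requests per month
duration_s = 1                     # each invocation runs for 1 second

total_gb_seconds = invocations * duration_s * memory_gb       # 1,500,000
billable = max(total_gb_seconds - FREE_TIER_GB_SECONDS, 0)    # 1,100,000
monthly_compute = billable * PRICE_PER_GB_SECOND

print(round(monthly_compute, 2))  # → 18.34
```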

That’s a highly cost-effective solution.

III) Serverless architectures are inherently scalable

Applications built with a serverless infrastructure will scale automatically as the user base grows or usage increases. If a function needs to be run in multiple instances, the vendor’s servers will start up, run, and end them as they are needed, often using containers. (The function will start up more quickly if it has been run recently.) As a result, a serverless application will be able to handle an unusually high number of requests just as well as it can process a single request from a single user. A traditionally structured application with a fixed amount of server space can be overwhelmed by a sudden increase in usage.

IV) Easy deployments and updates

Using a serverless infrastructure, there is no need to upload code to servers or do any backend configuration to release a working version of an application. This makes it possible to quickly update, patch, fix, or add new features to an application.
A quick tutorial down below will show you how we can easily deploy an application to serverless infrastructure.

V) Decreased latency

Your code can run closer to the end user. Because the application is usually served through a CDN edge server (computers placed at important junctures between major internet providers, in locations across the globe, to deliver content as quickly as possible), its code can be run from anywhere, as opposed to server-based applications, where the code is hosted on an origin server. This, of course, reduces latency, because requests from the user no longer have to travel to an origin server.

What are the disadvantages of serverless computing?

Immediately after reading its advantages, you may start thinking that you should use it and that it’s probably the best answer to the challenges you are currently facing. That may well be true! However, serverless computing is not a magic bullet for all web applications, and here’s why.

I) Runtime

Serverless functions have a limited runtime. Every provider has its own limit on how long a given function can run. On AWS Lambda, for instance, a function can run for at most 15 minutes. This is because functions, by their design, are short-lived processes that shouldn’t take up a lot of RAM.

II) Testing and debugging become more challenging

It is difficult to replicate the serverless environment to see how code will perform once deployed. Debugging is more complicated, because developers do not have visibility into backend processes and because the application is broken up into separate, smaller functions.

III) Security and data exposure

When vendors run the entire backend, it may not be possible to fully vet their security, which can especially be a problem for applications that handle personal or sensitive data.

Because companies are not assigned their discrete physical servers, serverless providers will often be running code from several of their customers on a single server at any given time. This issue of sharing machinery with other parties is known as ‘multitenancy’ — think of several companies trying to lease and work in a single office at the same time. Multitenancy can affect application performance and, if the multi-tenant servers are not configured properly, could result in data exposure.

IV) “Cold start” issue

Because it’s not constantly running, serverless code may need to ‘boot up’ when it is used. This startup time may degrade performance. However, if a piece of code is used regularly, the serverless provider will keep it ready to be activated — a request for this ready-to-go code is called a ‘warm start.’ A request for code that hasn’t been used in a while is called a ‘cold start.’
Although there are ways to reduce cold starts (like allocating more memory, which also grants more CPU, or keeping functions warm), it remains a problem to be aware of and must be planned for.
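One common mitigation pattern on the code side is to do expensive initialization at module level, so that it runs once per cold start and warm invocations reuse it. A minimal sketch (the names and the simulated setup here are illustrative, not a specific AWS API):

```python
# A sketch of the warm-start pattern for an AWS Lambda handler in Python.
# Module-level code runs only on a cold start; warm invocations reuse the
# already-initialized module, so only the handler body runs per request.
import time

# Simulate expensive one-time setup (e.g. opening a database connection).
INIT_TIMESTAMP = time.time()
EXPENSIVE_RESOURCE = {"connected": True}  # placeholder for a real client

def handler(event, context):
    # On warm starts, EXPENSIVE_RESOURCE already exists, so this request
    # pays no initialization cost.
    return {
        "statusCode": 200,
        "body": "resource ready: %s" % EXPENSIVE_RESOURCE["connected"],
    }
```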

Why do I think that small companies/startups should use a serverless architecture?

Now, let’s respond to the question we have all been waiting for. I do think that small companies/startups should use serverless architecture, for the following reasons:

  1. Cost-efficiency: Since the main asset of a startup is its product (the application), it should focus only on coding and hire people specialized in that field, rather than thinking about servers, their cost, and how to manage them.
    Always remember: if you allocate 512 MB of memory to your function, execute it 3 million times in one month, and it runs for 1 second each time, your charge will be $18.34/month.
  2. Minimizing the use of technologies: There’s no need to set up Ansible, Kubernetes, Grafana, and all the technologies in the world. With small steps and a good tutorial (as shown below), you can set up your serverless infrastructure.
  3. No need to worry about the runtime limit: Since the majority of startup applications are CRUD apps (Create, Read, Update, Delete) whose functions run for milliseconds or, at most, seconds, there’s no need to think about the runtime limit. Concern about runtime is justified only if you are working on something like machine learning models that require a lot of time and computing; otherwise, you can go serverless.

A quick tutorial on how to implement a serverless app easily and efficiently

So, if you have a Node.js, Python, Ruby, C#, or Go app, you can quickly move it onto any cloud’s serverless infrastructure using a framework called “Serverless”.

The power of the serverless framework

Since the code deployed on Lambda only runs based on events from the AWS ecosystem (e.g. an image is saved in S3 → run image-resizing code) or is triggered from outside (e.g. through the AWS API Gateway), in combination with other resources (such as DynamoDB, S3, API Gateway, Cognito, …), it becomes very hard to orchestrate all those resources by hand. It gets even harder and more time-intensive if you have dozens of Lambda functions that you want to deploy into multiple regions.
The serverless.com framework solves those pain points. It makes it easy to orchestrate your serverless functions and the used cloud resources.

The Serverless Framework is a Node.js-based CLI tool that helps you orchestrate everything. With it, you can easily deploy your Lambda functions into multiple regions.

How does it work (Python app on AWS Lambda)?

I will use the quick-start tutorial, but all the details for working with other languages and cloud providers can be found here: https://www.serverless.com/framework/docs/getting-started

1. Start by installing the Serverless Framework

npm install -g serverless

2. Set up an endpoint

In your project’s root directory, add a file called “serverless.yml”. This is the file the Serverless Framework understands; from it, we will create our endpoints and communicate with other AWS resources.
In this file add the following:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: post
This small portion of configuration, when deployed, will expose an HTTP POST endpoint (at a URL provided by the Serverless Framework) that triggers our hello function.
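Note that the serverless.yml above references handler.hello, so the service also needs a handler.py file next to it. A minimal sketch (the file and function names follow the configuration above; the response shape assumes API Gateway’s Lambda proxy integration, which expects a statusCode and a string body):

```python
# handler.py — the function referenced by "handler: handler.hello"
# in serverless.yml.
import json

def hello(event, context):
    # API Gateway passes the HTTP request details in `event`; we return
    # a Lambda proxy-integration response: a status code plus a JSON
    # string body.
    body = {"message": "Hello from a serverless function!"}
    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }
```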

3. Let’s put this into practice.

After adding the code above to the serverless.yml file, we can deploy the service by typing:

serverless deploy -v

You can then test your function by typing in your terminal

$ curl -X POST https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/hello

Of course, you have to replace the URL in the curl command above with your own endpoint URL, which you can find in the sls deploy output.
As you can see, the URL contains ‘/dev’. This is because, by default, the Serverless Framework deploys to a ‘dev’ stage. You can change the configuration if you want to deploy your code to a ‘prod’ stage.
All the details are in the documentation. I highly recommend you to take a look at it.

Conclusion

In this article, we have seen why, from my point of view, small startups should consider serverless infrastructure, by digging into the pros and cons of this kind of infrastructure.
I highly recommend taking a look at the serverless framework.

I will be waiting for your comments to discuss this point of view — and please send me ideas for articles to write. 🚀🚀🚀
