Serverless computing is a cloud execution model where the cloud provider dynamically manages the allocation of machine resources. The term "serverless" does not mean that servers are absent — it means that developers no longer need to think about them. You write code, deploy it, and the provider handles provisioning, scaling, patching, and capacity planning.
Traditionally, running a web application or backend API required you to:

- Provision servers and choose instance sizes
- Patch and maintain the operating system
- Configure scaling to match demand
- Plan capacity ahead of traffic peaks
Serverless eliminates these concerns. You deploy a function or container, define a trigger, and the cloud provider does the rest.
| Characteristic | Description |
|---|---|
| No server management | You never provision, patch, or maintain any servers |
| Event-driven execution | Functions run in response to events — HTTP requests, queue messages, file uploads, schedules |
| Automatic scaling | The platform scales from zero to thousands of concurrent executions with no configuration |
| Pay-per-use billing | You are charged only for the compute time your code actually consumes, measured in milliseconds |
| Ephemeral compute | Each invocation is stateless — the execution environment may be reused but is never guaranteed |
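These characteristics meet in the function handler: a single entry point the platform invokes once per event. A minimal sketch, assuming the Python runtime and Lambda's `(event, context)` handler signature; the event payload and field names here are illustrative, not a real trigger format:

```python
import json

def lambda_handler(event, context):
    """Entry point invoked by the platform for each event.

    `event` carries the trigger payload; `context` carries runtime
    metadata. Nothing is guaranteed to survive between invocations,
    so all state must come from the event or an external store.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is a plain function, you can exercise it locally before deploying: `lambda_handler({"name": "Ada"}, None)`.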
Understanding where serverless fits requires comparing it with other hosting models.
Traditional virtual machines (IaaS):

+-------------------------------+
| Your Application |
|-------------------------------|
| Runtime (Node, Python) |
|-------------------------------|
| Operating System |
|-------------------------------|
| Virtual Machine / EC2 |
|-------------------------------|
| Physical Hardware |
+-------------------------------+
You manage: Everything above the hardware
Provider manages: Physical hardware
Containers:

+-------------------------------+
| Your Application |
|-------------------------------|
| Container Image (Dockerfile) |
|-------------------------------|
| Container Orchestrator |
|-------------------------------|
| Physical Hardware |
+-------------------------------+
You manage: Application + Container definition
Provider manages: Orchestration, OS, hardware
Serverless functions (FaaS):

+-------------------------------+
| Your Function Code |
|-------------------------------|
| Managed Runtime + Platform |
|-------------------------------|
| Physical Hardware |
+-------------------------------+
You manage: Function code only
Provider manages: Everything else
As you move down this spectrum, you give up control but gain operational simplicity. Serverless sits at the far end — maximum abstraction, minimum operations.
Serverless is not a single technology; it is a design philosophy. AWS offers a range of serverless services:
| Category | Serverless Service | Purpose |
|---|---|---|
| Compute | AWS Lambda | Run code without provisioning servers |
| API | Amazon API Gateway | Create, publish, and manage APIs |
| Storage | Amazon S3 | Object storage with event notifications |
| Database | Amazon DynamoDB | Fully managed NoSQL database |
| Messaging | Amazon SQS, SNS | Message queues and pub/sub notifications |
| Orchestration | AWS Step Functions | Coordinate multi-step workflows |
| Event Bus | Amazon EventBridge | Serverless event routing |
| Streaming | Amazon Kinesis Data Firehose | Deliver streaming data to storage and analytics targets |
When you combine these services, you build entire applications without managing a single server.
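As a sketch of that composition, here is a hypothetical Lambda handler wired to S3 event notifications. The `Records` / `s3.bucket.name` / `s3.object.key` structure follows S3's documented notification format (keys arrive URL-encoded); the function name and the downstream actions in the comments are assumptions:

```python
import urllib.parse

def handle_s3_upload(event, context):
    """Sketch of a Lambda triggered by an S3 ObjectCreated event.

    The S3 notification delivers one or more Records, each naming
    the bucket and the URL-encoded object key that fired the event.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # A real function would fetch the object (e.g. with boto3) and
        # write results to DynamoDB, or publish a message to SNS/SQS.
        processed.append((bucket, key))
    return processed
```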
Serverless pricing follows a consumption-based model. For AWS Lambda specifically:
Monthly cost = (Number of requests x Price per request)
+ (GB-seconds consumed x Price per GB-second)
Example calculation:
| Metric | Value |
|---|---|
| Requests per month | 10,000,000 |
| Average duration | 200 ms |
| Memory allocated | 256 MB |
| Price per request | $0.20 per million |
| Price per GB-second | $0.0000166667 |
Request charges: 10,000,000 / 1,000,000 x $0.20 = $2.00
Compute charges: 10,000,000 x 0.2s x 0.25 GB x $0.0000166667 = $8.33
Total: $10.33/month
Compare this with an always-on t3.medium EC2 instance at ~$30/month that sits idle most of the time.
Free tier: AWS Lambda includes 1 million free requests and 400,000 GB-seconds per month — enough for many development and low-traffic production workloads.
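The arithmetic above can be reproduced with a small helper. The prices mirror the example table; the function name is ours:

```python
def lambda_monthly_cost(requests, duration_s, memory_mb,
                        price_per_million=0.20,
                        price_per_gb_second=0.0000166667):
    """Consumption-based Lambda cost: request charges plus GB-seconds."""
    request_charges = requests / 1_000_000 * price_per_million
    gb_seconds = requests * duration_s * (memory_mb / 1024)
    compute_charges = gb_seconds * price_per_gb_second
    return request_charges, compute_charges

req, comp = lambda_monthly_cost(10_000_000, 0.200, 256)
print(f"Requests: ${req:.2f}, Compute: ${comp:.2f}, Total: ${req + comp:.2f}")
# → Requests: $2.00, Compute: $8.33, Total: $10.33
```

Varying the memory or duration arguments makes the pay-per-use trade-off easy to explore: halving the duration halves the compute charge, while the request charge stays fixed.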
Serverless is an excellent fit for:

- APIs and microservices with variable or unpredictable traffic
- Event-driven processing such as file uploads, queue messages, and database streams
- Scheduled jobs and periodic automation
- Prototypes and low-traffic applications where an always-on server would sit idle
Not every workload belongs on serverless:
| Scenario | Why Serverless Struggles |
|---|---|
| Long-running processes (> 15 minutes) | Lambda has a maximum timeout of 15 minutes |
| High-performance computing | Limited CPU and memory ceilings |
| Consistently high throughput | Always-on containers may be cheaper at sustained scale |
| Stateful applications | Each invocation is stateless; external state stores are required |
| Vendor lock-in concerns | Serverless services are tightly coupled to the provider |
The decision is not binary. Many production systems use a hybrid approach — serverless for event-driven components, containers or VMs for long-running or stateful workloads.
A cold start occurs when the serverless platform must create a new execution environment for your function. This involves:

1. Provisioning the execution environment and downloading your deployment package (platform init)
2. Starting the language runtime and running your initialization code (runtime init)
3. Executing your handler
Cold start timeline:
|--- Platform init ---|--- Runtime init ---|--- Handler execution ---|
~100-500 ms ~50-200 ms Your code runs
After the first invocation, the execution environment is reused for subsequent requests — this is called a warm start, which skips steps 1 and 2.
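One practical consequence: code at module level runs once per execution environment and its results are reused on warm starts, so expensive setup (SDK clients, config loading) belongs outside the handler. A sketch with illustrative names; as noted earlier, environment reuse is never guaranteed, so this is an optimization, not a place to keep state you depend on:

```python
import json

# Module-level code runs during the cold start and is reused on warm
# starts. Put expensive one-time setup (clients, parsed config) here.
EXPENSIVE_CONFIG = json.loads('{"table": "orders", "region": "us-east-1"}')

invocation_count = 0  # survives warm invocations of the same environment

def handler(event, context):
    global invocation_count
    invocation_count += 1  # illustration only: never rely on this state
    return {"table": EXPENSIVE_CONFIG["table"],
            "invocation": invocation_count}
```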
Factors that affect cold start duration:

- Runtime choice: interpreted runtimes such as Python and Node.js typically initialize faster than JVM-based ones
- Deployment package size: larger packages take longer to download and unpack
- Memory allocation: more memory also means more CPU available for initialization
- VPC attachment: connecting the function to a VPC can add network setup time
Security responsibilities shift significantly in a serverless model:
| Responsibility | Traditional (EC2) | Serverless (Lambda) |
|---|---|---|
| Physical security | Provider | Provider |
| OS patching | You | Provider |
| Runtime updates | You | Provider |
| Application code | You | You |
| IAM permissions | You | You |
| Data encryption | You | You |
| Input validation | You | You |
You still own application-level security — input validation, least-privilege IAM roles, and encryption at rest and in transit.
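Input validation is a good example of that retained responsibility. A minimal sketch of a validator for a hypothetical order payload; the field names and rules are assumptions, not a standard API:

```python
def validate_order(event):
    """Application-level input validation, which remains your job in
    both models. Reject malformed events before any business logic."""
    body = event.get("body")
    if not isinstance(body, dict):
        raise ValueError("body must be a JSON object")
    qty = body.get("quantity")
    if not isinstance(qty, int) or qty < 1:
        raise ValueError("quantity must be a positive integer")
    return {"quantity": qty}
```

Calling the validator at the top of a handler turns bad input into an explicit error response instead of a failure deep inside the function.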