
HTTP & HTTPS Protocols

HTTP/1.0 was the first widely adopted version of HTTP.

Key Characteristics:

  • New TCP connection per request: Every request requires a new TCP handshake (expensive!)
  • No persistent connections: Connection closes after each request/response
  • Simple text-based protocol: Headers and body in plain text
  • No compression: Headers sent uncompressed every time

Problems:

  • High Latency: Each TCP handshake adds ~100ms round-trip time
  • Server Resource Waste: Opening/closing connections constantly
  • Poor Performance: Loading a page with 50 resources = 50 connections
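
To make the per-request cost concrete, here is a minimal Python sketch (standard library only; example.com and the paths are placeholders): each HTTP/1.0 fetch opens its own TCP connection, and the server closes the socket after sending the response.

import socket

def fetch_http10(host, path):
    # HTTP/1.0: the server closes the connection after each response,
    # so every resource costs a fresh TCP handshake.
    with socket.create_connection((host, 80)) as sock:
        sock.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        data = b""
        while chunk := sock.recv(4096):
            data += chunk          # read until the server closes the socket
    return data

# Two resources → two separate connections (and two handshakes)
for path in ("/index.html", "/logo.png"):
    print(path, len(fetch_http10("example.com", path)), "bytes")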

HTTP/1.1 introduced major improvements to address HTTP/1.0's inefficiencies.

Key Features:

Persistent connections (keep-alive): reuse the same TCP connection for multiple requests.

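A quick sketch of keep-alive with Python's built-in http.client (host and paths are placeholders): one connection object serves several requests, as long as each response body is drained before the next request is sent.

import http.client

# One TCP connection, several requests — keep-alive is the default in HTTP/1.1
conn = http.client.HTTPConnection("example.com")
for path in ("/index.html", "/style.css", "/app.js"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                      # drain the body so the connection can be reused
    print(path, resp.status)
conn.close()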

Pipelining: send multiple requests without waiting for responses (but responses must still come back in order, so head-of-line blocking remains and browsers rarely enabled it).


Virtual hosting via the Host header: multiple domains on one IP address.

GET /index.html HTTP/1.1
Host: example.com          ← Required header

GET /about.html HTTP/1.1
Host: another.com          ← Same server, different site

HTTP/1.1 Limitations:

  • Head-of-line blocking: responses must return in request order, so one slow response stalls everything behind it
  • Headers are still sent uncompressed with every request
  • Effectively one request at a time per connection (pipelining is rarely usable in practice)

Workarounds Browsers Use:

  • Open 6-8 parallel TCP connections per domain
  • Still wasteful and limited

HTTP/2 introduced revolutionary changes to solve HTTP/1.1's problems.

Key Features:

1. Binary Framing

HTTP/2 is a binary protocol: requests and responses are split into compact binary frames instead of plain-text messages, which makes parsing faster and unambiguous.

2. Multiplexing (No More Head-of-Line Blocking!)


Multiple requests/responses over single TCP connection without blocking.

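From the client side, multiplexing can be sketched with the third-party httpx library (installed via pip install "httpx[http2]"; the URL is a placeholder and the server must negotiate HTTP/2 via ALPN): all five requests travel concurrently as separate streams over one TCP connection.

import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(http2=True) as client:
        # Issued concurrently; over HTTP/2 they share one connection as
        # independent streams instead of queueing behind each other.
        responses = await asyncio.gather(
            *(client.get(f"https://example.com/?page={i}") for i in range(5))
        )
        for r in responses:
            print(r.http_version, r.status_code)   # e.g. "HTTP/2 200"

asyncio.run(main())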

3. Header Compression (HPACK)

Headers are compressed using the HPACK algorithm.

HTTP/1.1 (Repeated Headers - Wasteful)
─────────────────────────────────────────
Request 1:
  User-Agent: Mozilla/5.0 ...        (200 bytes)
  Authorization: Bearer eyJhbG...    (300 bytes)
  Cookie: session=abc123...          (100 bytes)

Request 2:
  User-Agent: Mozilla/5.0 ...        (200 bytes) ← Duplicate!
  Authorization: Bearer eyJhbG...    (300 bytes) ← Duplicate!
  Cookie: session=abc123...          (100 bytes) ← Duplicate!

Total: 1200 bytes for 2 requests

HTTP/2 (HPACK Compression)
─────────────────────────────────────────
Request 1:
  :method: GET
  :path: /users
  User-Agent: Mozilla/5.0 ...        (200 bytes)
  Authorization: Bearer eyJhbG...    (300 bytes)
  [Stored in compression table with index]

Request 2:
  :method: GET
  :path: /posts
  User-Agent: [Reference: Index 62]      (2 bytes) ← Compressed!
  Authorization: [Reference: Index 63]   (2 bytes) ← Compressed!

Total: ~504 bytes for 2 requests (58% savings!)
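
The effect is easy to reproduce with the hpack package (the HPACK implementation behind the Python h2 library); the header names and values below are illustrative. The second header block repeats the bulky headers, so it should encode to only a few bytes of dynamic-table references.

from hpack import Encoder   # pip install hpack

bulky = [
    ("user-agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ..."),
    ("x-session-token", "eyJhbGciOiJIUzI1NiJ9." + "x" * 280),
]

enc = Encoder()
first = enc.encode([(":method", "GET"), (":path", "/users")] + bulky)
second = enc.encode([(":method", "GET"), (":path", "/posts")] + bulky)

# The first block carries the full literals (and indexes them in the dynamic table);
# the second block mostly contains short references into that table.
print(len(first), len(second))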

4. Server Push

The server can send resources before the client asks for them.


5. Stream Prioritization

The client can tell the server which resources are more important.

Feature                  HTTP/1.0           HTTP/1.1                  HTTP/2
───────────────────────────────────────────────────────────────────────────────────────
Connection               New per request    Persistent (keep-alive)   Single multiplexed
Requests/Connection      1                  Sequential                Parallel (unlimited)
Header Compression       ❌                 ❌                        ✅ (HPACK)
Binary Protocol          ❌                 ❌                        ✅
Server Push              ❌                 ❌                        ✅
Stream Priority          ❌                 ❌                        ✅
Head-of-Line Blocking    ✅ (worst)         ✅ (pipelining)           ❌ (at HTTP level)
Browser Support          Legacy             Universal                 Universal (HTTPS only)

HTTPS is HTTP with encryption via TLS/SSL. It’s not a separate protocol version; it’s HTTP running over an encrypted connection.

TLS (Transport Layer Security) is a general-purpose cryptographic protocol that secures communications. HTTPS is a specific application of TLS for the web: a protocol that relies on TLS to encrypt the connection between your browser and a website.

Key Differences from HTTP:

Feature              HTTP                           HTTPS
──────────────────────────────────────────────────────────────────────────────────
Port                 80                             443
Encryption           ❌ None                        ✅ TLS/SSL
Data Visibility      Plaintext (anyone can read)    Encrypted (only endpoints can decrypt)
Certificate          Not required                   Required (from a CA)
Browser Indicator    “Not Secure” warning           🔒 Padlock icon
SEO Ranking          Lower                          Higher (Google prefers HTTPS)
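A minimal sketch with Python's standard ssl module (example.com as a stand-in host): the client opens TCP port 443, performs the TLS handshake, and then speaks ordinary HTTP inside the encrypted channel.

import socket
import ssl

hostname = "example.com"
ctx = ssl.create_default_context()          # verifies the certificate against system CAs

with socket.create_connection((hostname, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
        print(tls.version())                # e.g. 'TLSv1.3'
        print(tls.cipher())                 # (cipher suite, protocol, key bits)
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))   # start of the decrypted response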

TLS is the encryption protocol that powers HTTPS. It evolved from SSL (Secure Sockets Layer).

Timeline:

  • SSL 1.0 (1994) - Never released
  • SSL 2.0 (1995) - Deprecated (insecure)
  • SSL 3.0 (1996) - Deprecated (POODLE attack)
  • TLS 1.0 (1999) - Based on SSL 3.0
  • TLS 1.1 (2006) - Minor improvements
  • TLS 1.2 (2008) - Still widely used
  • TLS 1.3 (2018) - Current standard (faster, more secure)
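In Python's ssl module the floor can be pinned explicitly so the deprecated versions above are never negotiated (a sketch; recent OpenSSL builds already disable SSL 3.0 and TLS 1.0/1.1 by default).

import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSL 3.0, TLS 1.0, TLS 1.1
ctx.maximum_version = ssl.TLSVersion.TLSv1_3   # allow up to the current standard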

Certificate Contents:

Certificate:
    Subject: CN=example.com
    Issuer: CN=Let's Encrypt Authority X3
    Validity:
        Not Before: Jan 1 00:00:00 2024 GMT
        Not After : Apr 1 00:00:00 2024 GMT
    Public Key: RSA 2048 bit
    Signature Algorithm: sha256WithRSAEncryption
    X509v3 Subject Alternative Name:
        DNS:example.com, DNS:*.example.com
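
The same fields can be read off a live connection with Python's ssl module; a small sketch against example.com:

import socket
import ssl

hostname = "example.com"
ctx = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
        cert = tls.getpeercert()             # parsed X.509 fields as a dict
        print(cert["subject"])               # e.g. ((('commonName', 'example.com'),),)
        print(cert["issuer"])
        print(cert["notBefore"], cert["notAfter"])
        print(cert.get("subjectAltName"))    # (('DNS', 'example.com'), ...)
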
Cipher Suite Anatomy:
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
│   │     │        │   │   │   │
│   │     │        │   │   │   └─ HMAC / hash algorithm (integrity check)
│   │     │        │   │   └───── Mode of operation (Galois/Counter)
│   │     │        │   └───────── Key size (256-bit)
│   │     │        └───────────── Symmetric encryption algorithm (AES)
│   │     └────────────────────── Certificate signature algorithm (RSA)
│   └──────────────────────────── Key exchange algorithm (ECDHE)
└──────────────────────────────── Protocol (TLS)

Modern Recommended Cipher Suites (TLS 1.3):

  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_GCM_SHA256

Deprecated/Weak (Avoid):

  • Anything with RC4, MD5, DES, 3DES
  • TLS_RSA_* (no forward secrecy)
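With Python's ssl module, the TLS 1.2 cipher list can be restricted to forward-secret AEAD suites and the weak algorithms excluded (a sketch; the TLS 1.3 suites above are configured separately by OpenSSL and are all considered strong).

import ssl

ctx = ssl.create_default_context()
# Keep ECDHE + AEAD suites for TLS 1.2; exclude static-RSA key exchange and legacy ciphers.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!kRSA:!3DES:!RC4:!MD5:!aNULL")

for suite in ctx.get_ciphers()[:5]:
    print(suite["name"])                # negotiable suites, strongest first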

TLS Termination is the process of decrypting HTTPS traffic at a proxy/load balancer instead of at the backend application server.

In large-scale applications, handling TLS encryption/decryption directly on application servers can be:

  • CPU-intensive: encryption and decryption consume significant CPU on every request
  • Operationally complex: certificates must be distributed, renewed, and rotated across many servers
  • Hard to inspect: you can’t log, monitor, or filter traffic that is still encrypted

Solution: Offload TLS to a dedicated layer (load balancer, reverse proxy, CDN).

  1. TLS Termination: The load balancer terminates TLS and forwards plain HTTP (not HTTPS) to the backend servers (see the sketch below).
  2. TLS Pass-Through: The load balancer simply proxies the encrypted HTTPS traffic straight to the backend servers.
  3. TLS Re-Encryption: The load balancer decrypts the HTTPS request for logging or filtering purposes, then re-encrypts it and sends HTTPS to the backend servers.
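A toy sketch of pattern 1 in Python (not production code): the proxy terminates TLS on port 8443 using a hypothetical cert.pem/key.pem, then relays the decrypted bytes to a plain-HTTP backend assumed to be listening on 127.0.0.1:8080.

import socket
import ssl
import threading

CERT, KEY = "cert.pem", "key.pem"        # hypothetical certificate/key for the proxy
BACKEND = ("127.0.0.1", 8080)            # hypothetical plain-HTTP application server

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(CERT, KEY)

def pump(src, dst):
    # Copy bytes one way until the source side closes.
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

listener = socket.create_server(("0.0.0.0", 8443))
tls_listener = ctx.wrap_socket(listener, server_side=True)

while True:
    client, _ = tls_listener.accept()              # TLS is terminated here
    upstream = socket.create_connection(BACKEND)   # forwarded onward as plain HTTP
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()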

1. Cloud Load Balancers (AWS, Azure, GCP)

Internet → AWS ALB (TLS Termination) → EC2 Instances (HTTP)
           (certificate managed by AWS Certificate Manager)

2. Kubernetes Ingress

Internet → Nginx Ingress (TLS Termination) → Kubernetes Pods (HTTP)
           (TLS secret stored in Kubernetes)

3. CDN (Cloudflare, Akamai)

Client → CDN Edge (TLS Termination) → Origin Server (HTTP/HTTPS)
         (CDN handles TLS and caches static content)
Pattern              Use Case                          Pros                                Cons
───────────────────────────────────────────────────────────────────────────────────────────────────────────
TLS Termination      Most web applications, APIs       Simple, fast, traffic inspection    Backend unencrypted
TLS Pass-Through     Zero-trust networks, compliance   End-to-end encryption               No inspection, higher CPU
TLS Re-Encryption    Financial services, healthcare    Best security + inspection          Complex, higher latency

If you simply configured a standard REST API to accept application/x-protobuf instead of application/json, you would only gain the serialization benefits (smaller payload size). However, you would miss out on the architectural and transport advantages that make gRPC a standard for microservices.

Here is why gRPC is more than just "REST with Protobuf."

  1. HTTP/2 Native (The “Hidden” Performance Booster)

Most REST APIs still run on HTTP/1.1 (though HTTP/2 is possible, it is not enforced). gRPC is designed strictly for HTTP/2. This difference fundamentally changes how data moves.
  • Multiplexing: In a standard REST (HTTP/1.1) call, if you need to fetch 5 resources, browsers or clients often open 5 separate TCP connections. In gRPC, a single TCP connection is established, and multiple requests/responses are “multiplexed” (sent concurrently) over that single channel without blocking each other, avoiding head-of-line blocking at the HTTP level.

  • Header Compression (HPACK): REST APIs send heavy textual headers (User-Agent, Authorization, etc.) with every single request. gRPC compresses these headers efficiently, which significantly reduces overhead for high-frequency internal calls.

  2. Streaming (Beyond Request/Response)

A plain “REST with Protobuf” setup still assumes a strict Request-Response model (the client sends one thing, the server sends one thing back).

gRPC breaks this paradigm. Because of HTTP/2 framing, gRPC supports:

    • Server-side streaming: the client sends one request, the server sends back a stream of 100 updates.

    • Client-side streaming: the client uploads a massive file chunk-by-chunk, the server replies once when done.

    • Bidirectional streaming: both sides send data independently in real time (like a chat app or stock ticker).

Implementing bidirectional streaming over standard REST usually requires messy workarounds (Long Polling, WebSockets, or Server-Sent Events), whereas in gRPC, it is a first-class citizen.
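
For illustration, this is what a bidirectional-streaming call looks like with Python gRPC, assuming a hypothetical chat.proto that defines rpc Chat (stream ChatMessage) returns (stream ChatMessage): the client passes a generator of requests and iterates the server's replies over the same HTTP/2 stream.

import grpc
import chat_pb2          # generated from the hypothetical chat.proto
import chat_pb2_grpc

def outgoing():
    # A generator of requests: gRPC sends each yielded message as it is produced.
    for text in ("hello", "how are you?", "bye"):
        yield chat_pb2.ChatMessage(text=text)

channel = grpc.insecure_channel("localhost:50051")
stub = chat_pb2_grpc.ChatServiceStub(channel)

# Both directions are open at once: we read server replies while the generator
# keeps feeding requests.
for reply in stub.Chat(outgoing()):
    print("server:", reply.text)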

  3. The “Contract First” Workflow (IDL)

If you build a REST API with Protobuf manually, you still have to maintain the “translation layer” by hand.
  • REST approach: You write the backend code. Then you write an OpenAPI (Swagger) spec (or vice versa). Then you hope the frontend developer reads the documentation correctly. If you change a field name, the client breaks at runtime.

  • gRPC approach: You define a .proto file first. The gRPC tooling generates the code for both the client and the server.

    • The client function GetUser(id) is generated for you.

    • The serialization/deserialization logic is generated for you.

    • You physically cannot call the API with the wrong parameters because the code won’t compile.

  4. Semantic Differences (Action vs. Resource)
  • REST is Resource-Oriented: It focuses on Nouns. POST /users, GET /users/123. You are constrained by HTTP verbs (GET, POST, PUT, DELETE).

  • gRPC is Action-Oriented (RPC): It focuses on Verbs. It looks like a function call. service.CreateUser(), service.CalculateRoute(). You aren’t forcing your logic to fit into HTTP verbs; you are just calling functions across the network.

user.proto

syntax = "proto3";

service UserService {
  rpc GetUser (UserRequest) returns (UserResponse);
  rpc ListUsers (Empty) returns (stream UserResponse); // Server streaming
}

message UserRequest {
  int32 id = 1;
}

message UserResponse {
  int32 id = 1;
  string name = 2;
  string email = 3;
}

message Empty {}
client.py

import grpc
import user_pb2
import user_pb2_grpc

# The client sits in your application (could be another microservice)
channel = grpc.insecure_channel('localhost:50051')
stub = user_pb2_grpc.UserServiceStub(channel)

# This looks like a local function call!
request = user_pb2.UserRequest(id=123)
response = stub.GetUser(request)
print(f"User: {response.name}, Email: {response.email}")

# Server streaming example
for user in stub.ListUsers(user_pb2.Empty()):
    print(f"Streamed user: {user.name}")

REST/JSON Request:

POST /api/users/123 HTTP/1.1
Host: example.com
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
User-Agent: Mozilla/5.0...
{"id": 123, "name": "John Doe", "email": "john@example.com"}

Size: ~350 bytes (headers + JSON)

gRPC/Protobuf Request:

:method: POST
:path: /UserService/GetUser
content-type: application/grpc+proto
[Binary: 0x08 0x7B] // Just 2 bytes for id=123!

Size: ~80 bytes (compressed headers + protobuf)
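
The payload-size claim is easy to verify with the generated Python classes (a sketch using the user.proto above): field 1 with varint value 123 serializes to exactly two bytes, 0x08 0x7B.

import json
import user_pb2   # generated from the user.proto above

payload = user_pb2.UserRequest(id=123).SerializeToString()
print(payload, len(payload))          # b'\x08{' → 2 bytes (field 1, varint 123)
print(len(json.dumps({"id": 123})))   # 11 bytes before any HTTP headers are added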

Browsers have fundamental limitations that prevent native gRPC support:


Problem: Browsers provide high-level APIs (fetch, XMLHttpRequest) that abstract HTTP/2. gRPC needs direct control over HTTP/2 frames to:

  • Send custom frame types
  • Control flow control windows
  • Manage stream priorities

gRPC relies heavily on HTTP trailers to send metadata after the response body (like error codes, status).

// Normal gRPC response with trailers
HTTP/2 200 OK
content-type: application/grpc+proto
[response data - streaming]
grpc-status: 0 ← Trailer (sent AFTER body)
grpc-message: Success ← Trailer

Browser Issue:

  • fetch() API doesn’t expose trailers in most browsers
  • Even with HTTP/2, trailers are often ignored or inaccessible in JavaScript

gRPC-Web is a modified protocol that works within browser constraints.


Envoy Proxy Configuration Example:

http_filters:
  - name: envoy.filters.http.grpc_web
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb

Translation:

  1. Request: Base64 protobuf → Binary protobuf
  2. Headers: Browser-safe headers → gRPC headers
  3. Trailers: Extract from body → Put in HTTP/2 trailers
  4. Response: Binary → Base64, Trailers → Body

Why gRPC is Not Popular with Web Apps (Client-Side)

REST API (Direct from Browser):

// Works everywhere, no setup
const response = await fetch('https://api.example.com/users/123');
const user = await response.json();
console.log(user); // Easy to debug in DevTools

gRPC-Web (Requires Proxy + Code Gen):

// 1. Need to deploy Envoy proxy
// 2. Generate JS stubs from .proto
// 3. Import generated code
import {UserServiceClient} from './generated/user_grpc_web_pb';
import {UserRequest} from './generated/user_pb';

const client = new UserServiceClient('https://api.example.com');
const request = new UserRequest();
request.setId(123);

client.getUser(request, {}, (err, response) => {
  console.log(response.toObject()); // Binary, harder to debug
});
Use Case                                    Recommended
────────────────────────────────────────────────────────
Public API for web apps                     REST/JSON
Internal microservices                      gRPC (native)
Mobile apps (native)                        gRPC (native)
Real-time dashboards (server streaming)     gRPC-Web ⚠️
Simple CRUD operations                      REST/JSON
Backend-to-backend (Node.js, Go server)     gRPC (native)
Web App Priorities           gRPC-Web   REST/JSON
─────────────────────────────────────────────────
Simple setup                    ❌          ✅
Works everywhere                ⚠️          ✅
Easy debugging (DevTools)       ❌          ✅
CDN caching                     ❌          ✅
No extra infrastructure         ❌          ✅
Human-readable payloads         ❌          ✅
Bidirectional streaming         ❌          ❌*
Type safety                     ✅          ⚠️**

*  Use WebSockets for real-time bidirectional
** Can add TypeScript types manually

Bottom Line: For browser-based web apps, REST/JSON remains king because it’s simpler and doesn’t require proxy infrastructure. gRPC shines for backend microservices where you control both ends and can use native gRPC.