REST API Design Best Practices: The Complete Guide
Everything you need to design APIs that developers actually want to use — from URL structure and HTTP semantics to versioning, error handling, and production-hardened patterns built from years of shipping real systems.
Table of Contents
- Introduction — Why API Design Matters
- REST Fundamentals
- URL Design and Naming Conventions
- HTTP Methods Deep Dive
- Request and Response Design
- Status Codes
- Authentication and Authorization
- Versioning Strategies
- Error Handling
- Rate Limiting and Throttling
- HATEOAS and Hypermedia
- API Documentation
- Conclusion
Introduction — Why API Design Matters
A well-designed API is one of the most valuable assets a software team can produce. It is not merely a technical interface between systems. It is a contract, a product, and a communication channel all at once. When you design an API, you are making promises to every developer who will integrate with it, every frontend engineer who will consume it, and every future team member who will maintain it. Those promises need to be clear, consistent, and durable. A poorly designed API creates a compounding tax on productivity that accumulates with every new consumer, every new endpoint, and every year it remains in production.
I have worked on APIs that were a joy to consume. The URLs made sense, the responses were predictable, the error messages told you exactly what went wrong, and the documentation matched reality. I have also worked on APIs where every endpoint felt like it was designed by a different person on a different day with a different philosophy. Query parameters that did the same thing but were named differently across endpoints. Responses that returned arrays in one place and objects in another. Status codes that had no correlation with what actually happened. These inconsistencies do not just slow developers down; they erode trust in the entire system.
REST (Representational State Transfer) remains the dominant architectural style for web APIs, and for good reason. It builds on the proven infrastructure of HTTP, it maps naturally to the way we think about data (as resources with operations), and it is universally understood by developers across every language and framework. But REST is an architectural style, not a strict specification. There is no RFC that tells you exactly how to name your endpoints or what your error responses should look like. That freedom is both its strength and its greatest source of inconsistency across the industry.
This guide synthesizes the conventions, patterns, and lessons that have emerged from the collective experience of the API community. These are not theoretical recommendations. They are battle-tested practices drawn from designing APIs that serve millions of requests, from reviewing hundreds of API designs in code review, and from the painful experience of maintaining APIs that got the fundamentals wrong early on. Whether you are designing your first public API or refactoring an internal service, the principles in this guide will help you build something that lasts.
REST Fundamentals
Before diving into specific patterns, it is essential to understand what REST actually requires. Roy Fielding defined REST in his 2000 doctoral dissertation as an architectural style built on six constraints. Most APIs that claim to be "RESTful" only partially implement these constraints, and that is often perfectly fine for practical purposes. But understanding the principles helps you make informed tradeoffs rather than accidental ones.
Resources, Not Actions
The most fundamental concept in REST is that everything is a resource. A resource is any concept that can be addressed and manipulated: a user, an order, a product, a payment, a search result. Resources are identified by URIs (Uniform Resource Identifiers), and you interact with them using a fixed set of HTTP methods. This is the critical mental shift from RPC-style APIs: instead of designing endpoints around verbs (actions the server performs), you design them around nouns (things that exist in your system).
# RPC-style (action-oriented) - avoid this
POST /createUser
POST /getUser
POST /deleteUser
POST /updateUserEmail
# REST-style (resource-oriented) - do this
POST /users # Create a user
GET /users/42 # Get a user
DELETE /users/42 # Delete a user
PATCH /users/42 # Update a user's fields
The REST approach is powerful because it leverages the existing semantics of HTTP. Every developer already knows what GET, POST, PUT, and DELETE mean. By mapping your operations to these standard methods, you make your API instantly more understandable and predictable. The HTTP specification already defines the behavior of each method — whether it is safe, whether it is idempotent, how caching should work — and your API inherits all of that for free.
Statelessness
Every request from client to server must contain all the information needed to understand and process the request. The server does not store any client context between requests. This means no server-side sessions that track "where the client is" in a workflow. Each request is independent and self-contained. Authentication tokens, pagination cursors, filter parameters — everything needed to fulfill the request travels with the request itself.
Statelessness is what makes REST APIs horizontally scalable. Any server in your cluster can handle any request because no server has special knowledge about any particular client. This is why JWT tokens and API keys work so well with REST — they carry the authentication context with every request, eliminating the need for server-side session storage.
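To make the "context travels with the request" idea concrete, here is a minimal sketch that extracts a caller's identity from the `Authorization` header by decoding (not verifying) a JWT payload. The function name and claim shape are assumptions for illustration; a real service must verify the token signature (e.g., with a library such as jsonwebtoken) before trusting any claims.

```javascript
// Illustration only: every request carries its own context, so any server
// can reconstruct "who is calling" without server-side session state.
// This decodes the JWT payload WITHOUT verifying the signature.
function getRequestContext(headers) {
  const auth = headers['authorization'] || '';
  const token = auth.startsWith('Bearer ') ? auth.slice(7) : null;
  if (!token) return null;
  const parts = token.split('.');
  if (parts.length !== 3) return null; // Not a JWT: header.payload.signature
  // The payload is the middle segment: base64url-encoded JSON
  const payload = JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8'));
  return { userId: payload.sub, roles: payload.roles || [] };
}
```

Because the context is rebuilt from the request alone, the same code runs identically on every server behind a load balancer.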
Uniform Interface
REST relies on a uniform interface between components. In practice, this means using standard HTTP methods with their defined semantics, identifying resources with URIs, manipulating resources through representations (typically JSON), and including self-descriptive messages that contain all the metadata needed to process them (content type headers, cache directives, etc.). The uniform interface constraint is what makes REST APIs interoperable. Any HTTP client — curl, Postman, a browser, a Python script — can interact with any REST API without special tooling or protocol negotiation.
URL Design and Naming Conventions
URLs are the most visible part of your API. They are the first thing a developer sees in documentation, the first thing they type into Postman, and the thing they will remember (or struggle to remember) months later when they need to integrate again. Good URL design is not about aesthetics. It is about creating a predictable, discoverable structure that communicates the shape of your data model through the URL itself. A developer should be able to guess what an endpoint does by reading its URL, and they should be able to predict the URL for a resource they have not seen yet based on the patterns established by the endpoints they have already used.
Core Rules
- Use nouns, not verbs. URLs identify resources; the HTTP method provides the verb. `/users`, not `/getUsers`.
- Use plural nouns for collections. `/users`, `/orders`, `/products`. This is consistent whether you are referring to the collection or a single item within it: `/users` (all users) and `/users/42` (one user).
- Use lowercase with hyphens. URLs are case-sensitive on some servers. Use kebab-case: `/order-items`, not `/orderItems` or `/order_items`.
- Nest resources to show relationships. `/users/42/orders` means "orders belonging to user 42". Limit nesting to two levels deep to avoid unwieldy URLs.
- Never include file extensions. Use content negotiation via the `Accept` header instead: `/users/42`, not `/users/42.json`.
- No trailing slashes. Pick one convention and enforce it. Most APIs omit trailing slashes and redirect when one is provided.
Good vs Bad URL Examples
| Bad URL | Good URL | Why |
|---|---|---|
| `GET /getUsers` | `GET /users` | HTTP method already implies the action |
| `POST /createOrder` | `POST /orders` | POST on a collection means "create" |
| `GET /user/42` | `GET /users/42` | Collections should always be plural |
| `GET /Users/42/Orders` | `GET /users/42/orders` | Use lowercase consistently |
| `DELETE /deleteUser?id=42` | `DELETE /users/42` | Resource ID belongs in the path, not the query string |
| `GET /users/42/orders/5/items/3/reviews` | `GET /order-items/3/reviews` | Avoid deep nesting; flatten when possible |
| `POST /users/42/activate` | `PATCH /users/42` with `{"status":"active"}` | Prefer state changes via resource updates |
| `GET /searchProducts?q=laptop` | `GET /products?q=laptop` | Search is filtering a collection, not a separate action |
Handling Actions That Do Not Map to CRUD
Sometimes you have operations that genuinely do not map cleanly to creating, reading, updating, or deleting a resource. Sending an email, running a report, or triggering a deployment are examples. For these cases, there are two pragmatic approaches. First, you can model the action as a resource: instead of POST /users/42/send-welcome-email, create a POST /emails with a body that specifies the template and recipient. Second, for truly procedural operations, it is acceptable to use a verb-based sub-resource like POST /reports/monthly/generate. The key is to reserve this pattern for exceptional cases and prefer resource-oriented design as the default.
When in doubt, model the outcome of the action as a resource. If you are sending a notification, create a notification resource. If you are processing a payment, create a payment resource. This approach produces better audit trails, makes the operation easier to make idempotent, and keeps your API consistent.
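As a sketch of the resource-oriented alternative, the core of a `POST /emails` handler might validate the request and create the email record like this (the template names, field names, and helper are illustrative assumptions, not a prescribed API):

```javascript
// Sketch: "send a welcome email" modeled as creating an email resource,
// rather than calling a verb URL like POST /users/42/send-welcome-email.
// Template and field names are illustrative.
const VALID_TEMPLATES = ['welcome', 'password-reset'];

function createEmailResource(body) {
  const { template, recipientId } = body;
  if (!VALID_TEMPLATES.includes(template)) {
    return {
      status: 422,
      error: { code: 'UNKNOWN_TEMPLATE', message: `Unknown template: ${template}` }
    };
  }
  if (!recipientId) {
    return {
      status: 422,
      error: { code: 'RECIPIENT_REQUIRED', message: 'recipientId is required' }
    };
  }
  // The created resource doubles as an audit record: it has an id and a
  // state, and can be fetched later via GET /emails/:id
  const email = { id: `em_${Date.now()}`, template, recipientId, state: 'queued' };
  return { status: 201, resource: email };
}
```

The handler behind `POST /emails` would call this, return the status, and set a `Location` header from `resource.id`.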
HTTP Methods Deep Dive
HTTP methods are the verbs of your API. Each method has specific semantics defined by the HTTP specification (RFC 9110), and respecting those semantics is non-negotiable for a well-designed API. When consumers see a GET request, they expect it to be safe (no side effects). When they see a PUT request, they expect it to be idempotent (repeating it produces the same result). Violating these expectations creates subtle bugs that are extremely difficult to diagnose, especially when intermediaries like CDNs, proxies, and browsers make assumptions based on HTTP method semantics.
GET — Retrieve a Resource
GET requests retrieve a representation of a resource. They must be safe (no side effects on the server) and idempotent (calling it multiple times produces the same result). GET requests should never create, update, or delete data. They are cacheable by default, and browsers, CDNs, and proxies will cache GET responses aggressively unless told otherwise. Never use GET requests for operations that modify state, even if it seems convenient to encode parameters in the URL.
// Express.js route handler
app.get('/api/users/:id', async (req, res) => {
const user = await User.findById(req.params.id);
if (!user) {
return res.status(404).json({
error: { code: 'USER_NOT_FOUND', message: 'User not found' }
});
}
res.json({ data: user });
});
// Client-side fetch
const response = await fetch('/api/users/42');
const { data: user } = await response.json();
POST — Create a Resource
POST requests create a new resource within a collection. The request body contains the representation of the new resource. POST is neither safe nor idempotent — calling it twice typically creates two resources. The server should respond with 201 Created and include a Location header pointing to the newly created resource. Returning the created resource in the response body saves the client a subsequent GET request and is the convention followed by most modern APIs.
app.post('/api/users', async (req, res) => {
const { name, email, role } = req.body;
const user = await User.create({ name, email, role });
res.status(201)
.location(`/api/users/${user.id}`)
.json({ data: user });
});
// Client-side
const response = await fetch('/api/users', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ name: 'Jane Doe', email: 'jane@example.com', role: 'editor' })
});
PUT — Replace a Resource
PUT replaces the entire resource with the representation in the request body. It is idempotent: sending the same PUT request multiple times produces the same result as sending it once. This means the client must send the complete resource representation, not just the fields that changed. If a field is omitted from a PUT request, it should be set to its default value or null. PUT is the right choice when the client has the full, authoritative representation of the resource and wants to replace what is on the server with exactly that.
app.put('/api/users/:id', async (req, res) => {
const { name, email, role, bio } = req.body;
// Replace the entire resource - all fields must be provided
const user = await User.findOneAndReplace(
{ _id: req.params.id },
{ name, email, role, bio },
{ returnDocument: 'after' } // Return the replaced document, not the original
);
if (!user) {
return res.status(404).json({
error: { code: 'USER_NOT_FOUND', message: 'User not found' }
});
}
res.json({ data: user });
});
PATCH — Partially Update a Resource
PATCH applies a partial modification to a resource. Unlike PUT, the client only sends the fields that need to change. PATCH is the method most developers actually want when they say "update." It is not guaranteed to be idempotent (though it often is in practice), which is an important distinction from PUT. Use PATCH when you want to update one or two fields without having to send the entire resource.
app.patch('/api/users/:id', async (req, res) => {
// Only update the fields that were provided
const updates = {};
if (req.body.name !== undefined) updates.name = req.body.name;
if (req.body.email !== undefined) updates.email = req.body.email;
if (req.body.role !== undefined) updates.role = req.body.role;
const user = await User.findByIdAndUpdate(req.params.id, updates, { new: true });
if (!user) {
return res.status(404).json({
error: { code: 'USER_NOT_FOUND', message: 'User not found' }
});
}
res.json({ data: user });
});
DELETE — Remove a Resource
DELETE removes a resource. It is idempotent: deleting a resource that has already been deleted should not produce an error (though opinions differ on whether to return 204 or 404 in this case). The response typically has no body (204 No Content). For soft-delete systems where you mark records as inactive rather than physically removing them, DELETE is still the appropriate method — the resource is no longer available through the API, which is what matters from the consumer's perspective.
app.delete('/api/users/:id', async (req, res) => {
const user = await User.findByIdAndDelete(req.params.id);
if (!user) {
return res.status(404).json({
error: { code: 'USER_NOT_FOUND', message: 'User not found' }
});
}
res.status(204).send();
});
Method Summary
| Method | CRUD | Safe | Idempotent | Request Body | Typical Response |
|---|---|---|---|---|---|
| `GET` | Read | Yes | Yes | No | 200 + resource |
| `POST` | Create | No | No | Yes | 201 + created resource |
| `PUT` | Replace | No | Yes | Yes | 200 + updated resource |
| `PATCH` | Update | No | No* | Yes | 200 + updated resource |
| `DELETE` | Delete | No | Yes | No | 204 No Content |
*PATCH is not guaranteed to be idempotent by the HTTP spec, though most JSON merge-patch implementations are idempotent in practice.
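To see why merge-style patches are idempotent in practice, here is a minimal implementation of RFC 7386 JSON Merge Patch: applying the same patch a second time leaves the resource in the state the first application produced.

```javascript
// Minimal RFC 7386 JSON Merge Patch: objects merge recursively,
// null removes a member, and anything else replaces the target wholesale.
function mergePatch(target, patch) {
  if (patch === null || typeof patch !== 'object' || Array.isArray(patch)) {
    return patch; // Non-object patches replace the target entirely
  }
  const result = (target !== null && typeof target === 'object' && !Array.isArray(target))
    ? { ...target }
    : {};
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) {
      delete result[key]; // null means "remove this member"
    } else {
      result[key] = mergePatch(result[key], value);
    }
  }
  return result;
}
```

Note that a PATCH like "append to this list" or "increment this counter" would not have this property, which is why the spec cannot guarantee idempotency for PATCH in general.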
Request and Response Design
Consistent request and response structures are the foundation of a usable API. When every endpoint returns data in the same shape, client developers can write generic handling code once and apply it everywhere. When the structure varies from endpoint to endpoint, every integration becomes a special case that requires reading the documentation, writing custom parsing logic, and hoping that the next endpoint follows the same pattern. Consistency in your response envelope is arguably more important than any individual design decision within it.
Envelope vs Flat Responses
There are two schools of thought on response structure. Flat responses return the resource directly as the top-level JSON object. Envelope responses wrap the resource in a container object that provides metadata alongside the data. Both approaches have merit, but for non-trivial APIs, I strongly recommend the envelope pattern because it provides a consistent location for pagination metadata, error information, and other context without polluting the resource itself.
// Flat response (simple, but limited)
{
"id": 42,
"name": "Jane Doe",
"email": "jane@example.com"
}
// Envelope response (recommended for production APIs)
{
"data": {
"id": 42,
"name": "Jane Doe",
"email": "jane@example.com"
},
"meta": {
"requestId": "req_abc123",
"timestamp": "2026-04-25T10:30:00Z"
}
}
Collection Responses with Pagination
Any endpoint that returns a list of resources must support pagination. Unbounded list endpoints are a ticking time bomb. The collection that has 50 items today will have 50,000 items next year, and someone's mobile app will try to load all of them at once. There are three common pagination strategies, and each has tradeoffs that matter depending on your data characteristics and access patterns.
Offset-based pagination is the simplest: the client specifies a page number or offset and a limit. It is easy to implement and allows jumping to arbitrary pages. However, it has a critical flaw: if data is inserted or deleted between requests, items can be skipped or duplicated. It also performs poorly for large offsets because the database still has to scan past all the skipped rows.
// GET /api/users?page=2&limit=20
app.get('/api/users', async (req, res) => {
const page = parseInt(req.query.page) || 1;
const limit = Math.min(parseInt(req.query.limit) || 20, 100); // Cap at 100
const offset = (page - 1) * limit;
const [users, total] = await Promise.all([
User.find().skip(offset).limit(limit),
User.countDocuments()
]);
res.json({
data: users,
pagination: {
page,
limit,
total,
totalPages: Math.ceil(total / limit),
hasNext: page * limit < total,
hasPrev: page > 1
}
});
});
Cursor-based pagination uses an opaque cursor (typically a Base64-encoded identifier) that points to the last item in the current page. The server returns the cursor, and the client includes it in the next request to fetch items after that point. This approach provides stable results even when data changes between requests and performs consistently regardless of how deep into the dataset you are. The tradeoff is that you cannot jump to an arbitrary page. Cursor-based pagination is the right choice for feeds, timelines, and any dataset that changes frequently.
// GET /api/users?cursor=eyJpZCI6NDJ9&limit=20
{
"data": [
{ "id": 43, "name": "Alice" },
{ "id": 44, "name": "Bob" }
],
"pagination": {
"limit": 20,
"nextCursor": "eyJpZCI6NjJ9",
"hasNext": true
}
}
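The opaque cursor above is typically nothing more than Base64-encoded JSON naming the last item seen. A sketch of the mechanics follows; the helper names are illustrative, and a real implementation would translate the decoded cursor into a `WHERE id > :id` query rather than scanning an in-memory array.

```javascript
// The cursor is an opaque token from the client's perspective, but on the
// server it is just base64url-encoded JSON identifying the last-seen item.
function encodeCursor(lastItem) {
  return Buffer.from(JSON.stringify({ id: lastItem.id })).toString('base64url');
}

function decodeCursor(cursor) {
  try {
    return JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8'));
  } catch {
    return null; // Treat a malformed cursor as "start from the beginning"
  }
}

function paginate(items, cursor, limit) {
  const decoded = cursor ? decodeCursor(cursor) : null;
  // In a real API this is a WHERE id > :id database query, not an array scan
  const start = decoded ? items.findIndex(i => i.id === decoded.id) + 1 : 0;
  const page = items.slice(start, start + limit);
  const hasNext = start + limit < items.length;
  return {
    data: page,
    pagination: {
      limit,
      nextCursor: hasNext ? encodeCursor(page[page.length - 1]) : null,
      hasNext
    }
  };
}
```

Keeping the cursor opaque is deliberate: it lets you change the underlying key (id, timestamp, composite) later without breaking clients that merely echo the token back.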
Filtering, Sorting, and Field Selection
Collection endpoints should support filtering, sorting, and field selection via query parameters. These features reduce bandwidth usage, minimize client-side processing, and allow consumers to get exactly the data they need in a single request. A consistent pattern across all collection endpoints makes the API predictable and reduces the learning curve for new consumers.
# Filtering - use field names as query parameters
GET /api/users?role=admin&status=active
# Sorting - use a sort parameter with field name and direction
GET /api/orders?sort=-created_at # Descending by creation date
GET /api/products?sort=price,-rating # Ascending price, then descending rating
# Field selection - specify which fields to return
GET /api/users?fields=id,name,email
# Combined
GET /api/orders?status=shipped&sort=-created_at&fields=id,total,shipped_at&limit=10
Always enforce a server-side maximum page size so a client cannot pass `?limit=999999` to return the entire dataset. This protects both your server and your consumers from accidental denial-of-service.
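The `sort=-created_at` convention shown above can be parsed into structured criteria with a small helper. An allowlist keeps clients from sorting on arbitrary (possibly unindexed) columns; the function name is an illustrative assumption.

```javascript
// Parse "price,-rating" into [{ field, direction }] pairs.
// A leading "-" means descending; anything not in the allowlist is dropped.
function parseSort(sortParam, allowedFields) {
  if (!sortParam) return [];
  return sortParam.split(',')
    .map(s => s.trim())
    .filter(Boolean) // Ignore empty segments from stray commas
    .map(s => s.startsWith('-')
      ? { field: s.slice(1), direction: 'desc' }
      : { field: s, direction: 'asc' })
    .filter(({ field }) => allowedFields.includes(field));
}
```

The resulting array maps directly onto an ORDER BY clause or a MongoDB sort object.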
Status Codes
HTTP status codes are the first piece of information a client receives about the result of their request, and they should be meaningful. A well-chosen status code tells the client what happened before they even parse the response body. A poorly chosen one forces them to read the body, guess at the meaning, and write special-case handling for ambiguous situations. The HTTP specification defines dozens of status codes, but you only need about fifteen of them for a well-designed API. Using the correct code for each situation allows client libraries, monitoring tools, and intermediary proxies to handle responses correctly without custom logic.
Essential Status Codes Reference
| Code | Name | When to Use |
|---|---|---|
| `200` | OK | Successful GET, PUT, PATCH, or DELETE that returns data |
| `201` | Created | Successful POST that creates a new resource |
| `204` | No Content | Successful DELETE or PUT/PATCH with no response body |
| `301` | Moved Permanently | Resource has been permanently moved to a new URL |
| `304` | Not Modified | Conditional GET where the resource has not changed (ETag/If-None-Match) |
| `400` | Bad Request | Malformed request syntax, invalid JSON, or validation errors |
| `401` | Unauthorized | Missing or invalid authentication credentials |
| `403` | Forbidden | Authenticated but insufficient permissions for this resource |
| `404` | Not Found | Resource does not exist at this URL |
| `405` | Method Not Allowed | HTTP method not supported for this endpoint (e.g., DELETE on a read-only resource) |
| `409` | Conflict | Request conflicts with current state (e.g., duplicate email, version mismatch) |
| `422` | Unprocessable Entity | Syntactically valid request but semantically invalid (e.g., invalid email format) |
| `429` | Too Many Requests | Rate limit exceeded. Include a `Retry-After` header. |
| `500` | Internal Server Error | Unexpected server error. Log the details, return a generic message. |
| `503` | Service Unavailable | Server temporarily unable to handle the request (maintenance, overload) |
A common anti-pattern is to return 200 OK for every response and embed the actual status in the response body (e.g., {"success": false, "error": "Not found"}). This breaks HTTP semantics entirely. Monitoring tools cannot distinguish errors from successes. CDNs cannot cache appropriately. Retry logic cannot determine whether a request is safe to retry. Always use the correct HTTP status code.
400 vs 422: The Practical Distinction
The difference between 400 Bad Request and 422 Unprocessable Entity confuses many developers. Use 400 when the request is syntactically malformed — invalid JSON, missing required headers, or a request body that cannot be parsed. Use 422 when the request is syntactically valid JSON but fails business validation — an email address that does not match the expected format, a quantity that is negative, or a date range where the start is after the end. In practice, many APIs use 400 for both cases, and that is acceptable. The important thing is to be consistent and to include a descriptive error message in the response body regardless of which code you choose.
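One way to implement the split is to separate parse failures from validation failures, as in this sketch (the field rules, error codes, and function name are illustrative):

```javascript
// 400 for a body that cannot even be parsed; 422 for well-formed JSON
// that fails business validation. Rules shown are illustrative.
function classifyRequest(rawBody) {
  let body;
  try {
    body = JSON.parse(rawBody);
  } catch {
    return {
      status: 400,
      error: { code: 'MALFORMED_JSON', message: 'Request body is not valid JSON' }
    };
  }
  const details = [];
  if (typeof body.email !== 'string' || !/^[^@\s]+@[^@\s]+$/.test(body.email)) {
    details.push({ field: 'email', message: 'Must be a valid email address' });
  }
  if (!Number.isInteger(body.quantity) || body.quantity <= 0) {
    details.push({ field: 'quantity', message: 'Must be a positive integer' });
  }
  if (details.length > 0) {
    return {
      status: 422,
      error: { code: 'VALIDATION_ERROR', message: 'Request validation failed', details }
    };
  }
  return { status: 200, body };
}
```

Whichever codes you choose, the `details` array gives clients something they can map field-by-field onto a form.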
Authentication and Authorization
Authentication (who are you?) and authorization (what are you allowed to do?) are distinct concerns, but they work together in every API. Choosing the right authentication mechanism depends on who your consumers are, how sensitive your data is, and what level of granularity you need for access control. There are three primary approaches for REST APIs, and understanding the tradeoffs between them is essential for making the right choice for your system.
API Keys
API keys are the simplest form of authentication. The consumer includes a unique key with every request, typically via a custom header or query parameter. API keys are best suited for server-to-server communication where you need to identify and rate-limit consumers but do not need user-level authentication. They should never be used for user-facing applications because they do not represent a user — they represent an application or integration.
# API key in a custom header (recommended)
GET /api/products HTTP/1.1
Host: api.example.com
X-API-Key: sk_live_a1b2c3d4e5f6g7h8i9j0
# API key as a query parameter (less secure - logged in URLs)
GET /api/products?api_key=sk_live_a1b2c3d4e5f6g7h8i9j0
// API key middleware
async function apiKeyAuth(req, res, next) {
const apiKey = req.headers['x-api-key'];
if (!apiKey) {
return res.status(401).json({
error: { code: 'API_KEY_MISSING', message: 'X-API-Key header is required' }
});
}
const client = await ApiKey.findByKey(apiKey);
if (!client || client.revoked) {
return res.status(401).json({
error: { code: 'API_KEY_INVALID', message: 'Invalid or revoked API key' }
});
}
req.client = client;
next();
}
Bearer Tokens (JWT)
Bearer token authentication uses the Authorization header with the Bearer scheme. The token is typically a JWT that contains the user's identity and permissions, signed by the server. This is the standard approach for user-facing applications and SPAs. The client obtains a token by authenticating with credentials (username and password, social login, etc.) and includes the token with subsequent requests. Bearer tokens are the most common authentication mechanism for modern REST APIs.
# Bearer token authentication
GET /api/users/me HTTP/1.1
Host: api.example.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
// Client-side token management
class ApiClient {
constructor(baseUrl) {
this.baseUrl = baseUrl;
this.accessToken = null;
}
async login(email, password) {
const response = await fetch(`${this.baseUrl}/auth/login`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email, password }),
credentials: 'include' // Send cookies for refresh token
});
const data = await response.json();
this.accessToken = data.accessToken;
}
async request(path, options = {}) {
const response = await fetch(`${this.baseUrl}${path}`, {
...options,
headers: {
...options.headers,
'Authorization': `Bearer ${this.accessToken}`,
'Content-Type': 'application/json'
}
});
if (response.status === 401) {
await this.refreshToken();
return this.request(path, options); // Retry with new token
}
return response.json();
}
}
OAuth 2.0
OAuth 2.0 is an authorization framework designed for delegated access. It is the right choice when third-party applications need access to your users' data (e.g., a third-party analytics tool accessing your users' order data with their permission), when you need federated identity (Sign in with Google, GitHub, etc.), or when you need fine-grained, revocable scopes that users can control. OAuth 2.0 is more complex than API keys or simple bearer tokens, but the complexity is justified when you need the security properties it provides. For most first-party applications (your frontend talking to your backend), a simpler JWT-based approach is sufficient.
Versioning Strategies
API versioning is how you evolve your API without breaking existing consumers. Every successful API will eventually need breaking changes — a field renamed, a response structure reorganized, an endpoint deprecated and replaced. Without a versioning strategy, you are trapped: every change risks breaking someone's integration, and the fear of breaking changes leads to API stagnation where you accumulate technical debt rather than improving the design. The question is not whether to version your API, but how.
Strategy Comparison
| Strategy | Example | Pros | Cons |
|---|---|---|---|
| URL Path | `/api/v1/users` | Highly visible, easy to understand, easy to route, cacheable | Not truly RESTful (resource URL changes), proliferates URL paths |
| Custom Header | `X-API-Version: 2` | Clean URLs, resource identity preserved | Not visible in browser, easy to forget, harder to test |
| Accept Header | `Accept: application/vnd.api+json;v=2` | Most RESTful, follows content negotiation | Complex, difficult to test manually, less tooling support |
| Query Parameter | `/api/users?version=2` | Easy to use, visible, does not change base URL | Clutters query string, breaks caching, conflates versioning with filtering |
Recommendation: URL Path Versioning
For most APIs, URL path versioning (/api/v1/users, /api/v2/users) is the pragmatic choice. It is immediately visible in every request, trivial to route at the infrastructure level (load balancers, API gateways), and universally understood by developers. The theoretical objection that it changes the resource's identity is valid from a strict REST perspective but rarely matters in practice. GitHub, Stripe, Twilio, and the majority of successful public APIs use URL path versioning. The practical benefits far outweigh the theoretical purity of content negotiation.
// Express.js version routing
const express = require('express');
const app = express();
const v1Router = require('./routes/v1');
const v2Router = require('./routes/v2');
app.use('/api/v1', v1Router);
app.use('/api/v2', v2Router);
// Redirect unversioned requests to the latest stable version.
// 308 (not 301) preserves the HTTP method across the redirect.
app.use('/api/users', (req, res) => {
res.redirect(308, req.originalUrl.replace(/^\/api\//, '/api/v2/'));
});
Versioning Best Practices
- Avoid versioning if possible. Use additive changes (new fields, new endpoints) that do not break existing consumers. Only bump the version for genuine breaking changes.
- Support at least two versions simultaneously. When you release v2, continue supporting v1 for a deprecation period (typically 6-12 months for public APIs).
- Communicate deprecation clearly. Use the `Deprecation` and `Sunset` HTTP headers. Document timelines. Send emails to API key owners.
- Keep a changelog. Every API change, breaking or not, should be documented with the date, the change, and migration instructions if applicable.
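A sketch of emitting those deprecation signals as headers. The `Deprecation: true` form follows the earlier IETF draft that most shipped APIs use; `Sunset` (RFC 8594) carries an HTTP-date for planned removal. The function name, sunset date, and successor link are illustrative assumptions.

```javascript
// Build deprecation response headers for a retired API version.
// "Deprecation: true" is the widely deployed draft form; Sunset (RFC 8594)
// names the date after which the version may stop working.
function deprecationHeaders(sunsetDate, successorUrl) {
  return {
    'Deprecation': 'true',
    'Sunset': sunsetDate.toUTCString(), // HTTP-date of planned removal
    'Link': `<${successorUrl}>; rel="successor-version"`
  };
}
```

In Express, a small middleware on the `/api/v1` router could apply these headers to every v1 response so no consumer can claim they were not warned.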
Error Handling
Error handling is where API design meets reality. In the happy path, every API looks great. It is the error cases that reveal the quality of the design. When something goes wrong, your consumer needs three things: a machine-readable code they can match on programmatically, a human-readable message they can display or log, and enough context to understand what went wrong and how to fix it. A consistent error format across your entire API is non-negotiable. If each endpoint returns errors in a different shape, every consumer has to write custom error handling for every endpoint, and the debugging experience becomes a nightmare.
Consistent Error Response Format
// Standard error response format
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{
"field": "email",
"message": "Must be a valid email address",
"value": "not-an-email"
},
{
"field": "age",
"message": "Must be a positive integer",
"value": -5
}
],
"requestId": "req_abc123def456"
}
}
// Not found error
{
"error": {
"code": "RESOURCE_NOT_FOUND",
"message": "User with ID 42 not found",
"requestId": "req_xyz789ghi012"
}
}
// Authentication error
{
"error": {
"code": "TOKEN_EXPIRED",
"message": "Access token has expired. Please refresh your token.",
"requestId": "req_mno345pqr678"
}
}
// Rate limit error
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Too many requests. Please retry after 30 seconds.",
"retryAfter": 30,
"requestId": "req_stu901vwx234"
}
}
Error Handling Middleware
// Centralized error handling in Express
const crypto = require('crypto'); // For randomUUID fallback request IDs
class AppError extends Error {
constructor(statusCode, code, message, details = null) {
super(message);
this.statusCode = statusCode;
this.code = code;
this.details = details;
}
}
// Error handler middleware (must have 4 parameters)
function errorHandler(err, req, res, next) {
const requestId = req.headers['x-request-id'] || crypto.randomUUID();
// Known application errors
if (err instanceof AppError) {
return res.status(err.statusCode).json({
error: {
code: err.code,
message: err.message,
details: err.details,
requestId
}
});
}
// Unexpected errors - log full details, return generic message
console.error(`[${requestId}] Unhandled error:`, err);
res.status(500).json({
error: {
code: 'INTERNAL_ERROR',
message: 'An unexpected error occurred. Please try again later.',
requestId
}
});
}
// Usage in route handlers
app.get('/api/users/:id', async (req, res, next) => {
try {
const user = await User.findById(req.params.id);
if (!user) {
throw new AppError(404, 'USER_NOT_FOUND', `User ${req.params.id} not found`);
}
res.json({ data: user });
} catch (err) {
next(err);
}
});
Rate Limiting and Throttling
Rate limiting protects your API from abuse, prevents accidental denial-of-service from misbehaving clients, and ensures fair usage across all consumers. Without rate limiting, a single client can monopolize your server resources, degrade performance for everyone else, and potentially bring down your entire system. Every production API needs rate limiting, no exceptions. The question is how to implement it in a way that is fair, transparent, and easy for consumers to work with.
Rate Limit Headers
The standard practice is to communicate rate limit information via response headers so clients can adapt their behavior proactively rather than hitting the wall and backing off reactively. While there is an IETF draft standard (RateLimit header fields), many APIs use the de facto standard headers popularized by GitHub and Twitter.
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000 # Maximum requests per window
X-RateLimit-Remaining: 847 # Requests remaining in current window
X-RateLimit-Reset: 1714089600 # Unix timestamp when the window resets
Retry-After: 30 # Seconds to wait (only on 429 responses)
// Rate limiting middleware using a fixed window (express-rate-limit's default)
const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minute window
  max: 1000,                // Limit each IP/key to 1000 requests per window
  standardHeaders: true,    // Return rate limit info in RateLimit-* headers
  legacyHeaders: true,      // Also return X-RateLimit-* headers
  keyGenerator: (req) => {
    // Rate limit by API key if present, otherwise by IP
    return req.headers['x-api-key'] || req.ip;
  },
  handler: (req, res) => {
    res.status(429).json({
      error: {
        code: 'RATE_LIMIT_EXCEEDED',
        message: 'Too many requests. Please slow down.',
        // resetTime is a Date; convert it to "seconds from now" for the client
        retryAfter: Math.max(0, Math.ceil((req.rateLimit.resetTime - Date.now()) / 1000))
      }
    });
  }
});

app.use('/api/', apiLimiter);
Tiered Rate Limits
Production APIs typically implement tiered rate limits based on the consumer's plan or role. Free-tier consumers might be limited to 100 requests per minute, while enterprise consumers might have 10,000 requests per minute. Different endpoints may also have different limits — a search endpoint that hits the database hard should have a stricter limit than a simple lookup endpoint. Communicate these tiers clearly in your documentation and in the rate limit headers so consumers always know where they stand.
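The tiering logic above can be sketched as a plain lookup table plus a per-key counter. The plan names and limits below are hypothetical (the "pro" tier in particular is an assumption), and a production deployment would keep the counters in a shared store such as Redis rather than in process memory:

```javascript
// Hypothetical plan tiers: limits per fixed window, keyed by plan name.
const TIER_LIMITS = {
  free:       { limit: 100,   windowMs: 60_000 },
  pro:        { limit: 1_000, windowMs: 60_000 },
  enterprise: { limit: 10_000, windowMs: 60_000 },
};

// In-memory fixed-window counters; use a shared store (e.g. Redis) in production.
const windows = new Map();

function checkRateLimit(apiKey, plan, now = Date.now()) {
  // Unknown plans fall back to the most restrictive tier.
  const { limit, windowMs } = TIER_LIMITS[plan] || TIER_LIMITS.free;
  const windowStart = Math.floor(now / windowMs) * windowMs;
  const bucketKey = `${apiKey}:${windowStart}`;
  const used = (windows.get(bucketKey) || 0) + 1;
  windows.set(bucketKey, used);
  return {
    allowed: used <= limit,
    limit,                                   // feeds X-RateLimit-Limit
    remaining: Math.max(0, limit - used),    // feeds X-RateLimit-Remaining
    reset: Math.ceil((windowStart + windowMs) / 1000), // Unix seconds, feeds X-RateLimit-Reset
  };
}
```

The return value maps directly onto the rate limit headers described earlier, so a middleware can set the headers and decide whether to return 429 from a single call.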
HATEOAS and Hypermedia
HATEOAS (Hypermedia as the Engine of Application State) is the constraint that most "REST" APIs ignore entirely. In a truly RESTful API, the server guides the client through available actions by including hyperlinks in responses. Instead of the client hardcoding URLs and knowing the API structure in advance, it discovers available operations dynamically by following links. The concept is identical to how you navigate a website: you do not memorize URLs, you follow links from one page to another.
// Response with HATEOAS links
{
  "data": {
    "id": 42,
    "name": "Jane Doe",
    "email": "jane@example.com",
    "status": "active"
  },
  "links": {
    "self": { "href": "/api/v1/users/42", "method": "GET" },
    "update": { "href": "/api/v1/users/42", "method": "PATCH" },
    "delete": { "href": "/api/v1/users/42", "method": "DELETE" },
    "orders": { "href": "/api/v1/users/42/orders", "method": "GET" },
    "deactivate": { "href": "/api/v1/users/42/deactivate", "method": "POST" }
  }
}

// Collection response with pagination links
{
  "data": [ ... ],
  "links": {
    "self": { "href": "/api/v1/users?page=2&limit=20" },
    "first": { "href": "/api/v1/users?page=1&limit=20" },
    "prev": { "href": "/api/v1/users?page=1&limit=20" },
    "next": { "href": "/api/v1/users?page=3&limit=20" },
    "last": { "href": "/api/v1/users?page=5&limit=20" }
  },
  "pagination": {
    "page": 2,
    "limit": 20,
    "total": 97
  }
}
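Pagination links like these are mechanical to generate, so the logic belongs in a shared helper rather than being repeated in every handler. A minimal sketch, following the path and parameter names in the example above:

```javascript
// Build HATEOAS-style pagination links for a collection response.
function buildPaginationLinks(basePath, page, limit, total) {
  const lastPage = Math.max(1, Math.ceil(total / limit));
  const href = (p) => `${basePath}?page=${p}&limit=${limit}`;
  const links = {
    self: { href: href(page) },
    first: { href: href(1) },
    last: { href: href(lastPage) },
  };
  // prev/next are omitted at the boundaries, so clients can simply
  // check for a key's presence instead of comparing page numbers.
  if (page > 1) links.prev = { href: href(page - 1) };
  if (page < lastPage) links.next = { href: href(page + 1) };
  return links;
}

// Example: page 2 of 97 users at 20 per page
// buildPaginationLinks('/api/v1/users', 2, 20, 97)
```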
The Pragmatic View
Full HATEOAS implementation is rare in the wild. Most API consumers — especially frontend developers working with a known backend — prefer hardcoded URLs that they can construct client-side. Building a generic HATEOAS client that discovers and navigates links dynamically is significantly more complex than calling known endpoints, and the benefits are marginal when the API and client are maintained by the same team.
That said, there are specific cases where hypermedia links add real value. Pagination links eliminate the need for clients to construct URLs and handle edge cases around page boundaries. State-dependent action links (showing "deactivate" only for active users) communicate available operations based on the current state of the resource. Workflow-driven APIs where the next steps depend on the current state (order processing, approval workflows) benefit enormously from HATEOAS because the server controls the state machine and tells the client what is possible at each step.
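State-dependent action links can be computed from the resource itself. A minimal sketch, where the activate/deactivate transitions are assumptions based on the earlier user example:

```javascript
// Derive action links from the resource's current state.
function buildUserLinks(user, base = '/api/v1') {
  const self = `${base}/users/${user.id}`;
  const links = {
    self: { href: self, method: 'GET' },
    update: { href: self, method: 'PATCH' },
    delete: { href: self, method: 'DELETE' },
  };
  // Only advertise the state transition that is currently valid,
  // so clients never have to replicate the server's state machine.
  if (user.status === 'active') {
    links.deactivate = { href: `${self}/deactivate`, method: 'POST' };
  } else if (user.status === 'inactive') {
    links.activate = { href: `${self}/activate`, method: 'POST' };
  }
  return links;
}
```

The client's rendering logic then reduces to "show a button for each link present", which is exactly the benefit workflow-driven APIs get from hypermedia.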
API Documentation
An API without documentation is an API that nobody will use correctly. Documentation is not an afterthought or a nice-to-have. It is a core component of your API product. The best API documentation is generated from a machine-readable specification, kept in sync with the actual implementation, includes runnable examples, and covers both the happy path and the error cases. The industry standard for REST API documentation is the OpenAPI Specification (formerly known as Swagger), and there is no compelling reason to use anything else.
OpenAPI Specification
The OpenAPI Specification (OAS) is a language-agnostic standard for describing REST APIs. You write a YAML or JSON file that describes your endpoints, request and response schemas, authentication methods, and error formats. From that specification, you can generate interactive documentation, client SDKs, server stubs, and automated tests. The specification serves as the single source of truth for what your API does and how to use it.
# openapi.yaml (excerpt)
openapi: 3.1.0
info:
  title: User Management API
  version: 1.0.0
  description: API for managing user accounts
paths:
  /api/v1/users:
    get:
      summary: List all users
      parameters:
        - name: page
          in: query
          schema:
            type: integer
            default: 1
        - name: limit
          in: query
          schema:
            type: integer
            default: 20
            maximum: 100
      responses:
        '200':
          description: Paginated list of users
          content:
            application/json:
              schema:
                type: object
                properties:
                  data:
                    type: array
                    items:
                      $ref: '#/components/schemas/User'
                  pagination:
                    $ref: '#/components/schemas/Pagination'
    post:
      summary: Create a new user
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateUserRequest'
      responses:
        '201':
          description: User created successfully
  /api/v1/users/{id}:
    get:
      summary: Get a user by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: User found
        '404':
          description: User not found
components:
  schemas:
    User:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
        email:
          type: string
          format: email
        role:
          type: string
          enum: [admin, editor, viewer]
        createdAt:
          type: string
          format: date-time
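Once parsed, an OpenAPI document is plain data that tooling can traverse; this is how doc generators, SDK generators, and linters work. A rough sketch that lists the endpoints of a parsed spec object (the traversal covers only paths and summaries, not the full specification):

```javascript
// Walk a parsed OpenAPI document and list "METHOD path - summary" entries.
function listEndpoints(spec) {
  const methods = ['get', 'post', 'put', 'patch', 'delete'];
  const endpoints = [];
  for (const [path, ops] of Object.entries(spec.paths || {})) {
    for (const method of methods) {
      if (ops[method]) {
        endpoints.push(`${method.toUpperCase()} ${path} - ${ops[method].summary || ''}`);
      }
    }
  }
  return endpoints;
}
```

Running this against the excerpt above would yield three entries, one per operation, which is a handy smoke test that the spec and the implemented routes agree.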
Documentation Best Practices
- Write descriptions for every endpoint, parameter, and schema field. Do not rely on the name alone. A field called status could mean anything — document the possible values and what each one means.
- Include request and response examples. Developers read examples first and definitions second. Make sure every endpoint has at least one complete example request and response.
- Document error responses. Every endpoint should list the possible error codes and what triggers them. This is the part most documentation gets wrong or skips entirely.
- Keep the spec in version control. The OpenAPI spec should live alongside your code and be updated in the same pull request that changes the API. Stale documentation is worse than no documentation because it actively misleads consumers.
- Use tools like Swagger UI or Redoc to generate interactive documentation from your OpenAPI spec. These tools let developers try API calls directly from the documentation page, which drastically reduces the time to first successful integration.
Conclusion
Designing a good REST API is not about memorizing a list of rules. It is about building empathy for the developers who will consume your API and making decisions that respect their time, their mental model, and their need for consistency. The principles in this guide — resource-oriented URLs, correct HTTP method usage, consistent response envelopes, meaningful status codes, clear error messages, transparent rate limiting, and living documentation — are not arbitrary conventions. They are patterns that have emerged from decades of collective experience building and consuming APIs at scale.
The most important takeaway from this guide is consistency. A consistently designed API with slightly unconventional choices is vastly better than an API that follows every best practice but does so inconsistently. If you decide to use camelCase for JSON fields, use it everywhere. If you decide to return 400 for all validation errors instead of distinguishing between 400 and 422, that is fine — just do it consistently. Developers can adapt to any convention quickly as long as it is predictable. What they cannot adapt to is an API where every endpoint is a surprise.
Start with the fundamentals: get your URL structure right, use HTTP methods correctly, and establish a consistent response format. Then layer on authentication, versioning, and rate limiting as your needs grow. Document everything from day one, not because documentation is fun, but because undocumented behavior is undefined behavior — every consumer will interpret it differently, and you will spend more time answering support questions than you ever would have spent writing documentation. Build your API as if the next person to use it will be someone you have never met, because in most cases, it will be.
APIs are products. Treat them with the same rigor you would apply to any user-facing product: design intentionally, test thoroughly, iterate based on feedback, and never ship breaking changes without a migration path. The investment you make in good API design today pays dividends for years in reduced integration friction, fewer support tickets, and a developer community that recommends your platform because it is a pleasure to work with.