OAuth 2.0 and OpenID Connect: The Definitive Guide for Developers
A deep dive into the authorization and authentication protocols that power the modern web — from grant types and PKCE flows to implementing social login and hardening your token lifecycle. Written from years of building identity systems in production.
Table of Contents
- Introduction: Authentication vs Authorization
- OAuth 2.0 Fundamentals
- OAuth 2.0 Grant Types
- Authorization Code Flow with PKCE
- OpenID Connect Layer
- Implementing Login with Google/GitHub in Node.js
- Token Management
- Security Best Practices
- OAuth 2.0 vs JWT vs Session Auth
- Common Mistakes and Vulnerabilities
- Conclusion
Introduction: Authentication vs Authorization
Every developer encounters the terms authentication and authorization early in their career, and nearly every developer conflates them at least once. The distinction matters enormously when working with OAuth 2.0 and OpenID Connect, because these two protocols were designed to solve fundamentally different problems, and confusing them leads to architectures that are either insecure or unnecessarily complex.
Authentication answers the question "Who are you?" It is the process of verifying a user's identity. When you type your password into a login form, when you scan your fingerprint, when you enter a one-time code from an authenticator app — all of these are authentication mechanisms. The system is confirming that you are who you claim to be.
Authorization answers the question "What are you allowed to do?" It is the process of determining what resources or actions an authenticated user can access. After the system knows who you are, it needs to decide whether you can read this file, delete that record, or access that API endpoint. Authorization is about permissions and access control.
Here is where things get interesting and where most confusion originates. OAuth 2.0 is an authorization framework. It was designed to let a user grant a third-party application limited access to their resources on another service, without sharing their password. The classic example: you want a photo printing service to access your Google Photos. You do not give the printing service your Google password. Instead, you authorize it through OAuth 2.0 to access only your photos, nothing else.
OAuth 2.0 was never designed to tell the third-party application who you are. It only grants access. This is why OpenID Connect (OIDC) was built on top of OAuth 2.0 — it adds an authentication layer. OIDC introduces the concept of an ID token, which is a signed assertion about the user's identity. When you see a "Sign in with Google" button, that is OpenID Connect in action. The application is not just getting access to your resources; it is learning who you are.
I have seen teams attempt to use raw OAuth 2.0 access tokens to determine user identity. They call the provider's user info endpoint with the access token, extract the email, and treat that as authentication. It works in the happy path, but it misses critical security guarantees that OpenID Connect provides: audience validation, nonce verification, and cryptographic proof that the identity assertion was issued for your specific application. Skipping OIDC and rolling your own identity extraction from OAuth 2.0 is like building a lock and forgetting to install the deadbolt. It looks secure until someone actually tests it.
OAuth 2.0 Fundamentals
OAuth 2.0, defined in RFC 6749, introduces four distinct roles that participate in every authorization flow. Understanding these roles is the foundation for understanding every grant type, every token exchange, and every security consideration that follows.
The Four Roles
Resource Owner: This is typically the end user — the person who owns the data and can grant access to it. When you authorize a third-party app to access your GitHub repositories, you are the resource owner. You own those repositories, and you are granting permission for another application to interact with them on your behalf.
Client: The application that wants to access the resource owner's data. This is the third-party app requesting access. In the GitHub example, the CI/CD tool or project management app requesting access to your repositories is the client. Clients are classified as either confidential (they can securely store a client secret, like a backend server) or public (they cannot, like a single-page application or mobile app).
Authorization Server: The server that authenticates the resource owner and issues access tokens to the client after obtaining authorization. This is the server that displays the "Allow this app to access your account?" consent screen. Google, GitHub, Auth0, Okta, and Keycloak all function as authorization servers. The authorization server is responsible for verifying the resource owner's identity, obtaining their consent, and minting tokens.
Resource Server: The server that hosts the protected resources. It accepts and validates access tokens. When the client presents an access token to the GitHub API to list repositories, the GitHub API is the resource server. It verifies the token and returns the requested data if the token has the necessary scopes.
How the Roles Interact
The typical flow works like this. The client redirects the resource owner to the authorization server. The authorization server authenticates the resource owner (login screen) and asks for consent (permission screen). If the resource owner consents, the authorization server sends an authorization grant back to the client (usually via a redirect). The client exchanges this grant for an access token at the authorization server's token endpoint. The client uses the access token to request resources from the resource server.
This indirection is the core insight of OAuth 2.0. The client never sees the resource owner's credentials. The resource owner never gives their password to the client. The authorization server acts as a trusted intermediary that issues scoped, time-limited tokens. If the client is compromised, the attacker gets an access token with limited scope and limited lifetime, not the user's password.
Scopes
Scopes define the boundaries of the access being granted. When a client requests authorization, it specifies which scopes it needs. The resource owner sees these scopes on the consent screen and can approve or deny them. Common examples include read:user, repo, openid, profile, and email. Scopes are strings that the authorization server and resource server agree on; OAuth 2.0 does not define specific scopes.
# Example authorization request with scopes
https://accounts.google.com/o/oauth2/v2/auth?
client_id=YOUR_CLIENT_ID&
redirect_uri=https://yourapp.com/callback&
response_type=code&
scope=openid%20email%20profile&
state=random_state_value
The principle of least privilege applies directly here. Request only the scopes your application actually needs. If you only need the user's email for authentication, do not request access to their calendar, drive, or contacts. Users notice when an application asks for excessive permissions, and it erodes trust.
OAuth 2.0 Grant Types
OAuth 2.0 defines several grant types (also called flows), each designed for a specific type of client and use case. Choosing the wrong grant type is one of the most common architectural mistakes in OAuth implementations. Here is a detailed comparison of the grant types you will encounter in modern applications.
| Grant Type | Client Type | Use Case | User Interaction | Security Level |
|---|---|---|---|---|
| Authorization Code | Confidential (server-side) | Traditional web apps with a backend | Yes (browser redirect) | High |
| Authorization Code + PKCE | Public (SPA, mobile, CLI) | SPAs, mobile apps, native apps | Yes (browser redirect) | High |
| Client Credentials | Confidential (machine-to-machine) | Service-to-service communication, cron jobs, daemons | No | High |
| Device Code | Public (input-limited devices) | Smart TVs, IoT devices, CLI tools | Yes (on a separate device) | Medium |
| Implicit (deprecated) | Public | Legacy SPAs (do not use for new projects) | Yes | Low |
| Resource Owner Password (deprecated) | Confidential (highly trusted) | Legacy migration only | Yes (direct credential entry) | Low |
Authorization Code Grant
The Authorization Code grant is the most secure interactive flow. It is designed for confidential clients — applications that have a backend server capable of securely storing a client secret. The flow involves two requests: first, the user is redirected to the authorization server to authenticate and consent. The authorization server redirects back with a short-lived authorization code. Then the client's backend server exchanges this code (along with the client secret) for an access token at the token endpoint. The access token is never exposed to the browser.
Authorization Code Grant with PKCE
PKCE (Proof Key for Code Exchange, pronounced "pixy") was originally designed for public clients that cannot securely store a client secret, such as SPAs and mobile apps. However, it is now recommended for all clients, including confidential ones, as an additional layer of defense. We will cover this flow in detail in the next section.
Client Credentials Grant
The Client Credentials grant is used when there is no user involved — the client is acting on its own behalf. A backend service that needs to call another backend service, a cron job that syncs data between systems, or a daemon that processes a queue are all cases for Client Credentials. The client authenticates directly with the authorization server using its client ID and client secret, and receives an access token.
# Client Credentials grant token request
curl -X POST https://auth.example.com/oauth/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=client_credentials" \
-d "client_id=SERVICE_CLIENT_ID" \
-d "client_secret=SERVICE_CLIENT_SECRET" \
-d "scope=read:analytics write:reports"
Device Code Grant
The Device Code grant (RFC 8628) handles devices that have limited input capabilities — smart TVs, gaming consoles, printers, and CLI tools. The device displays a URL and a short user code. The user navigates to that URL on their phone or laptop, enters the code, and authenticates. Meanwhile, the device polls the authorization server until the user completes authentication. If you have ever signed into Netflix on a smart TV by entering a code on your phone, you have used the Device Code flow.
// Device authorization response
{
"device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
"user_code": "WDJB-MJHT",
"verification_uri": "https://auth.example.com/device",
"verification_uri_complete": "https://auth.example.com/device?user_code=WDJB-MJHT",
"expires_in": 1800,
"interval": 5
}
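The device's side of this exchange is a polling loop. Here is a sketch (the token endpoint URL and TV_CLIENT_ID are placeholders, and it assumes Node 18+ with global fetch); the nextPollAction helper encodes the RFC 8628 error handling:

```javascript
// Decide what the device should do after each poll, per RFC 8628 error codes
function nextPollAction(tokenResponse, intervalSeconds) {
  if (tokenResponse.access_token) return { action: 'done', intervalSeconds };
  if (tokenResponse.error === 'authorization_pending')
    return { action: 'retry', intervalSeconds }; // User has not approved yet
  if (tokenResponse.error === 'slow_down')
    return { action: 'retry', intervalSeconds: intervalSeconds + 5 }; // Back off
  return { action: 'abort', intervalSeconds }; // expired_token, access_denied, ...
}

async function pollForDeviceToken(deviceCode, intervalSeconds) {
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, intervalSeconds * 1000));
    const res = await fetch('https://auth.example.com/oauth/token', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
        device_code: deviceCode,
        client_id: 'TV_CLIENT_ID',
      }),
    });
    const data = await res.json();
    const { action, intervalSeconds: next } = nextPollAction(data, intervalSeconds);
    if (action === 'done') return data;
    if (action === 'abort') throw new Error(data.error);
    intervalSeconds = next;
  }
}
```

Note the slow_down handling: the spec requires the device to add 5 seconds to its polling interval when the server asks, rather than hammering the token endpoint.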
Authorization Code Flow with PKCE
The Authorization Code flow with PKCE is the recommended grant type for virtually all interactive applications in 2026. Whether you are building a single-page application, a mobile app, or a traditional server-rendered web app, PKCE should be part of your flow. Let us walk through every step.
Why PKCE Exists
Without PKCE, the Authorization Code flow has a vulnerability: the authorization code is delivered to the client via a redirect URI, and on mobile platforms or in certain network configurations, an attacker can intercept this redirect and steal the authorization code. If they have the code, they can exchange it for an access token. PKCE prevents this by binding the authorization request to the token exchange request with a cryptographic proof that only the original client could produce.
Step 1: Generate Code Verifier and Code Challenge
The client generates a random string called the code verifier (between 43 and 128 characters). It then computes a code challenge by hashing the verifier with SHA-256 and Base64URL-encoding the result.
// Generate PKCE code verifier and challenge
const crypto = require('crypto');
function generateCodeVerifier() {
return crypto.randomBytes(32).toString('base64url');
// e.g., "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
}
function generateCodeChallenge(verifier) {
return crypto
.createHash('sha256')
.update(verifier)
.digest('base64url');
// e.g., "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM"
}
const codeVerifier = generateCodeVerifier();
const codeChallenge = generateCodeChallenge(codeVerifier);
// Persist the verifier - you will need it in Step 4, and the callback code
// in Step 5 reads it back. In a SPA, sessionStorage is a reasonable place:
sessionStorage.setItem('pkce_code_verifier', codeVerifier);
Step 2: Redirect to Authorization Server
The client redirects the user to the authorization server's authorize endpoint. The code challenge is included in the request, but the code verifier is kept secret on the client side.
// Build the authorization URL
const authUrl = new URL('https://auth.example.com/authorize');
authUrl.searchParams.set('response_type', 'code');
authUrl.searchParams.set('client_id', 'YOUR_CLIENT_ID');
authUrl.searchParams.set('redirect_uri', 'https://yourapp.com/callback');
authUrl.searchParams.set('scope', 'openid profile email');
const state = crypto.randomBytes(16).toString('hex');
sessionStorage.setItem('oauth_state', state); // Read back in Step 5
authUrl.searchParams.set('state', state);
authUrl.searchParams.set('code_challenge', codeChallenge);
authUrl.searchParams.set('code_challenge_method', 'S256');
// Redirect the user
window.location.href = authUrl.toString();
Step 3: User Authenticates and Consents
The authorization server displays a login screen. The user enters their credentials and approves the requested scopes. The authorization server validates the credentials, records the consent, and redirects the user back to your redirect URI with an authorization code and the state parameter.
# The user is redirected back to:
https://yourapp.com/callback?code=AUTH_CODE_HERE&state=your_state_value
Step 4: Exchange Code for Tokens
The client sends the authorization code along with the code verifier (not the challenge) to the token endpoint. The authorization server hashes the verifier and compares it to the code challenge it received in Step 2. If they match, the server knows this is the same client that initiated the flow, and it issues tokens.
// Exchange authorization code for tokens
async function exchangeCodeForTokens(authorizationCode, codeVerifier) {
const response = await fetch('https://auth.example.com/oauth/token', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'authorization_code',
code: authorizationCode,
redirect_uri: 'https://yourapp.com/callback',
client_id: 'YOUR_CLIENT_ID',
code_verifier: codeVerifier, // The original verifier, NOT the challenge
}),
});
const tokens = await response.json();
// tokens contains: access_token, id_token, refresh_token, expires_in, token_type
return tokens;
}
// Successful token response
{
"access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
"id_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
"refresh_token": "v1.MjQ1NjM4OTcxMjM0...",
"token_type": "Bearer",
"expires_in": 3600,
"scope": "openid profile email"
}
Step 5: Validate the State Parameter
Before exchanging the code, always verify that the state parameter returned by the authorization server matches the one you sent in Step 2. This prevents CSRF attacks where an attacker tricks the user's browser into completing an authorization flow that the attacker initiated.
// Validate state parameter before exchanging code
const urlParams = new URLSearchParams(window.location.search);
const returnedState = urlParams.get('state');
const storedState = sessionStorage.getItem('oauth_state');
const storedCodeVerifier = sessionStorage.getItem('pkce_code_verifier'); // Saved in Step 1
if (returnedState !== storedState) {
throw new Error('State mismatch - possible CSRF attack');
}
// State is valid, proceed with code exchange
const code = urlParams.get('code');
const tokens = await exchangeCodeForTokens(code, storedCodeVerifier);
OpenID Connect Layer
OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0. While OAuth 2.0 answers "what can this application access?", OIDC answers "who is this user?" OIDC adds three critical components to OAuth 2.0: the ID token, the UserInfo endpoint, and a set of standard scopes and claims for identity information.
The ID Token
The ID token is a JWT that contains claims about the user's authentication event. Unlike the access token (which is meant for the resource server), the ID token is meant for the client application. It tells the client who the user is, when they authenticated, and how they authenticated.
// Decoded ID token payload
{
"iss": "https://accounts.google.com",
"sub": "110169484474386276334",
"aud": "YOUR_CLIENT_ID",
"exp": 1745600000,
"iat": 1745596400,
"nonce": "a1b2c3d4e5",
"auth_time": 1745596380,
"email": "user@example.com",
"email_verified": true,
"name": "Jane Developer",
"picture": "https://lh3.googleusercontent.com/a/photo.jpg",
"given_name": "Jane",
"family_name": "Developer",
"locale": "en"
}
The ID token must be validated by the client before trusting its claims. This validation involves verifying the JWT signature using the authorization server's public key, checking that the iss claim matches the expected issuer, confirming that the aud claim contains your client ID (this prevents token injection attacks where a token issued for a different client is used against your application), verifying that the exp claim has not passed, and checking the nonce if one was sent in the authorization request.
The most commonly skipped check is the aud (audience) claim in the ID token. If you skip this check, an attacker could take an ID token issued for a different application (one they control) and use it to authenticate against your application. The aud claim must contain your specific client ID.
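The claim checks described above can be sketched as a pure function over the decoded payload. Signature verification itself belongs to a JWT library (such as jose, used later in this guide); the function name and option names here are illustrative:

```javascript
// Validate ID token claims AFTER the signature has been verified by a library.
// 'aud' may be a string or an array per the JWT spec, so normalize it.
function checkIdTokenClaims(payload, { issuer, clientId, nonce, now = Math.floor(Date.now() / 1000) }) {
  if (payload.iss !== issuer) throw new Error('Issuer mismatch');
  const aud = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  if (!aud.includes(clientId)) throw new Error('Audience mismatch - token not issued for this client');
  if (payload.exp <= now) throw new Error('ID token expired');
  if (nonce && payload.nonce !== nonce) throw new Error('Nonce mismatch - possible replay');
  return payload;
}
```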
OIDC Scopes and Claims
OpenID Connect defines standard scopes that map to sets of user claims:
| Scope | Claims Returned | Description |
|---|---|---|
| openid | sub | Required for OIDC. Returns the user's unique identifier. |
| profile | name, family_name, given_name, picture, locale, etc. | Basic profile information. |
| email | email, email_verified | User's email address and verification status. |
| address | address (structured object) | User's physical address. |
| phone | phone_number, phone_number_verified | User's phone number and verification status. |
The openid scope is mandatory. Without it, the authorization server treats the request as a plain OAuth 2.0 request and does not return an ID token. Always include openid when you need identity information.
The UserInfo Endpoint
In addition to the ID token, OIDC provides a UserInfo endpoint that returns claims about the authenticated user. You call this endpoint with the access token to get additional user information that may not be included in the ID token.
// Fetch user info from the UserInfo endpoint
async function getUserInfo(accessToken) {
const response = await fetch('https://auth.example.com/userinfo', {
headers: {
Authorization: `Bearer ${accessToken}`,
},
});
return response.json();
}
// Response:
// {
// "sub": "110169484474386276334",
// "name": "Jane Developer",
// "email": "user@example.com",
// "email_verified": true,
// "picture": "https://..."
// }
Discovery Document
Every OIDC provider publishes a discovery document at /.well-known/openid-configuration. This JSON document contains all the endpoint URLs, supported scopes, supported claims, and other metadata your application needs to interact with the provider. You should fetch this document programmatically rather than hardcoding endpoint URLs.
# Fetch Google's OIDC discovery document
curl https://accounts.google.com/.well-known/openid-configuration
// Key fields in the discovery document
{
"issuer": "https://accounts.google.com",
"authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
"token_endpoint": "https://oauth2.googleapis.com/token",
"userinfo_endpoint": "https://openidconnect.googleapis.com/v1/userinfo",
"jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
"scopes_supported": ["openid", "email", "profile"],
"response_types_supported": ["code", "token", "id_token"],
"subject_types_supported": ["public"],
"id_token_signing_alg_values_supported": ["RS256"]
}
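A minimal sketch of programmatic discovery, assuming a Node 18+ runtime with global fetch; the caching strategy and function names are illustrative:

```javascript
// Build the well-known URL from an issuer, trimming any trailing slash
function discoveryUrl(issuer) {
  return issuer.replace(/\/$/, '') + '/.well-known/openid-configuration';
}

// Fetch the discovery document once at startup and cache it,
// instead of hardcoding endpoint URLs
let cachedConfig = null;

async function getOidcConfig(issuer) {
  if (!cachedConfig) {
    const res = await fetch(discoveryUrl(issuer));
    if (!res.ok) throw new Error(`Discovery failed: ${res.status}`);
    cachedConfig = await res.json();
  }
  return cachedConfig;
}

// Usage:
// const { authorization_endpoint, token_endpoint, jwks_uri } =
//   await getOidcConfig('https://accounts.google.com');
```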
Implementing Login with Google/GitHub in Node.js
Let us build a complete "Login with Google" and "Login with GitHub" implementation in Node.js using Express. We will use the Authorization Code flow with PKCE and handle the full lifecycle: redirect, callback, token exchange, user creation, and session establishment.
Project Setup
npm install express express-session cookie-parser jose
# 'jose' is a modern, lightweight library for JWT/OIDC operations
# No passport.js needed - we'll implement the flow directly
OAuth Provider Configuration
// config/oauth.js
const providers = {
google: {
clientId: process.env.GOOGLE_CLIENT_ID,
clientSecret: process.env.GOOGLE_CLIENT_SECRET,
discoveryUrl: 'https://accounts.google.com/.well-known/openid-configuration',
scopes: ['openid', 'email', 'profile'],
redirectUri: 'https://yourapp.com/auth/google/callback',
},
github: {
clientId: process.env.GITHUB_CLIENT_ID,
clientSecret: process.env.GITHUB_CLIENT_SECRET,
authorizeUrl: 'https://github.com/login/oauth/authorize',
tokenUrl: 'https://github.com/login/oauth/access_token',
userApiUrl: 'https://api.github.com/user',
scopes: ['read:user', 'user:email'],
redirectUri: 'https://yourapp.com/auth/github/callback',
},
};
module.exports = providers;
PKCE Utility Functions
// utils/pkce.js
const crypto = require('crypto');
function generateState() {
return crypto.randomBytes(16).toString('hex');
}
function generateCodeVerifier() {
return crypto.randomBytes(32).toString('base64url');
}
function generateCodeChallenge(verifier) {
return crypto.createHash('sha256').update(verifier).digest('base64url');
}
module.exports = { generateState, generateCodeVerifier, generateCodeChallenge };
Google Login Routes
// routes/auth-google.js
const express = require('express');
const { generateState, generateCodeVerifier, generateCodeChallenge } = require('../utils/pkce');
const providers = require('../config/oauth');
const { createOrUpdateUser, createSession } = require('../services/userService');
const router = express.Router();
const google = providers.google;
// Step 1: Initiate Google login
router.get('/auth/google', (req, res) => {
const state = generateState();
const codeVerifier = generateCodeVerifier();
const codeChallenge = generateCodeChallenge(codeVerifier);
// Store state and verifier in session for validation later
req.session.oauthState = state;
req.session.codeVerifier = codeVerifier;
const authUrl = new URL('https://accounts.google.com/o/oauth2/v2/auth');
authUrl.searchParams.set('client_id', google.clientId);
authUrl.searchParams.set('redirect_uri', google.redirectUri);
authUrl.searchParams.set('response_type', 'code');
authUrl.searchParams.set('scope', google.scopes.join(' '));
authUrl.searchParams.set('state', state);
authUrl.searchParams.set('code_challenge', codeChallenge);
authUrl.searchParams.set('code_challenge_method', 'S256');
authUrl.searchParams.set('access_type', 'offline'); // Request refresh token
authUrl.searchParams.set('prompt', 'consent');
res.redirect(authUrl.toString());
});
// Step 2: Handle Google callback
router.get('/auth/google/callback', async (req, res) => {
const { code, state, error } = req.query;
// Check for errors from the authorization server
if (error) {
console.error('OAuth error:', error);
return res.redirect('/login?error=oauth_denied');
}
// Validate state to prevent CSRF
if (state !== req.session.oauthState) {
return res.status(403).json({ error: 'State mismatch - possible CSRF attack' });
}
try {
// Exchange code for tokens
const tokenResponse = await fetch('https://oauth2.googleapis.com/token', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'authorization_code',
code: code,
redirect_uri: google.redirectUri,
client_id: google.clientId,
client_secret: google.clientSecret,
code_verifier: req.session.codeVerifier,
}),
});
const tokens = await tokenResponse.json();
if (tokens.error) {
throw new Error(tokens.error_description || tokens.error);
}
// Decode the ID token payload
// NOTE: decoding without signature verification is tolerable here only
// because the token came directly from Google's token endpoint over TLS;
// for defense in depth, verify it against Google's JWKS (e.g., with 'jose')
const idTokenPayload = JSON.parse(
Buffer.from(tokens.id_token.split('.')[1], 'base64url').toString()
);
// Validate audience
if (idTokenPayload.aud !== google.clientId) {
throw new Error('ID token audience mismatch');
}
// Create or update user in your database
const user = await createOrUpdateUser({
provider: 'google',
providerId: idTokenPayload.sub,
email: idTokenPayload.email,
emailVerified: idTokenPayload.email_verified,
name: idTokenPayload.name,
avatar: idTokenPayload.picture,
});
// Create application session
await createSession(req, user);
// Clean up OAuth state from session
delete req.session.oauthState;
delete req.session.codeVerifier;
res.redirect('/dashboard');
} catch (err) {
console.error('Google OAuth error:', err);
res.redirect('/login?error=oauth_failed');
}
});
module.exports = router;
GitHub Login Routes
// routes/auth-github.js
const express = require('express');
const { generateState } = require('../utils/pkce');
const providers = require('../config/oauth');
const { createOrUpdateUser, createSession } = require('../services/userService');
const router = express.Router();
const github = providers.github;
// Step 1: Initiate GitHub login
router.get('/auth/github', (req, res) => {
const state = generateState();
req.session.oauthState = state;
const authUrl = new URL(github.authorizeUrl);
authUrl.searchParams.set('client_id', github.clientId);
authUrl.searchParams.set('redirect_uri', github.redirectUri);
authUrl.searchParams.set('scope', github.scopes.join(' '));
authUrl.searchParams.set('state', state);
res.redirect(authUrl.toString());
});
// Step 2: Handle GitHub callback
router.get('/auth/github/callback', async (req, res) => {
const { code, state, error } = req.query;
if (error) {
return res.redirect('/login?error=oauth_denied');
}
if (state !== req.session.oauthState) {
return res.status(403).json({ error: 'State mismatch' });
}
try {
// Exchange code for access token
const tokenResponse = await fetch(github.tokenUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Accept: 'application/json',
},
body: JSON.stringify({
client_id: github.clientId,
client_secret: github.clientSecret,
code: code,
redirect_uri: github.redirectUri,
}),
});
const tokens = await tokenResponse.json();
if (tokens.error) {
throw new Error(tokens.error_description || tokens.error);
}
// GitHub does not return an ID token (not full OIDC)
// Fetch user profile from GitHub API
const userResponse = await fetch(github.userApiUrl, {
headers: {
Authorization: `Bearer ${tokens.access_token}`,
Accept: 'application/json',
},
});
const githubUser = await userResponse.json();
// Fetch verified email (GitHub may not include it in profile)
const emailResponse = await fetch('https://api.github.com/user/emails', {
headers: {
Authorization: `Bearer ${tokens.access_token}`,
Accept: 'application/json',
},
});
const emails = await emailResponse.json();
const primaryEmail = emails.find(e => e.primary && e.verified);
const user = await createOrUpdateUser({
provider: 'github',
providerId: String(githubUser.id),
email: primaryEmail?.email || githubUser.email,
emailVerified: primaryEmail?.verified || false,
name: githubUser.name || githubUser.login,
avatar: githubUser.avatar_url,
});
await createSession(req, user);
delete req.session.oauthState;
res.redirect('/dashboard');
} catch (err) {
console.error('GitHub OAuth error:', err);
res.redirect('/login?error=oauth_failed');
}
});
module.exports = router;
User Service
// services/userService.js
// 'db' is assumed to be your database layer (e.g., Sequelize or Prisma models)
async function createOrUpdateUser({ provider, providerId, email, emailVerified, name, avatar }) {
// Look for existing user by provider ID
let user = await db.users.findOne({
where: { provider, providerId },
});
if (user) {
// Update existing user's profile
await user.update({ name, avatar, email, emailVerified });
return user;
}
// Check if a user with this email already exists (account linking)
if (email && emailVerified) {
user = await db.users.findOne({ where: { email } });
if (user) {
// Link this OAuth provider to the existing account
await db.oauthAccounts.create({
userId: user.id,
provider,
providerId,
});
return user;
}
}
// Create new user
return db.users.create({
provider,
providerId,
email,
emailVerified,
name,
avatar,
});
}
async function createSession(req, user) {
req.session.userId = user.id;
req.session.authenticatedAt = Date.now();
}
module.exports = { createOrUpdateUser, createSession };
Token Management
Proper token management is the difference between an OAuth implementation that works in demos and one that holds up in production. You are dealing with three types of tokens, each with different lifetimes, storage requirements, and security considerations.
Access Tokens
Access tokens are the credentials used to access protected resources. They are typically short-lived (minutes to hours) and are included in API requests via the Authorization: Bearer header. Access tokens can be either opaque strings (random tokens that the resource server validates by calling the authorization server) or JWTs (self-contained tokens that the resource server validates locally by checking the signature).
When using JWT access tokens, the resource server can validate the token without any network call to the authorization server. This is the primary advantage in distributed systems. However, it comes with the trade-off that JWT access tokens cannot be truly revoked before expiration without maintaining a blocklist.
// Validating a JWT access token at the resource server
const jose = require('jose');
async function validateAccessToken(token) {
const JWKS = jose.createRemoteJWKSet(
new URL('https://auth.example.com/.well-known/jwks.json')
);
const { payload } = await jose.jwtVerify(token, JWKS, {
issuer: 'https://auth.example.com',
audience: 'https://api.example.com',
algorithms: ['RS256'],
});
return payload; // { sub, scope, exp, iat, ... }
}
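One way to recover revocation semantics, as mentioned above, is a denylist keyed on the token's jti claim, consulted after signature validation. A sketch with an in-memory Set standing in for a shared store such as Redis (names are illustrative):

```javascript
// In production this would be a shared store (e.g. Redis) with entries
// expiring at the token's exp time; a Set is used here for illustration.
const revokedJtis = new Set();

// Called on logout or when a token is reported compromised
function revokeToken(payload) {
  if (payload.jti) revokedJtis.add(payload.jti);
}

// Called after jwtVerify succeeds, before trusting the token
function assertNotRevoked(payload) {
  if (payload.jti && revokedJtis.has(payload.jti)) {
    throw new Error('Token has been revoked');
  }
  return payload;
}
```

The trade-off is that every request now touches the denylist store, which gives back some of the "no network call" advantage of JWTs; keeping access token lifetimes short limits how large the list needs to grow.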
Refresh Tokens
Refresh tokens are long-lived credentials used solely to obtain new access tokens when the current one expires. They should never be sent to resource servers. In most implementations, refresh tokens are opaque strings rather than JWTs, because they need to be tracked server-side for revocation.
The refresh flow is straightforward: when the access token expires, the client sends the refresh token to the authorization server's token endpoint. The authorization server validates the refresh token, checks that it has not been revoked, and issues a new access token (and optionally a new refresh token if rotation is enabled).
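Concretely, the refresh request is just another token-endpoint call. A server-side sketch (endpoint URL and credentials are placeholders; assumes Node 18+ global fetch):

```javascript
// Build the form body for a refresh_token grant (all values are placeholders)
function buildRefreshBody(refreshToken, clientId, clientSecret) {
  return new URLSearchParams({
    grant_type: 'refresh_token',
    refresh_token: refreshToken,
    client_id: clientId,
    client_secret: clientSecret, // Omit for public clients (PKCE-only)
  });
}

// Runs on your backend; the refresh token itself never reaches the browser
async function refreshAccessToken(refreshToken) {
  const res = await fetch('https://auth.example.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: buildRefreshBody(refreshToken, process.env.CLIENT_ID, process.env.CLIENT_SECRET),
  });
  const data = await res.json();
  if (data.error) throw new Error(data.error_description || data.error);
  // If rotation is enabled, persist data.refresh_token and invalidate the old one
  return data;
}
```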
// Client-side token refresh logic
class TokenManager {
#accessToken = null;
#expiresAt = 0;
#refreshTimer = null;
setTokens(accessToken, expiresIn) {
this.#accessToken = accessToken;
this.#expiresAt = Date.now() + (expiresIn * 1000);
// Schedule proactive refresh 60 seconds before expiration
clearTimeout(this.#refreshTimer);
const refreshIn = (expiresIn - 60) * 1000;
if (refreshIn > 0) {
this.#refreshTimer = setTimeout(() => this.refresh(), refreshIn);
}
}
async getAccessToken() {
if (!this.#accessToken || Date.now() >= this.#expiresAt) {
await this.refresh();
}
return this.#accessToken;
}
async refresh() {
const response = await fetch('/auth/refresh', {
method: 'POST',
credentials: 'include', // Send the HttpOnly refresh cookie
});
if (!response.ok) {
// Refresh failed - redirect to login
window.location.href = '/login';
return;
}
const { access_token, expires_in } = await response.json();
this.setTokens(access_token, expires_in);
}
}
Token Introspection
Token introspection (RFC 7662) allows a resource server to query the authorization server about the state of a token. This is particularly useful for opaque access tokens, where the resource server cannot validate the token locally, and for checking if a JWT access token has been revoked.
# Token introspection request
curl -X POST https://auth.example.com/oauth/introspect \
  -u "resource_server_id:resource_server_secret" \
  -d "token=eyJhbGciOiJSUzI1NiIs..."
// Active token response
{
  "active": true,
  "sub": "user_8f3k2j",
  "client_id": "my_app",
  "scope": "openid email profile",
  "exp": 1745600000,
  "iat": 1745596400,
  "token_type": "Bearer"
}

// Revoked or expired token response
{
  "active": false
}
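The same request in code, as a resource server would make it. This is a sketch assuming the endpoint and credentials from the curl example above; a production version would also cache active results for a short TTL to avoid introspecting on every request:

```javascript
// Validate a bearer token against the introspection endpoint (RFC 7662).
// Endpoint URL and credentials are placeholders from the example above.
async function introspect(token) {
  const resp = await fetch('https://auth.example.com/oauth/introspect', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      Authorization: 'Basic ' +
        Buffer.from('resource_server_id:resource_server_secret').toString('base64'),
    },
    body: new URLSearchParams({ token }).toString(),
  });
  const data = await resp.json();
  // RFC 7662: treat anything other than { "active": true } as invalid
  return data.active === true ? data : null;
}
```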
Token Revocation
Token revocation (RFC 7009) allows clients to notify the authorization server that a token is no longer needed. This is essential for logout flows. When a user logs out, the client should revoke both the access token and the refresh token.
// Revoke tokens on logout
async function logout(refreshToken) {
  // Revoke the refresh token (many providers cascade revocation to
  // the access tokens issued with it)
  await fetch('https://auth.example.com/oauth/revoke', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      token: refreshToken,
      token_type_hint: 'refresh_token',
      client_id: 'YOUR_CLIENT_ID',
    }),
  });
  // Clear local state
  tokenManager.clear();
  window.location.href = '/login';
}
Security Best Practices
OAuth 2.0 and OIDC are secure protocols when implemented correctly. The problem is that the specification gives implementers enough rope to hang themselves. Every vulnerability I have encountered in production OAuth systems was caused by skipping a validation step that the specification explicitly requires. Here are the practices that prevent real attacks.
Always Use PKCE
PKCE is not optional for public clients, and it should not be optional for confidential clients either. PKCE prevents authorization code interception attacks, where a malicious app on the same device intercepts the redirect and steals the authorization code. Without PKCE, a stolen authorization code can be exchanged for tokens. With PKCE, the stolen code is useless because the attacker does not have the code verifier.
Always use the S256 challenge method (SHA-256 hash), never plain. The plain method sends the verifier as the challenge, which defeats the purpose of PKCE entirely.
Validate the State Parameter
The state parameter is your primary defense against CSRF attacks in the OAuth flow. Without it, an attacker can initiate an authorization request, get the authorization code, and then trick a victim's browser into completing the flow — linking the attacker's account to the victim's session. Always generate a cryptographically random state value, store it in the session before redirecting, and verify it matches when the callback arrives.
// Generate and store state
const state = crypto.randomBytes(32).toString('hex');
req.session.oauthState = state;

// On callback, verify state BEFORE exchanging the code
if (req.query.state !== req.session.oauthState) {
  throw new Error('CSRF detected: state parameter mismatch');
}
Strict Redirect URI Validation
The redirect URI is one of the most critical security parameters in OAuth 2.0. The authorization server must validate the redirect URI against a pre-registered list using exact string matching. Do not use pattern matching, do not allow subdomain wildcards, and do not allow open redirects.
On the client side, always register the most specific redirect URI possible. Use https://yourapp.com/auth/callback, not https://yourapp.com/. The more specific the redirect URI, the smaller the attack surface.
A related danger is the open redirect. If your application accepts a return_url parameter and redirects users there after login, an attacker can craft an authorization URL that redirects to their malicious site after authentication, potentially stealing the authorization code or tokens from the URL. Always validate that post-login redirects go to your own domain.
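One way to validate a post-login redirect is to resolve it against your own origin and compare origins exactly. A sketch, where `APP_ORIGIN` is an assumed configuration value:

```javascript
const APP_ORIGIN = 'https://yourapp.com'; // assumed config value

function safeReturnUrl(returnUrl, fallback = '/') {
  let url;
  try {
    // Resolve relative paths against our own origin; malformed input falls back
    url = new URL(returnUrl, APP_ORIGIN);
  } catch {
    return fallback;
  }
  // Exact origin comparison defeats lookalikes such as
  // https://yourapp.com.evil.com and protocol-relative //evil.com tricks
  if (url.origin !== APP_ORIGIN) return fallback;
  // Return only path + query, never a full attacker-influenced URL
  return url.pathname + url.search;
}
```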
Use Short-Lived Access Tokens
Access tokens should expire in minutes, not hours or days. A 5-15 minute lifetime limits the window of exploitation if a token is compromised. Combine short-lived access tokens with refresh token rotation for the best balance of security and user experience. The user never has to re-authenticate (the refresh token handles that silently), but any stolen access token becomes useless quickly.
Store Tokens Securely
For browser-based applications, store access tokens in memory (a JavaScript variable) and refresh tokens in HttpOnly, Secure, SameSite cookies. Never store tokens in localStorage or sessionStorage, as they are accessible to any JavaScript running on the page, including malicious scripts injected through XSS vulnerabilities.
For mobile applications, use the platform's secure storage: Keychain on iOS, EncryptedSharedPreferences or the Keystore on Android. For server-side applications, store tokens encrypted in the database with encryption keys managed by a secrets manager or KMS.
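For the browser case, the refresh cookie's attributes do most of the security work. A sketch of the Set-Cookie header a backend might emit; the cookie name, path, and lifetime are illustrative:

```javascript
// Build the Set-Cookie header value for a refresh token.
function refreshCookieHeader(refreshToken, maxAgeSeconds = 30 * 24 * 3600) {
  return [
    `refresh_token=${encodeURIComponent(refreshToken)}`,
    'HttpOnly',        // not readable from JavaScript, even under XSS
    'Secure',          // sent over HTTPS only
    'SameSite=Strict', // not attached to cross-site requests
    'Path=/auth',      // sent only to the auth endpoints, not every request
    `Max-Age=${maxAgeSeconds}`,
  ].join('; ');
}
```

Scoping the cookie's Path to the auth routes means the refresh token is never transmitted on ordinary API calls, shrinking its exposure further.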
Validate All Token Claims
When you receive an ID token or JWT access token, verify every relevant claim: the signature (using the provider's public key), the iss (issuer must match the expected authorization server), the aud (audience must include your client ID), the exp (token must not be expired), the iat (issued-at should be recent, not from the distant past), and the nonce (if you sent one in the authorization request, it must match). Skipping any of these checks opens specific attack vectors.
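The claim checks above can be sketched as a function applied after signature verification (which a JWT library handles). The expected values and error messages here are illustrative:

```javascript
// Validate the claims of an already signature-verified ID token payload.
function validateIdTokenClaims(payload, { issuer, clientId, nonce, maxAgeSeconds = 300 }) {
  const now = Math.floor(Date.now() / 1000);
  // iss: must be the exact authorization server you expect
  if (payload.iss !== issuer) throw new Error('iss mismatch');
  // aud: may be a string or an array; your client ID must be present
  const aud = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  if (!aud.includes(clientId)) throw new Error('aud mismatch');
  // exp: token must not be expired
  if (typeof payload.exp !== 'number' || payload.exp <= now) throw new Error('token expired');
  // iat: reject tokens issued suspiciously long ago
  if (typeof payload.iat !== 'number' || now - payload.iat > maxAgeSeconds) throw new Error('iat too old');
  // nonce: must echo the value sent in the authorization request
  if (nonce !== undefined && payload.nonce !== nonce) throw new Error('nonce mismatch');
  return payload;
}
```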
OAuth 2.0 vs JWT vs Session Auth
These three approaches to authentication and authorization are frequently compared, but the comparison is misleading because they operate at different layers. OAuth 2.0 is an authorization framework. JWT is a token format. Session authentication is a state management strategy. They are not mutually exclusive, and in practice, most production systems use a combination of all three. That said, understanding the tradeoffs helps you make the right architectural decisions.
| Criteria | Session-Based Auth | JWT Auth | OAuth 2.0 / OIDC |
|---|---|---|---|
| What it is | Server stores session state, client holds a session ID cookie | Stateless token containing claims, signed cryptographically | Delegated authorization framework with standardized flows |
| State | Stateful (server must store session data) | Stateless (token carries all needed data) | Authorization server is stateful; resource servers can be stateless |
| Scalability | Requires shared session store (Redis, DB) for multi-server | Horizontally scalable; no shared state needed | Scalable; authorization server is the only stateful component |
| Revocation | Instant (delete session from store) | Requires token blocklist or waiting for expiration | Revocation endpoint (RFC 7009); refresh tokens are revocable |
| Third-party access | Not designed for it | Possible but non-standard | Primary use case; scoped, delegated access |
| Social login | Requires separate integration per provider | Requires separate integration per provider | Standardized; same flow for any compliant provider |
| Mobile support | Difficult (cookies are awkward on mobile) | Good (tokens sent as headers) | Excellent (PKCE designed for mobile) |
| Implementation complexity | Low | Medium | High (but libraries handle most of it) |
| Best for | Server-rendered web apps, monoliths | APIs, microservices, SPAs | Third-party integrations, SSO, federated identity |
When to Use What
Use session-based auth when you have a server-rendered application with no third-party API access requirements. Rails, Django, Laravel, and Next.js server components all have excellent session management built in. Sessions are simpler, immediately revocable, and battle-tested over decades. If your application is a monolith or a small cluster behind a load balancer, sessions with a Redis store work beautifully.
Use JWT auth when you are building APIs consumed by SPAs or mobile apps, when you have a microservices architecture where multiple services need to verify user identity independently, or when you need to pass claims between services without a centralized session store. JWTs are the practical choice for distributed systems.
Use OAuth 2.0 / OIDC when you need social login ("Sign in with Google"), when third-party applications need access to your users' data, when you are implementing single sign-on across multiple applications, or when you need a standardized protocol that auditors and security teams recognize. OAuth 2.0 is the industry standard for delegated authorization.
Common Mistakes and Vulnerabilities
After reviewing dozens of OAuth implementations in production systems, I have compiled the mistakes that come up repeatedly. Some of these lead to data breaches. Others lead to subtle bugs that only surface under adversarial conditions. All of them are preventable.
1. Using the Implicit Grant for New Applications
The Implicit grant was designed for an era when browsers could not make cross-origin POST requests. It returns the access token directly in the URL fragment, which means the token appears in browser history, in server logs if there is a referrer leak, and in any JavaScript on the page. The Authorization Code flow with PKCE completely replaces the Implicit grant for all client types. If your documentation or tutorial suggests using response_type=token, it is outdated.
2. Not Validating the ID Token Audience
This is arguably the most dangerous mistake in OIDC implementations. If you skip the aud (audience) validation on the ID token, an attacker can register their own application with the same identity provider, authenticate a victim through their malicious application, and then replay the victim's ID token against your application. Your application would accept it because the signature is valid and the issuer is correct. Only the aud check catches this attack.
// VULNERABLE - no audience check
const payload = jwt.decode(idToken);
const user = await findUserByProviderId(payload.sub);

// SECURE - validate audience
const payload = jwt.verify(idToken, publicKey, {
  algorithms: ['RS256'],
  audience: 'YOUR_CLIENT_ID', // MUST match your registered client ID
  issuer: 'https://accounts.google.com',
});
const user = await findUserByProviderId(payload.sub);
3. Storing Tokens in localStorage
Any JavaScript running on your page can read localStorage. This includes scripts loaded from third-party CDNs, analytics tags, ad scripts, browser extensions, and any code injected through an XSS vulnerability. A single XSS vulnerability in your application (or in any of your dependencies) gives the attacker full access to the token. Store tokens in memory and use HttpOnly cookies for refresh tokens.
4. Skipping the State Parameter
Without the state parameter, your OAuth callback is vulnerable to CSRF. An attacker crafts an authorization URL that authorizes their own account, embeds it in an image tag or a hidden iframe on a malicious page, and when the victim visits that page, their browser completes the OAuth flow silently. The attacker's account gets linked to the victim's session. The victim is now using the attacker's account, and anything they enter (messages, payment info, personal data) is accessible to the attacker. This is called a login CSRF attack.
5. Using the Access Token for Authentication
Access tokens are for authorization, not authentication. An access token tells the resource server "this request is authorized to access this resource with these scopes." It does not tell your application "this is user X." If you determine user identity by calling the provider's user info endpoint with an access token, you lose the guarantees the ID token provides: access tokens are not audience-bound to your client, so an access token issued to a different (possibly malicious) application will work against the same endpoint. Always use the ID token for authentication.
6. Not Implementing Token Revocation on Logout
When a user clicks "logout," many implementations simply clear the tokens from the client. But the refresh token (and possibly the access token) are still valid on the authorization server. If an attacker has obtained a copy of the refresh token, they can continue to use it even after the user logs out. Always call the revocation endpoint to invalidate tokens server-side on logout.
7. Overly Broad Scopes
Requesting more scopes than you need is a violation of the principle of least privilege and a liability if your application is compromised. If your app only needs to read a user's email, do not request repo access or admin:org. Users notice when applications ask for excessive permissions, and savvy users will deny authorization. More importantly, if your application is breached, the attacker gains access to everything your tokens are scoped for.
8. Hardcoded Client Secrets
Client secrets committed to source code end up in version control, CI/CD logs, Docker images, and eventually in places you did not intend. Use environment variables or a secrets management service (AWS Secrets Manager, HashiCorp Vault, Doppler). Rotate secrets immediately if they are ever exposed.
9. Missing Redirect URI Validation
If your authorization server allows wildcard redirect URIs or partial matching, an attacker can register a redirect URI that points to their server and steal authorization codes. For example, if you register https://yourapp.com and the server allows any URL starting with that string, https://yourapp.com.evil.com would be accepted. Always use exact string matching for redirect URIs.
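A sketch of the exact-match check, contrasted with the vulnerable prefix check (the registered URI here is illustrative):

```javascript
// Pre-registered redirect URIs for a client (illustrative value).
const REGISTERED = new Set(['https://yourapp.com/auth/callback']);

function isAllowedRedirect(uri) {
  // Exact string match, as the section above requires
  return REGISTERED.has(uri);
}

// The vulnerable alternative: a prefix check such as
//   uri.startsWith('https://yourapp.com')
// happily accepts https://yourapp.com.evil.com/steal.
```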
10. Not Handling Token Expiration Gracefully
Many implementations treat an expired access token as a fatal error, redirecting the user to login. The correct behavior is to silently refresh the access token using the refresh token and retry the original request. The user should never see a login screen because their access token expired. Implement proactive refresh (refreshing before expiration) and reactive refresh (refreshing when a 401 is received) for a seamless experience.
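The reactive half can be a thin wrapper around fetch that retries once after a 401. This sketch assumes a token manager like the one in the Token Management section (an object with getAccessToken and refresh methods):

```javascript
// Fetch wrapper: attach the current access token, and on a 401,
// refresh once and retry the request.
async function apiFetch(tokenManager, url, options = {}) {
  const doFetch = async () => {
    const token = await tokenManager.getAccessToken();
    return fetch(url, {
      ...options,
      headers: { ...options.headers, Authorization: `Bearer ${token}` },
    });
  };
  let response = await doFetch();
  if (response.status === 401) {
    // Token may have been revoked or expired early; refresh and retry once
    await tokenManager.refresh();
    response = await doFetch();
  }
  return response;
}
```

Retrying only once matters: if the refreshed token still yields a 401, the session is genuinely invalid and looping would hammer the server.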
Conclusion
OAuth 2.0 and OpenID Connect are the foundation of modern authentication and authorization on the web. They are not simple protocols — they have evolved through years of security research, real-world exploits, and hard-won lessons. But once you understand the roles, flows, and security requirements, implementing them correctly becomes a matter of discipline rather than guesswork.
The most important takeaways from this guide are these. First, always use the Authorization Code flow with PKCE for interactive applications, regardless of whether your client is confidential or public. The Implicit grant is dead; do not use it. Second, OpenID Connect is not optional if you need to know who the user is. Do not hack authentication onto plain OAuth 2.0 by calling user info endpoints and hoping for the best. Use ID tokens and validate them properly. Third, validate everything: the state parameter, the audience claim, the issuer, the token signature, the redirect URI. Every validation step you skip is an attack vector you leave open.
On the implementation side, keep your token lifetimes short, implement refresh token rotation, store access tokens in memory and refresh tokens in HttpOnly cookies, and always provide a clean logout flow that revokes tokens server-side. These patterns are not theoretical best practices. They are responses to real attacks that have been used against real applications.
If you are implementing OAuth 2.0 and OIDC for the first time, use a well-maintained library rather than implementing the protocol from scratch. Libraries like jose for Node.js, authlib for Python, and Spring Security's built-in OAuth 2.0 support for Java handle the security-critical parts correctly and stay updated as best practices evolve. You do not want to be writing your own JWT validation logic or PKCE challenge generation in production.
Finally, invest in understanding the security model. Read the OAuth 2.0 Security Best Current Practice (RFC 9700). Run your implementation through the OAuth 2.0 Threat Model (RFC 6819). Test your redirect URI handling, your state parameter validation, and your token lifecycle under adversarial conditions. The protocols are secure when implemented correctly. Your job is to make sure "correctly" describes your implementation.
Decode and inspect OAuth access tokens and ID tokens with our free JWT Decoder.