Environment Variables and Secrets Management: A Developer's Guide

Everything you need to know about keeping configuration out of code and secrets out of version control — from .env files on your laptop to production-grade secrets managers handling thousands of deployments. Built from hard lessons learned in production.

18 min read

Table of Contents

  1. Introduction: The $50,000 Mistake
  2. What Are Environment Variables?
  3. The .env File Pattern
  4. Environment Variables in Node.js
  5. Environment Variables in Frontend Apps
  6. Docker and Environment Variables
  7. Secrets Management Tools
  8. CI/CD Secrets
  9. The .gitignore Essentials
  10. Security Best Practices
  11. Conclusion

Introduction: The $50,000 Mistake

A few years ago, a developer on a team I was consulting for pushed a commit at 11 PM on a Friday. Nothing unusual — a small hotfix to a payment processing service. The change was three lines. But buried in the diff was a file that should never have been committed: a .env file containing the production Stripe API secret key, the database connection string with full admin credentials, and an AWS access key with S3 and SES permissions.

The repository was public. Within 14 hours, an automated bot had scraped the key from GitHub, spun up over 200 EC2 instances for cryptocurrency mining, and started sending thousands of emails through SES. By the time the team noticed on Saturday afternoon, the AWS bill had crossed $50,000. Stripe flagged the account for suspicious activity. The database had been accessed by an unknown IP address. The incident response took three full days: rotating every credential, auditing every database table for data exfiltration, notifying affected users, and filing reports with AWS support to dispute the fraudulent charges.

This is not a hypothetical scenario. Credential leaks through version control are one of the most common and most preventable security incidents in software development. GitHub reports that it detects over 100 million secret leaks per year through its secret scanning program. Researchers at North Carolina State University found that thousands of new API keys and cryptographic secrets are leaked on GitHub every single day. The vast majority are never intentionally exposed — they are committed by accident, by developers who did not understand how environment variables and secrets management should work.

This Is Not a Junior Developer Problem: In post-mortem analyses, the most damaging credential leaks often come from experienced engineers working under time pressure. A quick debug session where you hardcode a connection string, a config file copied between projects that still contains production keys, a Docker image built with secrets baked into an early layer — these mistakes happen at every experience level. The defense is systemic, not individual.

This guide covers the entire landscape of environment variables and secrets management. We will start with the fundamentals of what environment variables actually are at the operating system level, work through the patterns and tools you need for local development, cover the specific challenges of frontend applications and Docker containers, compare production-grade secrets management tools, and finish with the security practices that prevent the kind of incident I described above. Whether you are setting up your first project or hardening a production deployment, you will walk away with actionable knowledge you can apply immediately.

What Are Environment Variables?

Environment variables are key-value pairs maintained by the operating system that are available to any process running in that environment. They are one of the oldest configuration mechanisms in computing, dating back to Unix in the 1970s. Every process on your system — from your shell to your web server to your Node.js application — has access to a set of environment variables inherited from its parent process.

You interact with environment variables every day, even if you do not realize it. Your PATH variable tells the shell where to find executable programs. HOME points to your user directory. LANG determines your locale settings. These are all environment variables that were set when your shell session started, and every command you run inherits them.

# View all environment variables
env

# View a specific variable
echo $HOME
# /Users/yourname

# Set a variable for the current session
export DATABASE_URL="postgresql://localhost:5432/myapp"

# Set a variable for a single command only
DATABASE_URL="postgresql://localhost:5432/testdb" npm test

# Unset a variable
unset DATABASE_URL

The key characteristic of environment variables is that they are external to your application code. Your application reads them at runtime but does not define them. This separation is the foundation of the Twelve-Factor App methodology, which states that configuration that varies between deploys (staging, production, developer environments) should be stored in environment variables, not in code.

In Node.js, environment variables are accessible through the process.env object:

// Every environment variable is a string
console.log(process.env.NODE_ENV);      // "production"
console.log(process.env.PORT);          // "3000" (string, not number!)
console.log(process.env.DATABASE_URL);  // "postgresql://..."

// Check if a variable exists
if (!process.env.DATABASE_URL) {
  throw new Error('DATABASE_URL is required');
}

// Common gotcha: boolean-like values are still strings
console.log(process.env.DEBUG);           // "true" (string)
console.log(process.env.DEBUG === true);  // false!
console.log(process.env.DEBUG === 'true'); // true
Everything Is a String: Environment variables have no type system. Every value is a string. The number 3000, the boolean true, and the JSON object {"key": "value"} are all plain strings when read from process.env. Your application is responsible for parsing and validating these values, which is why config validation (covered later in this guide) is so important.
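Until the full validation setup covered later in this guide, a pair of small helpers is enough to bridge the string gap. A minimal, dependency-free sketch in Node (the PORT and DEBUG names are just examples, not a fixed convention):

```javascript
// Minimal helpers for turning env strings into typed values.
function toInt(value, fallback) {
  const n = Number.parseInt(value ?? '', 10);
  return Number.isNaN(n) ? fallback : n; // fallback on unset or non-numeric
}

function toBool(value, fallback) {
  if (value === 'true') return true;   // strict string comparison,
  if (value === 'false') return false; // never Boolean(value)
  return fallback;
}

const port = toInt(process.env.PORT, 3000);     // number, even if PORT is unset
const debug = toBool(process.env.DEBUG, false); // boolean, even if DEBUG is unset
```

Note the strict `'true'`/`'false'` comparison: `Boolean('false')` is `true` in JavaScript, so naive coercion of boolean-like env values is a classic bug.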

The reason environment variables became the standard for application configuration is their universality. Every programming language, every operating system, every container runtime, and every cloud platform supports them. A Python application, a Go service, and a Node.js API can all read DATABASE_URL from the environment without any shared configuration library or file format. This makes environment variables the lowest common denominator for passing configuration to software — and that simplicity is exactly why they have endured for over fifty years.

The .env File Pattern

While environment variables are the standard for production configuration, setting dozens of them manually in your terminal every time you start a development session is impractical. The .env file pattern solves this by storing environment variables in a file that is loaded automatically when your application starts. The file uses a simple KEY=value format, one variable per line.

# .env - Local development configuration
NODE_ENV=development
PORT=3000
DATABASE_URL=postgresql://dev_user:dev_password@localhost:5432/myapp_dev
REDIS_URL=redis://localhost:6379

# API Keys
STRIPE_SECRET_KEY=sk_test_4eC39HqLyjWDarjtT1zdp7dc
SENDGRID_API_KEY=SG.xxxxxxxxxxxxxxxxxxxxxxxx

# Feature Flags
ENABLE_NEW_CHECKOUT=true
RATE_LIMIT_PER_MINUTE=100

The .env file was popularized by the dotenv library (originally for Ruby, then ported to virtually every language). The convention is straightforward: the file lives in your project root, it is never committed to version control, and it contains values specific to your local development environment.

The .env File Family

Most projects end up with multiple .env files, each serving a different purpose. Here is the standard convention:

File             | Committed? | Purpose
.env             | No         | Local development defaults. Each developer has their own copy.
.env.example     | Yes        | Template with all required variables but no real values. Documents what the app needs.
.env.local       | No         | Local overrides. Higher priority than .env. For personal settings.
.env.development | Sometimes  | Shared development defaults. May be committed if values are non-sensitive.
.env.production  | No         | Production values. Should ideally not exist — use a secrets manager instead.
.env.test        | Sometimes  | Test environment defaults. Often committed with safe test-only values.

The .env.example file is the most important of these. It serves as living documentation of every environment variable your application requires, complete with descriptions and example values. When a new developer joins the team, they copy .env.example to .env and fill in their local values. This single file eliminates the "it works on my machine" problem caused by missing configuration.

# .env.example - Committed to version control
# Copy this file to .env and fill in the values

# Server Configuration
NODE_ENV=development
PORT=3000

# Database - Create a local PostgreSQL database named myapp_dev
DATABASE_URL=postgresql://USER:PASSWORD@localhost:5432/myapp_dev

# Redis - Used for session storage and caching
REDIS_URL=redis://localhost:6379

# Stripe - Get test keys from https://dashboard.stripe.com/test/apikeys
STRIPE_SECRET_KEY=sk_test_xxxxxxxxxxxx
STRIPE_WEBHOOK_SECRET=whsec_xxxxxxxxxxxx

# SendGrid - Create a free account at https://sendgrid.com
SENDGRID_API_KEY=SG.xxxxxxxxxxxx
FROM_EMAIL=dev@localhost
Pro Tip: Add a setup script that validates the developer's .env file against .env.example and reports any missing variables. Run it as part of npm run dev or as a git hook. This catches configuration drift early, before it causes confusing runtime errors.

Environment Variables in Node.js

Node.js has evolved significantly in how it handles environment variables. For years, the dotenv package was the only option. Starting with Node.js 20.6, there is now built-in support for .env files. Let us cover both approaches and the critical step most developers skip: config validation.

The dotenv Package

The dotenv package reads a .env file and sets the variables on process.env. It is the most widely used approach with over 35 million weekly downloads on npm.

npm install dotenv
// Load .env at the very top of your entry file
require('dotenv').config();

// Or with ES modules
import 'dotenv/config';

// Now process.env contains your .env values
console.log(process.env.DATABASE_URL);

For more control, you can specify options:

require('dotenv').config({
  path: '.env.local',           // Custom file path
  override: true,               // Override existing env vars
  debug: process.env.DEBUG,     // Log parsing details
});

Node.js Built-in --env-file Flag

Starting with Node.js 20.6, you can load .env files without any dependency:

# Load .env file natively
node --env-file=.env server.js

# Load multiple files (later files override earlier ones)
node --env-file=.env --env-file=.env.local server.js
{
  "scripts": {
    "dev": "node --env-file=.env --env-file=.env.local server.js",
    "start": "node server.js"
  }
}

The built-in flag has one important advantage: it loads variables before your application code runs, which means they are available even during module initialization. With dotenv, there is a brief window during startup where the variables are not yet loaded, which can cause issues with modules that read process.env at import time.
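This timing issue is easiest to trip over with ES modules, where imports are hoisted and evaluated before the importing file's own statements run. One defensive pattern is to read process.env lazily, at call time rather than at module load time. A sketch (the function name is illustrative):

```javascript
// Anti-pattern (module scope): the value is captured when the module loads,
// which may be before dotenv has populated process.env:
//   const port = process.env.PORT;
//
// Safer: defer the read until the value is actually needed.
function getPort() {
  return process.env.PORT ?? '3000'; // evaluated at call time, after dotenv has run
}
```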

Config Validation with Zod

Here is the step that separates professional applications from fragile ones: validate your configuration at startup. Do not scatter process.env.SOMETHING calls throughout your codebase and hope for the best. Parse and validate all environment variables in a single place, fail fast if anything is missing or invalid, and export a typed configuration object that the rest of your application uses.

// config/env.ts
import { z } from 'zod';

const envSchema = z.object({
  // Server
  NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
  PORT: z.coerce.number().int().min(1).max(65535).default(3000),

  // Database
  DATABASE_URL: z.string().url().startsWith('postgresql://'),
  DATABASE_POOL_SIZE: z.coerce.number().int().min(1).max(100).default(10),

  // Redis
  REDIS_URL: z.string().url().startsWith('redis://').optional(),

  // External APIs
  STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
  STRIPE_WEBHOOK_SECRET: z.string().startsWith('whsec_'),
  SENDGRID_API_KEY: z.string().startsWith('SG.'),

  // Auth
  JWT_SECRET: z.string().min(64, 'JWT_SECRET must be at least 64 characters'),
  JWT_EXPIRES_IN: z.string().default('15m'),

  // Feature Flags
  // z.coerce.boolean() wraps Boolean(), which turns the string "false" into true,
  // so parse boolean-like strings explicitly
  ENABLE_NEW_CHECKOUT: z.enum(['true', 'false']).default('false').transform((v) => v === 'true'),
  RATE_LIMIT_PER_MINUTE: z.coerce.number().int().min(1).default(60),
});

// Parse and validate - safeParse collects every failure instead of throwing
const parsed = envSchema.safeParse(process.env);

if (!parsed.success) {
  console.error('Invalid environment variables:');
  console.error(parsed.error.format());
  process.exit(1);
}

export const env = parsed.data;

// TypeScript now knows the exact shape and types:
// env.PORT is number (not string!)
// env.NODE_ENV is 'development' | 'production' | 'test'
// env.ENABLE_NEW_CHECKOUT is boolean
// env.REDIS_URL is string | undefined

Now the rest of your application imports env instead of accessing process.env directly:

// database.ts
import { env } from './config/env';

// Fully typed, validated, and with correct types
const pool = new Pool({
  connectionString: env.DATABASE_URL,    // string, guaranteed to be a valid PostgreSQL URL
  max: env.DATABASE_POOL_SIZE,           // number, guaranteed to be between 1 and 100
});
Why This Matters: Without validation, a missing DATABASE_URL does not crash your application at startup — it crashes 30 seconds later when the first database query runs, with a cryptic error like "Cannot read properties of undefined." With Zod validation, you get a clear error message listing every missing or invalid variable the moment the application starts. This alone saves hours of debugging time over the life of a project.

Environment Variables in Frontend Apps

Frontend environment variables are fundamentally different from backend environment variables, and misunderstanding this difference is one of the most common sources of security incidents in web development. The critical distinction is this: frontend environment variables are embedded into your JavaScript bundle at build time and shipped to every user's browser. They are not secret. They cannot be secret. They are public by definition.

Every major frontend framework enforces this by requiring a specific prefix for variables that should be included in the client bundle:

Framework        | Prefix       | Access
Vite             | VITE_        | import.meta.env.VITE_API_URL
Next.js          | NEXT_PUBLIC_ | process.env.NEXT_PUBLIC_API_URL
Create React App | REACT_APP_   | process.env.REACT_APP_API_URL
Nuxt             | NUXT_PUBLIC_ | useRuntimeConfig().public.apiUrl
SvelteKit        | PUBLIC_      | import { env } from '$env/static/public'

The prefix system exists as a safety mechanism. Variables without the prefix are not included in the client bundle, so your DATABASE_URL and STRIPE_SECRET_KEY are safe even if they are defined in the same .env file. But variables with the prefix are literally string-replaced into your JavaScript during the build process.

# .env for a Vite project
VITE_API_URL=https://api.myapp.com
VITE_STRIPE_PUBLISHABLE_KEY=pk_live_xxxxxxxx
VITE_GA_TRACKING_ID=G-XXXXXXXXXX

# These are NOT exposed to the frontend (no VITE_ prefix)
DATABASE_URL=postgresql://localhost:5432/myapp
STRIPE_SECRET_KEY=sk_live_xxxxxxxx
// In your Vite React app
const apiUrl = import.meta.env.VITE_API_URL;
// After build, the bundled JS literally contains:
// const apiUrl = "https://api.myapp.com";
Never Put Secrets in Frontend Environment Variables: Any value with the VITE_, NEXT_PUBLIC_, or REACT_APP_ prefix is visible to every user of your application. Anyone can open the browser DevTools, go to the Sources tab, and read every "environment variable" that was embedded in your bundle. Only use prefixed variables for truly public configuration: API base URLs, publishable keys (like Stripe's pk_ keys), analytics IDs, and feature flags. Secret keys, database URLs, and API secrets must only ever be used on the server side.

A Common Mistake: The "I'll Use Server-Side Rendering" Trap

Some developers think that using server-side rendering (SSR) with Next.js or Nuxt means their environment variables are safe because the code runs on the server. This is partially true: in a getServerSideProps function or a Next.js Server Component, you can safely access non-prefixed environment variables. But the moment you access process.env.NEXT_PUBLIC_ANYTHING in any component that is hydrated on the client, that value is in the client bundle. The prefix is what determines exposure, not where the rendering happens.

// Next.js - SAFE: server-only code
export async function getServerSideProps() {
  // This runs only on the server
  const data = await fetch(process.env.INTERNAL_API_URL, {
    headers: { Authorization: `Bearer ${process.env.API_SECRET}` }
  });
  return { props: { data: await data.json() } };
}

// Next.js - EXPOSED: this value is in the client bundle
export default function Page() {
  // NEXT_PUBLIC_ values are embedded in the JS sent to the browser
  return <div>API: {process.env.NEXT_PUBLIC_API_URL}</div>;
}

Docker and Environment Variables

Docker introduces its own layer of complexity around environment variables. There are multiple ways to pass configuration into containers, each with different security properties and appropriate use cases. Getting this wrong can bake secrets into your Docker images, where they persist in every layer of the image history for anyone with access to pull them out.

ENV vs ARG in Dockerfiles

Dockerfiles have two instructions for variables, and confusing them is a common source of security issues:

# Dockerfile

# ARG - Build-time variable. NOT available at runtime.
# Used for things like version numbers, base image tags.
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine

# ARG values are visible in docker history!
# NEVER use ARG for secrets.
ARG BUILD_DATE
LABEL build_date=${BUILD_DATE}

WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# ENV - Runtime variable. Available when the container runs.
# Sets a default that can be overridden at runtime.
ENV NODE_ENV=production
ENV PORT=3000

# Do NOT do this - the secret is baked into the image layer
# ENV DATABASE_URL=postgresql://prod:secret@db:5432/app  # WRONG!

EXPOSE ${PORT}
CMD ["node", "server.js"]
ARG Values Are Not Secret: Even though ARG variables are not available at runtime, they are stored in the image metadata and can be viewed with docker history or docker inspect. Never pass secrets as build arguments. If you need secrets during the build (for example, to access a private npm registry), use Docker BuildKit's --mount=type=secret feature, which does not persist the secret in any image layer.
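For reference, a BuildKit secret mount looks like the following. This is a sketch: the secret id npm_token is an illustrative name, and it assumes an .npmrc that reads the token from the NPM_TOKEN variable. The mount exists only for the duration of that single RUN instruction and is never written into an image layer.

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json .npmrc ./
# The secret is mounted at /run/secrets/npm_token for this step only.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci --omit=dev
```

Build it with docker build --secret id=npm_token,src=./npm_token . where the src path points at whatever local file holds the token.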

Passing Variables at Runtime

The correct approach is to pass environment variables when you run the container, not when you build the image:

# Pass individual variables
docker run -e DATABASE_URL="postgresql://prod:secret@db:5432/app" \
           -e REDIS_URL="redis://cache:6379" \
           -e JWT_SECRET="your-secret-here" \
           myapp:latest

# Pass from an env file
docker run --env-file .env.production myapp:latest

# Pass from host environment (inherits the value from the host)
export DATABASE_URL="postgresql://prod:secret@db:5432/app"
docker run -e DATABASE_URL myapp:latest

Docker Compose

Docker Compose provides the most ergonomic way to manage environment variables for multi-container applications:

# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      # Inline values (visible in docker-compose.yml)
      NODE_ENV: production
      PORT: "3000"
      # Interpolate from host environment or .env file
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
      JWT_SECRET: ${JWT_SECRET}
    env_file:
      # Load additional variables from a file
      - .env.production
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  pgdata:

Docker Compose automatically loads a .env file from the same directory as docker-compose.yml for variable interpolation (the ${VARIABLE} syntax). This is separate from the env_file directive, which loads variables directly into the container's environment. The precedence order, from highest to lowest priority, is: values set in the shell environment, values in environment: block, values from env_file:, and finally ENV defaults in the Dockerfile.

Docker Secrets (Swarm Mode)

For Docker Swarm deployments, Docker Secrets provides a more secure mechanism. Secrets are encrypted at rest, transmitted only to nodes that need them, and mounted as files in the container's filesystem rather than being exposed as environment variables:

# Create a secret (printf, unlike echo, avoids appending a newline to the value)
printf '%s' "postgresql://prod:supersecret@db:5432/app" | docker secret create db_url -

# Reference in docker-compose.yml (Swarm mode)
services:
  api:
    image: myapp:latest
    secrets:
      - db_url
      - jwt_secret
    environment:
      # Read secret from file instead of env var
      DATABASE_URL_FILE: /run/secrets/db_url

secrets:
  db_url:
    external: true
  jwt_secret:
    external: true

Your application then reads the secret from the file path instead of an environment variable. Many official Docker images (PostgreSQL, MySQL, Redis) support the _FILE suffix convention: if you set POSTGRES_PASSWORD_FILE=/run/secrets/db_password, the image reads the password from that file instead of the POSTGRES_PASSWORD environment variable.

Secrets Management Tools

For production systems, environment variables in .env files or CI/CD settings are often not enough. You need centralized secrets management with features like access control, audit logging, automatic rotation, and dynamic secret generation. Here is a comparison of the leading tools in this space.

Feature           | AWS Secrets Manager | HashiCorp Vault            | Doppler                 | 1Password (Developer)
Type              | Cloud-native (AWS)  | Self-hosted / Cloud        | SaaS                    | SaaS
Auto Rotation     | Yes (Lambda-based)  | Yes (dynamic secrets)      | Manual                  | Manual
Dynamic Secrets   | Limited             | Yes (DB creds, cloud keys) | No                      | No
Audit Logging     | CloudTrail          | Built-in                   | Built-in                | Built-in
Access Control    | IAM Policies        | Policies + Auth Methods    | RBAC                    | Vaults + Groups
CLI Support       | AWS CLI             | vault CLI                  | doppler CLI             | op CLI
CI/CD Integration | Native AWS          | Plugins for most CI        | Native integrations     | GitHub Actions, etc.
Pricing           | $0.40/secret/month  | Free (OSS) / Enterprise    | Free tier / $4+/user/mo | $4+/user/mo
Best For          | AWS-heavy stacks    | Multi-cloud, enterprise    | Developer experience    | Small teams, shared secrets

AWS Secrets Manager

If your infrastructure is on AWS, Secrets Manager is the natural choice. It integrates tightly with IAM for access control, supports automatic rotation through Lambda functions, and can be accessed directly from ECS tasks, Lambda functions, and EC2 instances without managing credentials to access the secrets themselves (using IAM roles).

// Reading a secret from AWS Secrets Manager
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

const client = new SecretsManagerClient({ region: 'us-east-1' });

async function getSecret(secretName) {
  const command = new GetSecretValueCommand({ SecretId: secretName });
  const response = await client.send(command);
  return JSON.parse(response.SecretString);
}

// Usage
const dbCreds = await getSecret('prod/myapp/database');
// { host: "db.example.com", port: 5432, username: "app", password: "..." }
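Secrets Manager bills per API call and each fetch adds network latency, so a common pattern is to cache the fetched value in memory for the life of the process. A sketch that wraps any fetch function, such as the getSecret function above; the five-minute TTL is an illustrative choice, not an AWS recommendation:

```javascript
// In-memory cache with a TTL, so rotated secrets still propagate eventually.
const secretCache = new Map();
const SECRET_TTL_MS = 5 * 60 * 1000;

async function getSecretCached(name, fetcher) {
  const hit = secretCache.get(name);
  if (hit && Date.now() - hit.fetchedAt < SECRET_TTL_MS) {
    return hit.value; // fresh enough: skip the network round trip
  }
  const value = await fetcher(name); // e.g. getSecret from the example above
  secretCache.set(name, { value, fetchedAt: Date.now() });
  return value;
}
```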

HashiCorp Vault

Vault is the most powerful option, particularly for multi-cloud or hybrid environments. Its killer feature is dynamic secrets: instead of storing a static database password, Vault generates a unique, short-lived database credential for each request. When the credential expires, Vault automatically revokes it. This means there are no long-lived credentials to steal.

# Authenticate with Vault (the token method is the default)
vault login s.xxxxxxxxxxxxxxxx

# Read a static secret
vault kv get -format=json secret/myapp/config

# Generate a dynamic database credential
vault read database/creds/myapp-readonly
# Key                Value
# ---                -----
# lease_id           database/creds/myapp-readonly/abc123
# lease_duration     1h
# username           v-token-myapp-readonly-xyz789
# password           A1b2C3d4E5f6G7h8

Doppler

Doppler focuses on developer experience. It replaces .env files entirely with a centralized dashboard where you manage secrets for every environment (development, staging, production). The CLI injects secrets into your process at runtime, so your application code does not need to know where secrets come from.

# Install and configure
doppler setup

# Run your app with secrets injected
doppler run -- node server.js

# The above is equivalent to setting every secret as an env var
# Your code still uses process.env.DATABASE_URL as normal

# View secrets for a specific environment
doppler secrets --config production
My Recommendation: For teams under 10 developers working on a single cloud provider, use the provider's native secrets manager (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault). For larger organizations or multi-cloud setups, HashiCorp Vault gives you the most flexibility and the strongest security model. For startups that want the fastest setup, Doppler's developer experience is unmatched. And for solo developers or small teams already using 1Password, its CLI and developer tools provide a surprisingly capable secrets management workflow without adding another service.

CI/CD Secrets

CI/CD pipelines need access to secrets for deployments, running integration tests, publishing packages, and interacting with cloud providers. Every major CI/CD platform provides a mechanism for storing and injecting secrets, but the implementation details vary and the security implications are significant.

GitHub Actions Secrets

GitHub Actions stores secrets at the repository or organization level. Secrets are encrypted, never exposed in logs (GitHub automatically redacts them), and are available as environment variables in workflow steps.

# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        env:
          DATABASE_URL: ${{ secrets.TEST_DATABASE_URL }}
          STRIPE_SECRET_KEY: ${{ secrets.STRIPE_TEST_KEY }}
        run: npm test

      - name: Deploy to production
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1
        run: |
          npm run build
          aws s3 sync dist/ s3://my-bucket/
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CF_DIST_ID }} --paths "/*"
GitHub Actions Security: Secrets are not available to workflows triggered by pull requests from forks. This is a deliberate security measure — if secrets were available, anyone could fork your repository, modify the workflow to print the secrets, and open a pull request. However, be aware that the pull_request_target event does have access to secrets and runs in the context of the base repository. Using pull_request_target with untrusted code is a known attack vector. Prefer OIDC-based authentication (like AWS's configure-aws-credentials action with a role) over storing long-lived access keys as secrets.

GitLab CI/CD Variables

GitLab provides CI/CD variables at the project, group, or instance level. Variables can be marked as "protected" (only available in protected branches) and "masked" (redacted in logs).

# .gitlab-ci.yml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - npm ci
    - npm test
  variables:
    DATABASE_URL: $TEST_DATABASE_URL
    NODE_ENV: test

deploy_production:
  stage: deploy
  script:
    - npm run build
    - npx serverless deploy --stage production
  variables:
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  only:
    - main
  environment:
    name: production

Vercel Environment Variables

Vercel allows you to define environment variables per environment (Production, Preview, Development) through the dashboard or CLI. This is particularly useful because Preview deployments (created for pull requests) can have different API keys and database URLs than production.

# Add a secret via CLI
vercel env add DATABASE_URL production
# Prompts for the value

# Pull environment variables for local development
vercel env pull .env.local

# List all environment variables
vercel env ls

Vercel automatically makes environment variables available as process.env values in your serverless functions and during the build process. For Next.js projects, variables prefixed with NEXT_PUBLIC_ are embedded in the client bundle as described in the frontend section above.

OIDC: The Future of CI/CD Authentication

The modern best practice is to eliminate long-lived credentials from CI/CD entirely. Instead of storing AWS access keys as secrets, use OIDC (OpenID Connect) tokens. GitHub Actions, GitLab CI, and CircleCI all support OIDC, which allows your CI/CD pipeline to assume a cloud provider role directly, without any stored secrets.

# GitHub Actions with OIDC - no stored AWS credentials!
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write    # Required for OIDC
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsDeployRole
          aws-region: us-east-1
          # No access key or secret key needed!

      - name: Deploy
        run: aws s3 sync dist/ s3://my-bucket/

OIDC is better because there are no long-lived credentials to rotate or leak, permissions are scoped to specific repositories and branches through the IAM role's trust policy, and every authentication event is logged by both the CI/CD platform and the cloud provider.
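On the AWS side, that scoping lives in the IAM role's trust policy. A sketch of what such a policy can look like; the account ID, provider ARN, and repository name are placeholders to adapt:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

The sub condition restricts role assumption to workflows running on the main branch of one specific repository, so even a leaked workflow file in another repo cannot assume the role.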

The .gitignore Essentials

Your .gitignore file is your last line of defense against accidentally committing secrets to version control. It is not a security tool — it is a safety net. Even with a properly configured .gitignore, you should still use pre-commit hooks and secret scanning tools. But a comprehensive .gitignore catches the most common mistakes before they happen.

Here is the essential list of patterns every project should include for secrets and configuration protection:

# .gitignore - Environment and Secrets

# Environment files (NEVER commit these)
.env
.env.local
.env.*.local
.env.development.local
.env.test.local
.env.production
.env.production.local
.env.staging

# Keep the example file (this SHOULD be committed)
!.env.example

# Key files and certificates
*.pem
*.key
*.cert
*.p12
*.pfx
*.jks

# AWS
.aws/credentials
aws-credentials.json

# Google Cloud
*-service-account.json
gcloud-credentials.json
.gcp-credentials/

# Terraform state (contains secrets in plaintext)
*.tfstate
*.tfstate.backup
.terraform/

# Docker environment files
docker-compose.override.yml
.docker-env

# IDE files that might contain run configurations with secrets
.idea/workspace.xml
.vscode/launch.json

# OS files
.DS_Store
Thumbs.db

# Dependency directories
node_modules/
vendor/
.venv/
What If a Secret Is Already Committed? Adding a file to .gitignore does not remove it from Git history. If you have already committed a secret, you must: (1) immediately rotate the compromised credential, (2) use git filter-repo or BFG Repo Cleaner to remove the file from all history, and (3) force-push the cleaned history to the remote. However, if the repository was ever public or if anyone has cloned it, assume the secret is compromised regardless. Rotation is the only real fix.

Pre-Commit Hooks for Secret Detection

Use tools that automatically scan commits for potential secrets before they leave your machine:

# Install gitleaks (popular secret scanner)
brew install gitleaks

# Scan the entire repository history
gitleaks detect --source . --verbose

# Set up as a pre-commit hook
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks

# Alternative: use detect-secrets by Yelp
pip install detect-secrets

# Create a baseline of existing secrets (for legacy projects)
detect-secrets scan > .secrets.baseline

# Add as a pre-commit hook
detect-secrets-hook --baseline .secrets.baseline

GitHub also provides secret scanning for public repositories (and for private repositories on Enterprise plans) that automatically detects committed secrets from over 200 service providers and notifies both the developer and the service provider. Enable it in your repository settings under "Code security and analysis." This is a second layer of defense — it catches secrets that slip past your pre-commit hooks.

Security Best Practices

Managing secrets securely is not a one-time setup. It is an ongoing discipline that requires consistent practices across your entire development lifecycle. Here are the practices that matter most, drawn from years of incident response and security reviews.

1. Secret Rotation

Every secret should have a defined rotation schedule. The longer a secret exists, the more likely it has been exposed through a log file, a debugging session, an old backup, or a developer's laptop. Rotation limits the blast radius of any single exposure.

#!/bin/bash
# Automate rotation reminders with a simple cron job.
# Checks the age of each Vault secret and warns before the 90-day deadline.
SECRETS=$(vault kv list -format=json secret/production/)
for secret in $(echo "$SECRETS" | jq -r '.[]'); do
  METADATA=$(vault kv metadata get -format=json "secret/production/$secret")
  CREATED=$(echo "$METADATA" | jq -r '.data.created_time')
  AGE_DAYS=$(( ($(date +%s) - $(date -d "$CREATED" +%s)) / 86400 ))
  # Warn at 80 days to leave a buffer before the 90-day rotation target.
  if [ "$AGE_DAYS" -gt 80 ]; then
    echo "WARNING: secret/production/$secret is $AGE_DAYS days old (rotation recommended at 90)"
  fi
done

2. Principle of Least Privilege

Every service, every developer, and every CI/CD pipeline should have access to only the secrets it actually needs. A frontend deployment pipeline does not need the database password. The email service does not need the payment gateway keys. Broad access is the default in most organizations, and it dramatically increases the impact of any compromise.
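In AWS terms, least privilege looks like an IAM policy that grants exactly one read action on a narrowly scoped resource. This is an illustrative sketch; the account ID and secret path are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EmailServiceReadsOnlyItsOwnSecrets",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/email-service/*"
    }
  ]
}
```

The email service role carrying this policy can read secrets under its own prefix and nothing else: the payment gateway keys and the database password live under different prefixes and are simply unreachable from it.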

3. Audit Logging

Every access to a secret should be logged. When a breach occurs (and eventually one will), audit logs are the difference between understanding what happened in hours versus weeks. Effective audit logging answers four questions: who accessed what secret, when, and from where.

// Example: wrapping secret access with audit logging
async function getSecret(secretName, context) {
  const startTime = Date.now();

  try {
    const secret = await secretsManager.getSecretValue(secretName);

    // Log the access (NOT the secret value!)
    logger.info('Secret accessed', {
      secretName,
      requestedBy: context.serviceId,
      sourceIp: context.ip,
      duration: Date.now() - startTime,
      timestamp: new Date().toISOString(),
    });

    return secret;
  } catch (error) {
    logger.error('Secret access failed', {
      secretName,
      requestedBy: context.serviceId,
      error: error.message,
      timestamp: new Date().toISOString(),
    });
    throw error;
  }
}

4. Never Log Secrets

This seems obvious, but it is one of the most common ways secrets end up exposed. A debug log statement that prints the entire request object, a verbose error handler that includes the database connection string in the error message, a middleware that logs all environment variables at startup — these are real patterns that lead to secrets appearing in plaintext in log files, which are often stored with less security than the secrets themselves.

// DANGEROUS - common patterns that accidentally log secrets
console.log('Config:', process.env);                    // Logs ALL env vars
console.log('DB connection:', connectionString);        // Logs the full URL with password
console.log('Request:', JSON.stringify(req.headers));    // May include Authorization header
logger.error('API call failed', { error, config });     // Config might contain API keys

// SAFE - redact sensitive values
function redactConfig(config) {
  const redacted = { ...config };
  const sensitiveKeys = ['password', 'secret', 'key', 'token', 'authorization'];
  for (const key of Object.keys(redacted)) {
    if (sensitiveKeys.some(s => key.toLowerCase().includes(s))) {
      redacted[key] = '[REDACTED]';
    }
  }
  return redacted;
}

logger.info('App started', { config: redactConfig(appConfig) });

5. Encrypt at Rest and in Transit

Secrets should be encrypted wherever they are stored. This includes your secrets manager (handled automatically), your CI/CD platform (handled automatically), your backup systems (often overlooked), and any configuration files on disk. In transit, all secret transmission should happen over TLS. Never transmit secrets over HTTP, email, Slack, or any unencrypted channel.
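For the in-transit half, a cheap guardrail is a startup check that refuses any configured endpoint with a plaintext scheme. A minimal sketch, with illustrative function and variable names:

```javascript
// Minimal sketch: fail fast at boot if any configured endpoint would
// transmit secrets over an unencrypted channel. Names are illustrative.
const INSECURE_SCHEMES = new Set(['http:', 'ftp:', 'ws:']);

function assertEncryptedTransport(name, urlString) {
  // WHATWG URL parser, built into Node.js
  const { protocol } = new URL(urlString);
  if (INSECURE_SCHEMES.has(protocol)) {
    throw new Error(`${name} uses insecure scheme ${protocol}; use TLS`);
  }
}

// Run once at boot, before any secret leaves the process.
assertEncryptedTransport('SECRETS_MANAGER_URL', 'https://vault.example.com');
```

Failing loudly at startup is far better than discovering months later that a misconfigured URL has been sending credentials in the clear.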

6. Use Short-Lived Credentials Where Possible

The best secret is one that expires before it can be exploited. Prefer short-lived tokens over permanent API keys. Use Vault dynamic secrets for database access. Use AWS STS temporary credentials instead of IAM access keys. Use OIDC for CI/CD authentication. The shorter the credential's lifespan, the smaller the window for exploitation.
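The expiry-aware caching that makes short-lived credentials practical can be sketched in a few lines. Here fetchToken is a stand-in for whatever actually issues your tokens (STS, Vault, an OIDC provider); the names and shape are illustrative, not a real API:

```javascript
// Sketch: cache a short-lived credential and refresh it shortly before
// expiry, so callers never hold a token that is about to lapse.
// fetchToken is a placeholder for your real issuer and must resolve to
// { value, expiresAt } with expiresAt in epoch milliseconds.
function makeTokenCache(fetchToken, { refreshMarginMs = 60_000 } = {}) {
  let cached = null;

  return async function getToken(now = Date.now()) {
    if (cached && now < cached.expiresAt - refreshMarginMs) {
      return cached.value; // still comfortably valid, reuse it
    }
    cached = await fetchToken(); // expired or close to it, fetch a fresh one
    return cached.value;
  };
}
```

The refresh margin matters: renewing a minute early means a token is never handed to a caller only to expire mid-request.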

The Secret Management Maturity Model:

Level 1 — Secrets in .env files, manual management.
Level 2 — CI/CD secrets with .gitignore and pre-commit hooks.
Level 3 — Centralized secrets manager with audit logging.
Level 4 — Dynamic secrets with automatic rotation and zero-standing credentials.

Most teams operate at Level 1 or 2. Getting to Level 3 should be your near-term goal. Level 4 is the gold standard for organizations with serious compliance requirements.

Conclusion

Environment variables and secrets management sit at the intersection of developer experience and security. Get it right, and your team moves fast with confidence that credentials are safe. Get it wrong, and you are one accidental git push away from a breach that costs orders of magnitude more to fix than it would have cost to prevent.

The core principles are straightforward. Keep configuration out of code. Never commit secrets to version control. Validate your environment at startup. Understand that frontend environment variables are public. Use the right tool for the right context: .env files for local development, CI/CD secrets for pipelines, and a proper secrets manager for production. Rotate credentials on a schedule. Apply least privilege everywhere. Log access, not values.

If you are starting from scratch, here is the minimum viable setup: create a .env.example file documenting every variable your application needs, add .env to .gitignore, validate your configuration with Zod at startup, install gitleaks as a pre-commit hook, and store production secrets in your cloud provider's secrets manager. That combination covers 90% of the risk surface and takes less than an hour to implement.

For teams that are further along, the next step is eliminating long-lived credentials. Migrate your CI/CD pipelines to OIDC. Evaluate HashiCorp Vault or Doppler for centralized secrets management. Implement automatic rotation for database credentials and API keys. Set up audit logging so you can answer "who accessed what, when?" within minutes, not days.

The $50,000 incident I described at the beginning of this guide was entirely preventable. A properly configured .gitignore, a pre-commit hook, or even a quick git diff review before pushing would have caught it. The tools and practices in this guide exist precisely to make that kind of mistake structurally impossible rather than relying on individual vigilance. Build the systems, enforce the guardrails, and treat secrets with the seriousness they deserve. Your future self — and your organization's security team — will thank you.
