
Examples

Copy-pasteable recipes for the patterns people actually hit: local dev, CI, test suites, bulk loads, deterministic fixtures. Every snippet targets the current @otg-dev/seedforge API.

Intent: Migrations applied, empty DB, you want 10k FK-correct rows in one command.

seedforge --db postgres://localhost/mydb --count 10000 --fast --seed 42

That’s it. Seedforge introspects the live schema, topologically orders tables by FK dependency, and inserts via COPY. --seed 42 makes the output deterministic so two runs produce byte-identical data.
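The FK-correct insert order boils down to a topological sort of the table graph: every table is inserted only after the tables its foreign keys reference. A minimal sketch of that step (Kahn-style; the table names are hypothetical and this is not seedforge's actual code):

```typescript
// deps maps each table to the tables its foreign keys reference.
// Returns an insert order where parents always precede children.
function insertOrder(deps: Record<string, string[]>): string[] {
  const tables = Object.keys(deps)
  const ordered: string[] = []
  const placed = new Set<string>()
  while (ordered.length < tables.length) {
    // a table is ready once all of its referenced tables are placed
    const ready = tables.filter(
      (t) => !placed.has(t) && deps[t].every((d) => placed.has(d)),
    )
    if (ready.length === 0) throw new Error('FK cycle detected')
    for (const t of ready) {
      placed.add(t)
      ordered.push(t)
    }
  }
  return ordered
}

// orders references users and products, so both are inserted first
const order = insertOrder({ users: [], products: [], orders: ['users', 'products'] })
```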

Intent: Spin up Postgres, run migrations, seed, test — on every PR.

.github/workflows/preview.yml
name: PR Preview
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
        ports: ['5432:5432']
        options: >-
          --health-cmd pg_isready --health-interval 10s
          --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npm run db:migrate
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/testdb
      - name: Seed test data
        run: npx @otg-dev/seedforge --db $DATABASE_URL --count 100 --seed 42 --quiet --yes
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/testdb
      - run: npm test
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/testdb

Intent: Each test gets fresh, isolated seeded data that rolls back automatically.

withSeed() opens a transaction on first use and rolls it back on teardown(). Scope it per-suite or per-test.

tests/orders.test.ts
import { afterEach, beforeEach, describe, expect, it } from 'vitest'
import { withSeed } from '@otg-dev/seedforge'
// your application's service under test (the path here is illustrative)
import { orderService } from '../src/orderService'

describe('order service', () => {
  let ctx: ReturnType<typeof withSeed>

  beforeEach(() => {
    ctx = withSeed(process.env.TEST_DATABASE_URL!, {
      seed: 42,
      transaction: true,
    })
  })

  afterEach(async () => {
    await ctx.teardown()
  })

  it('calculates totals across seeded orders', async () => {
    await ctx.seed('users', 5)
    await ctx.seed('products', 10)
    const orders = await ctx.seed('orders', 20)

    for (const order of orders) {
      const total = await orderService.calculateTotal(order.id)
      expect(total).toBeGreaterThan(0)
    }
  })
})

4. Prisma schema → SQL file (no live DB)


Intent: Generate a seed file offline so CI can cache it or apply it before the DB is reachable.

seedforge --prisma ./prisma/schema.prisma --output seed.sql --count 100 --seed 42

Then apply whenever the DB is up:

psql "$DATABASE_URL" < seed.sql

Same pattern works for Drizzle (--drizzle ./src/db/schema.ts), TypeORM (--typeorm ./src/entities), and JPA (--jpa ./src/main/java). No database connection required at generation time.

Intent: E2E tests assert on specific emails, usernames, product names. Fixed seed = stable fixtures.

seedforge --db postgres://localhost/e2e --count 25 --seed 1337 --yes
e2e/auth.spec.ts
import { expect, test } from '@playwright/test'

test('login flow', async ({ page }) => {
  await page.goto('/login')
  await page.fill('[name=email]', 'laurie59@example.com')
  await page.fill('[name=password]', 'password')
  await page.click('button[type=submit]')
  await expect(page).toHaveURL('/dashboard')
})

--seed 1337 guarantees row 0 of users is always the same fake identity. Change the seed number, regenerate your assertions, commit both.
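The stability comes from feeding every generator a deterministic PRNG: the same seed always produces the same stream, so the same stream always produces the same rows. A sketch of the idea using mulberry32 (an illustrative generator choice, not necessarily what seedforge uses internally):

```typescript
// A tiny deterministic PRNG: the same seed yields the same sequence of
// floats in [0, 1), which is what makes fixture data reproducible.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0
  return () => {
    a = (a + 0x6d2b79f5) | 0
    let t = Math.imul(a ^ (a >>> 15), 1 | a)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

const a = mulberry32(1337)
const b = mulberry32(1337)
// both generators emit an identical stream, so "row 0" never changes
```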

Intent: Load-test a query, stress a migration, measure index build time.

seedforge --db postgres://localhost/bench --count 1000000 --fast --seed 42

Programmatic equivalent with tuned batching:

import { seed } from '@otg-dev/seedforge'

const result = await seed('postgres://localhost/bench', {
  count: 1_000_000,
  fast: true,
  seed: 42,
  batchSize: 1000,
})
console.log(`${result.rowCount} rows in ${result.duration}ms`)

Seedforge streams generation table-by-table and flushes each batch before allocating the next, so peak memory stays bounded regardless of count. For truly enormous loads where the DB itself is the bottleneck, pipe to a file instead:

seedforge --db postgres://localhost/bench --count 5000000 --output bulk.sql --seed 42
psql bench < bulk.sql
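The bounded-memory behavior reduces to a generate-then-flush loop: build one batch, hand it to the write path, let it go before building the next. A sketch of that loop, with `flush` standing in for whichever sink (COPY or file) is active; this is illustrative, not seedforge's actual internals:

```typescript
// Writes `total` rows in batches of at most `batchSize`. Each batch is
// fully flushed before the next is allocated, so peak memory is bounded
// by batchSize rather than by total.
async function seedInBatches(
  total: number,
  batchSize: number,
  makeRow: (i: number) => unknown,
  flush: (rows: unknown[]) => Promise<void>,
): Promise<number> {
  let written = 0
  while (written < total) {
    const n = Math.min(batchSize, total - written)
    const batch = Array.from({ length: n }, (_, i) => makeRow(written + i))
    await flush(batch) // batch goes out of scope before the next is built
    written += n
  }
  return written
}
```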

Intent: Most users should be user, a handful admin, the occasional moderator — not a uniform 33/33/33.

.seedforge.yml
tables:
  users:
    columns:
      role:
        values: [admin, user, moderator]
        weights: [0.05, 0.9, 0.05]
      subscription:
        values: [free, basic, premium, enterprise]
        weights: [0.6, 0.25, 0.1, 0.05]

Or inline via the programmatic API:

import { createSeeder } from '@otg-dev/seedforge'

const seeder = await createSeeder('postgres://localhost/mydb', { seed: 42 })
await seeder.seed('users', 1000, {
  columns: {
    role: {
      values: ['admin', 'user', 'moderator'],
      weights: [0.05, 0.9, 0.05],
    },
  },
})
await seeder.teardown()

Weights don’t have to sum to 1 — seedforge normalizes them.
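Normalization just means the sampler divides by the total weight, so [5, 90, 5] behaves identically to [0.05, 0.9, 0.05]. A sketch of that kind of weighted pick (illustrative only, not seedforge's actual sampler):

```typescript
// Picks a value with probability proportional to its weight. Scaling the
// random draw by the total means weights never need to sum to 1.
function weightedPick<T>(values: T[], weights: number[], rand: () => number): T {
  const total = weights.reduce((a, b) => a + b, 0)
  let r = rand() * total
  for (let i = 0; i < values.length; i++) {
    r -= weights[i]
    if (r <= 0) return values[i]
  }
  return values[values.length - 1] // guard against floating-point drift
}
```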

Intent: Most users have 1 order, a few power users have many. Long-tail, not flat.

.seedforge.yml
tables:
  orders:
    relationships:
      user_id:
        cardinality: "1..10"
        distribution: zipf # heavy head, long tail
  order_items:
    relationships:
      order_id:
        cardinality: "1..5"
        distribution: normal # most orders have 2-3 items
  comments:
    relationships:
      post_id:
        cardinality: "0..20"
        distribution: zipf # viral posts dominate

Programmatic form:

await seeder.seed('orders', 500, {
  relationship: {
    user_id: {
      cardinality: { min: 1, max: 10, distribution: 'zipf' },
    },
  },
})

Available distributions: uniform (default), zipf (long-tail), normal (bell curve).
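Conceptually, each distribution is just a different way of mapping a uniform random draw onto the min..max range. A sketch of how that mapping could look (illustrative only; seedforge's internals may differ):

```typescript
// Draws a per-parent child count from min..max under each distribution.
// `rand` is any uniform [0, 1) source, e.g. a seeded PRNG.
function drawCardinality(
  min: number,
  max: number,
  distribution: 'uniform' | 'zipf' | 'normal',
  rand: () => number,
): number {
  const n = max - min + 1
  if (distribution === 'uniform') return min + Math.floor(rand() * n)
  if (distribution === 'zipf') {
    // rank k gets weight 1/(k+1): small counts common, large counts rare
    const weights = Array.from({ length: n }, (_, k) => 1 / (k + 1))
    const total = weights.reduce((a, b) => a + b, 0)
    let r = rand() * total
    for (let k = 0; k < n; k++) {
      r -= weights[k]
      if (r <= 0) return min + k
    }
    return max
  }
  // normal-ish: averaging uniform draws clusters values around the midpoint
  const u = (rand() + rand() + rand()) / 3
  return min + Math.floor(u * n)
}
```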

Intent: Skip migration tracking tables, audit logs, anything you don’t want seedforge to touch.

Flag form (quick one-off):

seedforge --db $DATABASE_URL --exclude schema_migrations _prisma_migrations audit_log

Config form with glob patterns (preferred for repeat runs):

.seedforge.yml
exclude:
  - schema_migrations
  - _prisma_migrations
  - "pg_*"
  - "audit_*"
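The glob patterns here treat * as "any run of characters" and everything else as literal. A sketch of that matching rule (illustrative; seedforge's matcher may support more syntax):

```typescript
// Escapes regex metacharacters so pattern text is matched literally.
function escapeRe(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

// True if the table name matches any exclude pattern; "*" expands to ".*".
function matchesExclude(table: string, patterns: string[]): boolean {
  return patterns.some((p) => {
    const re = new RegExp('^' + p.split('*').map(escapeRe).join('.*') + '$')
    return re.test(table)
  })
}
```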

Intent: One config file, many environments. Staging, local, CI — same .seedforge.yml.

.seedforge.yml
connection:
  url: ${DATABASE_URL}
  schema: ${DB_SCHEMA:-public}
count: 100
seed: 42

Then in CI:

DATABASE_URL=postgres://postgres:postgres@localhost:5432/testdb \
seedforge --config .seedforge.yml

Any ${VAR} in the config gets resolved at load time. Missing variables error out early rather than silently producing a broken connection string.
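The resolution rule is: substitute ${VAR} from the environment, fall back to the default in ${VAR:-default}, and fail fast when neither exists. It can be sketched like this (illustrative only, not seedforge's actual config loader):

```typescript
// Expands ${VAR} and ${VAR:-default} placeholders in a config string.
// Throws immediately on a missing variable with no default, so a broken
// connection string never makes it to the database driver.
function resolveEnv(input: string, env: Record<string, string | undefined>): string {
  return input.replace(/\$\{(\w+)(?::-([^}]*))?\}/g, (_, name, fallback) => {
    const value = env[name] ?? fallback
    if (value === undefined) {
      throw new Error(`Missing environment variable: ${name}`)
    }
    return value
  })
}
```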


Missing a recipe? Open an issue — the examples that ship with the repo live in examples/.