# SQLite to PostgreSQL Migration: A Decentralized Platform's Database Journey
SQLite is perfect for development. PostgreSQL is what you need for production. Here's how we migrated ClawDUX without losing data.
## Why We Switched
| Feature | SQLite | PostgreSQL |
|---|---|---|
| Concurrent writes | Single writer lock | MVCC (many writers) |
| Connection pooling | N/A | PgBouncer support |
| Full-text search | Basic | Excellent (tsvector) |
| JSON operations | Limited | JSONB with indexing |
| Docker-friendly | File-based (volume) | Service (container) |
## The Prisma Approach
Our schema stays the same. Only the provider changes:
```prisma
// Development (schema.prisma)
datasource db {
  provider = "sqlite"
  url = env("DATABASE_URL")
}

// Production (modified by Dockerfile)
datasource db {
  provider = "postgresql"
  url = env("DATABASE_URL")
}
```
## Docker Configuration
```yaml
# docker-compose.yml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: clawdux
      POSTGRES_USER: clawdux
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U clawdux"]
      interval: 5s
      timeout: 5s
      retries: 5

  backend:
    build: ./backend
    environment:
      DATABASE_URL: postgresql://clawdux:${DB_PASSWORD}@postgres:5432/clawdux
    depends_on:
      postgres:
        condition: service_healthy

# Named volumes must be declared at the top level
volumes:
  pgdata:
```
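The `service_healthy` condition only orders containers inside Compose. Scripts that run outside Docker (like the migration script below) need their own retry loop. Here is a minimal, generic sketch; the `wait_for` helper name is ours, and in practice you would pass it a real `psycopg2.connect` attempt:

```python
import time


def wait_for(check, retries=5, delay=1.0):
    """Retry `check` until it returns without raising, or give up.

    Mirrors the Compose healthcheck policy (5 retries, short interval)
    for code running outside Docker's dependency ordering.
    """
    last_err = None
    for _ in range(retries):
        try:
            return check()
        except Exception as e:
            last_err = e
            time.sleep(delay)
    raise RuntimeError(f"gave up after {retries} attempts: {last_err}")


# In production you would pass a real connection attempt, e.g.:
# conn = wait_for(lambda: psycopg2.connect(os.environ["DATABASE_URL"]))
```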
## The Provider Switch in Dockerfile
```dockerfile
# backend/Dockerfile
FROM node:20-slim
WORKDIR /app/backend
COPY . .

# Install dependencies (the Prisma CLI is needed for generate)
RUN npm ci

# Switch SQLite → PostgreSQL
RUN sed -i 's/provider = "sqlite"/provider = "postgresql"/' \
    prisma/schema.prisma

# Generate Prisma client for PostgreSQL
RUN npx prisma generate

# Build TypeScript
RUN npm run build

# On startup: push schema + start server
CMD npx prisma db push && node dist/index.js
```
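If your build environment lacks `sed` (Windows runners, for example), the same provider switch is easy to express in Python. This is a sketch of an equivalent transformation, not part of the ClawDUX build; the `switch_provider` helper is hypothetical:

```python
import re


def switch_provider(schema: str, target: str = "postgresql") -> str:
    """Rewrite the datasource provider, like the Dockerfile's sed step.

    The regex tolerates extra whitespace around '=' that Prisma's
    formatter sometimes inserts.
    """
    return re.sub(r'provider\s*=\s*"sqlite"', f'provider = "{target}"', schema)
```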
## Data Migration Script
```python
import sqlite3

import psycopg2


def migrate_table(table: str, sqlite_conn, pg_conn) -> int:
    """Migrate a single table from SQLite to PostgreSQL."""
    cursor = sqlite_conn.cursor()
    cursor.execute(f'SELECT * FROM "{table}"')
    rows = cursor.fetchall()
    if not rows:
        return 0

    # Get column names
    columns = [desc[0] for desc in cursor.description]
    placeholders = ', '.join(['%s'] * len(columns))
    col_names = ', '.join(f'"{c}"' for c in columns)

    pg_cursor = pg_conn.cursor()
    for row in rows:
        # Handle SQLite → PG type differences
        converted = []
        for val in row:
            if isinstance(val, bytes):
                val = val.decode('utf-8')
            converted.append(val)
        try:
            # Savepoint so one bad row doesn't abort the whole transaction
            pg_cursor.execute('SAVEPOINT row_sp')
            pg_cursor.execute(
                f'INSERT INTO "{table}" ({col_names}) '
                f'VALUES ({placeholders}) '
                f'ON CONFLICT DO NOTHING',
                converted,
            )
            pg_cursor.execute('RELEASE SAVEPOINT row_sp')
        except Exception as e:
            pg_cursor.execute('ROLLBACK TO SAVEPOINT row_sp')
            print(f"Error migrating {table} row: {e}")
    pg_conn.commit()
    return len(rows)


# Usage:
# sqlite_conn = sqlite3.connect('dev.db')
# pg_conn = psycopg2.connect('postgresql://...')
# for table in ['User', 'Listing', 'Order', 'Offer']:
#     count = migrate_table(table, sqlite_conn, pg_conn)
#     print(f"Migrated {count} rows from {table}")
```
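After the copy finishes, it is worth verifying that per-table row counts match on both sides. A minimal sketch: the `diff_counts` helper is ours, and the count dicts would come from running `SELECT COUNT(*)` per table on each connection:

```python
def diff_counts(sqlite_counts: dict, pg_counts: dict) -> list:
    """Return tables whose row counts differ between source and target.

    Each dict maps table name -> row count, gathered by running
    SELECT COUNT(*) per table on the respective connection.
    """
    mismatched = []
    for table, n_src in sqlite_counts.items():
        if pg_counts.get(table, 0) != n_src:
            mismatched.append(table)
    return mismatched
```

An empty result means every table landed with the expected number of rows; anything else names the tables to re-run or inspect.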
## Production Gotchas
- BigInt handling: SQLite stores as INTEGER, PostgreSQL as BIGINT — Prisma handles this transparently
- Boolean values: SQLite uses 0/1, PostgreSQL uses true/false — Prisma handles this too
- DateTime: SQLite stores as TEXT, PostgreSQL as TIMESTAMP — make sure your seed data uses ISO 8601
- Case sensitivity: PostgreSQL folds unquoted identifiers to lowercase, but quoted identifiers keep their case. Prisma quotes every identifier, so mixed-case table names like "User" survive intact
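Prisma absorbs most of these differences, but if you ever write to PostgreSQL outside Prisma (as the migration script does), the conversions above can be made explicit. A sketch with a hypothetical `coerce_value` helper, where `pg_type` is the target column type as reported by `information_schema.columns`:

```python
from datetime import datetime


def coerce_value(val, pg_type: str):
    """Map raw SQLite values onto PostgreSQL-friendly Python types.

    Covers the gotchas above: 0/1 booleans, ISO 8601 datetime text,
    and BLOB-backed text columns.
    """
    if val is None:
        return None
    if pg_type == "boolean":
        # SQLite stores booleans as INTEGER 0/1
        return bool(val)
    if pg_type.startswith("timestamp"):
        # SQLite stores datetimes as TEXT; this requires ISO 8601
        return datetime.fromisoformat(str(val).replace("Z", "+00:00"))
    if isinstance(val, bytes):
        return val.decode("utf-8")
    return val
```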
This migration path is exactly what ClawDUX uses in production — SQLite for fast local development, PostgreSQL in Docker for deployment.
The core logic discussed in this article has been integrated into the ClawDUX API. Access ClawDUX-core for full permissions, or browse the marketplace to discover verified trading strategies.