Migrations Rules

22 rules found for migrations

Do not rename columns

When renaming columns in PostgreSQL, follow a safe migration pattern to avoid breaking changes to applications:

1. Create a new column
2. Update application code to write to both the old and new columns
3. Backfill data from the old column to the new column
4. Update application code to read from the new column instead of the old one
5. Once all deployments are complete, stop writing to the old column
6. Drop the old column in a later migration

Bad:

```sql
ALTER TABLE users RENAME COLUMN some_column TO new_name;
```
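The six steps above can be sketched as a sequence of migrations. This uses the example's `users.some_column` and `new_name`; the `TEXT` type and the batch bounds are illustrative assumptions, not part of the rule:

```sql
-- Migration 1 (step 1): add the new column
ALTER TABLE users ADD COLUMN new_name TEXT;

-- Steps 2 and 4-5 happen in application deploys:
-- write to both columns, later read only from new_name

-- Migration 2 (step 3): backfill in batches to keep row locks short
UPDATE users SET new_name = some_column
WHERE new_name IS NULL AND id BETWEEN 1 AND 10000;
-- ...repeat for the remaining id ranges...

-- Migration 3 (step 6): drop the old column once nothing reads or writes it
ALTER TABLE users DROP COLUMN some_column;
```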

Tags: postgresql, prisma, supabase, mysql, drizzle, migrations

Do not rename tables

When renaming tables, use a multi-step approach instead of a direct rename to prevent downtime:

1. Create a new table
2. Write to both tables
3. Backfill data from the old table to the new table
4. Move reads from the old table to the new table
5. Stop writing to the old table
6. Drop the old table

Bad:

```sql
ALTER TABLE users RENAME TO customers;
```
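The steps above can be sketched in SQL. The `customers` name comes from the Bad example; the `LIKE ... INCLUDING ALL` clause and the backfill statement are illustrative assumptions (dual writes are usually handled in application code or triggers):

```sql
-- Step 1: create the new table with the same columns, defaults, and indexes
CREATE TABLE customers (LIKE users INCLUDING ALL);

-- Step 2: deploy application code (or triggers) that writes to both tables

-- Step 3: backfill existing rows
INSERT INTO customers SELECT * FROM users ON CONFLICT DO NOTHING;

-- Steps 4-5: move reads to customers, then stop writing to users

-- Step 6: drop the old table in a later migration
DROP TABLE users;
```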

Tags: postgresql, prisma, supabase, mysql, drizzle, migrations

Add indexes for foreign keys in PostgreSQL

When adding a foreign key constraint in PostgreSQL, always add a corresponding index. PostgreSQL does not automatically index the referencing column, and an unindexed foreign key makes joins and cascading updates or deletes slow.

Bad:

```sql
ALTER TABLE orders ADD CONSTRAINT fk_orders_user_id FOREIGN KEY (user_id) REFERENCES users(id);
```

Good:

```sql
ALTER TABLE orders ADD CONSTRAINT fk_orders_user_id FOREIGN KEY (user_id) REFERENCES users(id);
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Always have id, created_at, updated_at columns in PostgreSQL

All CREATE TABLE statements must include id, created_at, and updated_at columns.

Bad:

```sql
CREATE TABLE users (
    email VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(100) NOT NULL
);
```

Good:

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(100) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() NOT NULL,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW() NOT NULL
);
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Column naming standards in PostgreSQL

Maintain consistent naming conventions for new columns: name primary keys id, use snake_case, and end foreign key columns in _id.

Bad:

```sql
-- Bad: Column named uuid instead of id
ALTER TABLE users ADD COLUMN uuid UUID PRIMARY KEY DEFAULT gen_random_uuid();
```

```sql
-- Bad: Column name in camelCase instead of snake_case
ALTER TABLE users ADD COLUMN firstName VARCHAR(255);
```

```sql
-- Bad: Foreign key column ends with _uuid instead of _id
ALTER TABLE orders ADD COLUMN user_uuid UUID REFERENCES users(id);
```

Good:

```sql
-- Good: Consistent naming
ALTER TABLE users ADD COLUMN id UUID PRIMARY KEY DEFAULT gen_random_uuid();
ALTER TABLE users ADD COLUMN first_name VARCHAR(255);
ALTER TABLE orders ADD COLUMN user_id UUID REFERENCES users(id);
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Index naming standards in PostgreSQL

For indexes, use the following naming standards:

- idx_tablename_columnname for single-column indexes
- idx_tablename_col1_col2 for multi-column indexes

Bad:

```sql
-- Bad: Inconsistent and unclear index names
CREATE INDEX email_index ON users(email);
CREATE INDEX user_orders ON orders(user_id);
CREATE INDEX idx_prod_cat_stat ON products(category, status);
```

Good:

```sql
-- Good: Consistent index naming
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_products_category_status ON products(category, status);
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Limit non unique indexes in PostgreSQL

Limit non-unique indexes to a maximum of three columns in PostgreSQL databases.

Bad:

```sql
CREATE INDEX index_users_on_multiple_columns ON users (column_a, column_b, column_c, column_d);
```

Good:

```sql
CREATE INDEX CONCURRENTLY index_users_on_selective_columns ON users (column_d, column_b);
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Only concurrent indexes in PostgreSQL

When creating indexes in PostgreSQL, always use the CONCURRENTLY option to prevent blocking writes during index creation.

Bad:

```sql
CREATE INDEX idx_users_email ON users(email);
```

Good:

```sql
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Always use JSONB in PostgreSQL

Always use the jsonb data type instead of json when creating columns in PostgreSQL databases. jsonb stores a parsed binary representation that supports indexing and efficient operators, while json stores raw text that must be reparsed on every access.

Bad:

```sql
ALTER TABLE users ADD COLUMN properties json;
```

Good:

```sql
ALTER TABLE users ADD COLUMN properties jsonb;
```
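As a sketch of what jsonb enables in practice (the GIN index name and the `plan` key are illustrative assumptions, not part of the rule):

```sql
-- jsonb columns support GIN indexes and containment queries; json columns do not
CREATE INDEX CONCURRENTLY idx_users_properties ON users USING GIN (properties);

-- Containment lookups like this can use the GIN index
SELECT * FROM users WHERE properties @> '{"plan": "pro"}';
```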

Tags: postgresql, prisma, supabase, drizzle, migrations

Use check constraints for setting NOT NULL columns in PostgreSQL

When adding a NOT NULL constraint to an existing column in PostgreSQL, use a check constraint first to avoid blocking reads and writes while every row is checked.

Bad:

```sql
-- This can cause performance issues with large tables
ALTER TABLE users ALTER COLUMN some_column SET NOT NULL;
```

Good:

```sql
-- Step 1: Add a check constraint without validation
ALTER TABLE users ADD CONSTRAINT users_some_column_null CHECK (some_column IS NOT NULL) NOT VALID;

-- Step 2: In a separate transaction, validate the constraint
ALTER TABLE users VALIDATE CONSTRAINT users_some_column_null;

-- Step 3: Add the NOT NULL constraint and remove the check constraint
ALTER TABLE users ALTER COLUMN some_column SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT users_some_column_null;
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Split foreign keys in PostgreSQL

When adding foreign keys in Postgres migrations, split the operation into two steps to avoid blocking writes on both tables:

1. First create the foreign key constraint without validation
2. Then validate existing data in a separate migration

Bad:

```sql
-- In a single migration
ALTER TABLE users ADD CONSTRAINT fk_users_orders FOREIGN KEY (order_id) REFERENCES orders (id);
```

Good:

```sql
-- In first migration: add without validating
ALTER TABLE users ADD CONSTRAINT fk_users_orders FOREIGN KEY (order_id) REFERENCES orders (id) NOT VALID;

-- In second migration: validate existing data
ALTER TABLE users VALIDATE CONSTRAINT fk_users_orders;
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Split unique constraints in PostgreSQL

When adding unique constraints in PostgreSQL, create the unique index concurrently before adding the constraint, to avoid blocking reads and writes.

Bad:

```sql
-- Creates a unique constraint directly, which blocks reads and writes
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE (email);
```

Good:

```sql
-- First create a unique index concurrently (non-blocking)
CREATE UNIQUE INDEX CONCURRENTLY users_email_unique_idx ON users (email);

-- Then add the constraint using the existing index
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE USING INDEX users_email_unique_idx;
```

Tags: postgresql, prisma, supabase, drizzle, migrations

Ensure removed column is ignored in Prisma

When creating a migration that removes a column from the database, ensure that the column is marked with the @ignore attribute in the Prisma schema. Search for the .prisma schema in the codebase to verify this.

```sql
ALTER TABLE "User" DROP COLUMN "createdAt";
ALTER TABLE "User" DROP COLUMN "updatedAt";
```

```prisma
model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  password  String
  remarks   String?
  createdAt DateTime @default(now()) @ignore
  updatedAt DateTime @updatedAt @ignore
}
```

Tags: postgresql, prisma, migrations

Split check constraints

When adding check constraints in migrations, split the operation into two steps to avoid blocking writes during the table scan:

1. First create the check constraint without validation
2. Then validate existing data in a separate migration

Bad:

```sql
-- In a single migration
ALTER TABLE users ADD CONSTRAINT ck_users_age_positive CHECK (age >= 0);
```

Good:

```sql
-- In first migration: add without validating
ALTER TABLE users ADD CONSTRAINT ck_users_age_positive CHECK (age >= 0) NOT VALID;

-- In second migration: validate existing data
ALTER TABLE users VALIDATE CONSTRAINT ck_users_age_positive;
```

Tags: postgresql, prisma, supabase, mysql, drizzle, migrations

Change column types safely in SQLAlchemy

When changing a column type that requires a table rewrite, follow these steps:

1. Create a new column with the desired type
2. Write to both columns during the transition period
3. Backfill data from the old column to the new column
4. Move reads from the old column to the new column
5. Stop writing to the old column
6. Drop the old column

Bad:

```python
def upgrade():
    # Directly changing a column type can cause table locks
    op.alter_column('users', 'some_column',
                    type_=sa.String(50),
                    existing_type=sa.Integer())

def downgrade():
    op.alter_column('users', 'some_column',
                    type_=sa.Integer(),
                    existing_type=sa.String(50))
```

Good:

```python
# Migration 1: Add new column
def upgrade():
    # Adding a new column first
    op.add_column('users', sa.Column('some_column_new', sa.String(50)))

def downgrade():
    op.drop_column('users', 'some_column_new')
```

```python
# Migration 2: Complete the transition (after backfilling data)
def upgrade():
    # After ensuring all data is migrated
    op.drop_column('users', 'some_column')
    op.alter_column('users', 'some_column_new', new_column_name='some_column')

def downgrade():
    op.alter_column('users', 'some_column', new_column_name='some_column_new')
    op.add_column('users', sa.Column('some_column', sa.Integer()))
```

Tags: postgresql, mysql, sqlalchemy, migrations, alembic

Ensure index is not already covered in SQLAlchemy

Ensure that individual column indexes in SQLAlchemy are not already covered by existing composite indexes.

Bad:

```python
class User(Base):
    __tablename__ = "users"
    __table_args__ = (
        Index("email_username_idx", "email", "username"),
    )

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255), index=True)  # covered by email_username_idx
    username: Mapped[str] = mapped_column(String(100))
    status: Mapped[str] = mapped_column(String(20))
```

Good:

```python
class User(Base):
    __tablename__ = "users"
    # Only composite index needed - covers both email and email+username queries
    __table_args__ = (
        Index("email_username_idx", "email", "username"),
    )

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255))
    username: Mapped[str] = mapped_column(String(100))
    status: Mapped[str] = mapped_column(String(20))
```

Tags: postgresql, sqlalchemy, alembic, migrations

Limit non unique indexes in SQLAlchemy

Limit non-unique indexes to a maximum of three columns in PostgreSQL databases.

Bad:

```python
def upgrade():
    with op.get_context().autocommit_block():
        op.create_index(
            'index_users_on_multiple_columns',
            'users',
            ['column_a', 'column_b', 'column_c', 'column_d'],
            postgresql_concurrently=True
        )
```

Good:

```python
def upgrade():
    # Limit to most selective columns for better performance
    with op.get_context().autocommit_block():
        op.create_index(
            'index_users_on_selective_columns',
            'users',
            ['column_d', 'column_b'],
            postgresql_concurrently=True
        )
```

Tags: postgresql, sqlalchemy, alembic, migrations

Only concurrent indexes in SQLAlchemy

When creating or dropping indexes in PostgreSQL using SQLAlchemy migrations, always use the postgresql_concurrently=True option within an autocommit block. This prevents blocking writes during index operations.

For upgrade():

Bad:

```python
def upgrade():
    op.create_index('idx_users_email', 'users', ['email'])
```

Good:

```python
def upgrade():
    with op.get_context().autocommit_block():
        op.create_index('idx_users_email', 'users', ['email'], postgresql_concurrently=True)
```

For downgrade():

Bad:

```python
def downgrade():
    op.drop_index('idx_users_email', 'users')
```

Good:

```python
def downgrade():
    with op.get_context().autocommit_block():
        op.drop_index('idx_users_email', 'users', postgresql_concurrently=True)
```

Tags: postgresql, sqlalchemy, migrations, alembic

Add check constraints safely in SQLAlchemy

When adding check constraints that could affect large tables, create the constraint with NOT VALID first to avoid blocking writes during the validation scan.

Bad:

```python
def upgrade():
    # Directly creating a check constraint blocks writes during table scan
    op.create_check_constraint(
        'ck_users_age_positive',
        'users',
        'age >= 0'
    )
```

Good:

```python
# Migration 1: Create check constraint without validation
def upgrade():
    # Create the check constraint without validating existing data (non-blocking)
    op.create_check_constraint(
        'ck_users_age_positive',
        'users',
        'age >= 0',
        postgresql_not_valid=True
    )
```

```python
# Migration 2: Validate existing data
def upgrade():
    op.execute('ALTER TABLE users VALIDATE CONSTRAINT ck_users_age_positive')
```

Tags: postgresql, sqlalchemy, alembic, migrations

Add foreign keys safely in SQLAlchemy

When adding foreign keys in SQLAlchemy migrations, split the operation into two steps to avoid blocking writes on both tables:

1. First create the foreign key constraint without validation
2. Then validate existing data in a separate migration

Bad:

```python
def upgrade():
    # Directly creating a foreign key constraint can block writes on both tables
    op.create_foreign_key(
        'fk_users_orders',
        'users',
        'orders',
        ['order_id'],
        ['id']
    )
```

Good:

```python
# Migration 1: Add foreign key without validation
def upgrade():
    # Create the foreign key constraint without validating existing data
    op.create_foreign_key(
        'fk_users_orders',
        'users',
        'orders',
        ['order_id'],
        ['id'],
        postgresql_not_valid=True
    )
```

```python
# Migration 2: Validate existing data
def upgrade():
    op.execute('ALTER TABLE users VALIDATE CONSTRAINT fk_users_orders')
```

Tags: postgresql, sqlalchemy, alembic, migrations

Add unique constraints safely in SQLAlchemy

When adding unique constraints that could affect large tables, create the unique index concurrently first to avoid blocking reads and writes during the migration.

Bad:

```python
def upgrade():
    # Directly creating a unique constraint can block reads and writes
    op.create_unique_constraint('users_email_unique', 'users', ['email'])
```

Good:

```python
# Migration 1: Create unique index concurrently
def upgrade():
    # Concurrent index creation must run outside a transaction
    with op.get_context().autocommit_block():
        # Create the unique index concurrently (non-blocking)
        op.create_index(
            'users_email_unique_idx',
            'users',
            ['email'],
            unique=True,
            postgresql_concurrently=True
        )
```

```python
# Migration 2: Add constraint using existing index
def upgrade():
    # Add the unique constraint using the existing index
    op.create_unique_constraint(
        'users_email_unique',
        'users',
        ['email'],
        postgresql_using_index='users_email_unique_idx'
    )
```

Tags: postgresql, sqlalchemy, alembic, migrations

Verify query patterns are covered by an index in SQLAlchemy

Ensure that SQLAlchemy query patterns are covered by appropriate database indexes.

Bad:

```python
# queries.py
def get_user_by_email(session: Session, email: str):
    # Will perform a full table scan - no index on email
    return session.scalar(select(User).where(User.email == email))

# models.py
class User(Base):
    __tablename__ = 'users'

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255))  # No index
    status: Mapped[str] = mapped_column(String(50))
    created_at: Mapped[datetime] = mapped_column(DateTime)
```

Good:

```python
# queries.py
def get_user_by_email(session: Session, email: str):
    # Covered by the index on email
    return session.scalar(select(User).where(User.email == email))

# models.py
class User(Base):
    __tablename__ = 'users'

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255), index=True)
    status: Mapped[str] = mapped_column(String(50))
    created_at: Mapped[datetime] = mapped_column(DateTime)
```

Tags: postgresql, sqlalchemy, alembic, migrations