postgresql Rules
15 rules found for postgresql
Do not rename columns
When renaming columns in PostgreSQL, follow a safe migration pattern to avoid breaking changes to applications:

1. Create a new column
2. Update application code to write to both the old and new columns
3. Backfill data from the old column to the new column
4. Update application code to read from the new column instead of the old one
5. Once all deployments are complete, stop writing to the old column
6. Drop the old column in a later migration

Bad:

```sql
ALTER TABLE users RENAME COLUMN some_column TO new_name;
```
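A minimal SQL sketch of the safe pattern above; the table, column, and type names are illustrative, and the dual-write and read-switch steps happen in application code between migrations:

```sql
-- Migration 1: add the new column
ALTER TABLE users ADD COLUMN new_name text;

-- Between deployments: the application writes to both columns,
-- then existing rows are backfilled (ideally in batches)
UPDATE users SET new_name = some_column WHERE new_name IS NULL;

-- Migration 2, after reads have moved and writes to the old column have stopped:
ALTER TABLE users DROP COLUMN some_column;
```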
Tags: postgresql, mysql, drizzle, migrations
Do not rename tables
When renaming tables, use a multi-step approach instead of direct renaming to prevent downtime:

1. Create a new table
2. Write to both tables
3. Backfill data from the old table to the new table
4. Move reads from the old table to the new table
5. Stop writing to the old table
6. Drop the old table

Bad:

```sql
ALTER TABLE users RENAME TO customers;
```
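A minimal SQL sketch of the steps above; names are illustrative, LIKE ... INCLUDING ALL copies columns, defaults, check constraints, and indexes but not foreign keys, and the dual writes and read cutover happen in application code between migrations:

```sql
-- Migration 1: create the new table with the same structure as the old one
CREATE TABLE customers (LIKE users INCLUDING ALL);

-- Between deployments: the application writes to both tables,
-- then existing rows are backfilled from the old table
INSERT INTO customers SELECT * FROM users ON CONFLICT DO NOTHING;

-- Migration 2, after reads have moved and writes to the old table have stopped:
DROP TABLE users;
```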
Tags: postgresql, mysql, drizzle, migrations
Limit non-unique indexes in PostgreSQL
Limit non-unique indexes to a maximum of three columns in PostgreSQL databases.

Bad:

```sql
CREATE INDEX index_users_on_multiple_columns ON users (column_a, column_b, column_c, column_d);
```

Good:

```sql
CREATE INDEX CONCURRENTLY index_users_on_selective_columns ON users (column_d, column_b);
```
Tags: postgresql, drizzle, migrations
Only concurrent indexes in PostgreSQL
When creating indexes in PostgreSQL, always use the CONCURRENTLY option to prevent blocking writes during index creation.

Bad:

```sql
CREATE INDEX idx_users_email ON users(email);
```

Good:

```sql
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
```
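One operational caveat, sketched below: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and a failed concurrent build leaves behind an INVALID index that must be dropped before retrying (index name as in the example above):

```sql
-- If a previous concurrent build failed, remove the INVALID index first,
-- also without blocking writes, then retry the build.
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
```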
Tags: postgresql, drizzle, migrations
Always use JSONB in PostgreSQL
Always use the jsonb data type instead of json when creating columns in PostgreSQL databases.

Bad:

```sql
ALTER TABLE users ADD COLUMN properties json;
```

Good:

```sql
ALTER TABLE users ADD COLUMN properties jsonb;
```
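A short illustration of why jsonb is preferred: unlike json, a jsonb column can be indexed with GIN and queried with containment operators (index, column, and key names are illustrative):

```sql
-- GIN index on the jsonb column, built concurrently per the rule above
CREATE INDEX CONCURRENTLY idx_users_properties ON users USING GIN (properties);

-- Containment query that can use the GIN index
SELECT id FROM users WHERE properties @> '{"plan": "pro"}';
```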
Tags: postgresql, drizzle, migrations
Use check constraints for setting NOT NULL columns in PostgreSQL
When adding a NOT NULL constraint to an existing column in PostgreSQL, use a check constraint first to avoid blocking reads and writes while every row is checked. On PostgreSQL 12 and later, once the check constraint has been validated, SET NOT NULL can use it to prove the column contains no NULLs and skips the full table scan.

Bad:

```sql
-- This can cause performance issues with large tables
ALTER TABLE users ALTER COLUMN some_column SET NOT NULL;
```

Good:

```sql
-- Step 1: Add a check constraint without validation
ALTER TABLE users ADD CONSTRAINT users_some_column_null CHECK (some_column IS NOT NULL) NOT VALID;

-- Step 2: In a separate transaction, validate the constraint
ALTER TABLE users VALIDATE CONSTRAINT users_some_column_null;

-- Step 3: Add the NOT NULL constraint and remove the check constraint
ALTER TABLE users ALTER COLUMN some_column SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT users_some_column_null;
```
Tags: postgresql, drizzle, migrations
Split foreign keys in PostgreSQL
When adding foreign keys in Postgres migrations, split the operation into two steps to avoid blocking writes on both tables:

1. First create the foreign key constraint without validation
2. Then validate existing data in a separate migration

A constraint added with NOT VALID is still enforced for new inserts and updates; VALIDATE CONSTRAINT only scans existing rows and takes a much weaker lock than adding a validated constraint in one step.

Bad:

```sql
-- In a single migration
ALTER TABLE users ADD CONSTRAINT fk_users_orders FOREIGN KEY (order_id) REFERENCES orders (id);
```

Good:

```sql
-- In first migration: add without validating
ALTER TABLE users ADD CONSTRAINT fk_users_orders FOREIGN KEY (order_id) REFERENCES orders (id) NOT VALID;

-- In second migration: validate existing data
ALTER TABLE users VALIDATE CONSTRAINT fk_users_orders;
```
Tags: postgresql, drizzle, migrations
Split unique constraints in PostgreSQL
When adding unique constraints in PostgreSQL, create the unique index concurrently first, then add the constraint on top of it, to avoid blocking reads and writes. Note that when the constraint is added this way, PostgreSQL renames the index to match the constraint name.

Bad:

```sql
-- Creates a unique constraint directly, which blocks reads and writes
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE (email);
```

Good:

```sql
-- First create a unique index concurrently (non-blocking)
CREATE UNIQUE INDEX CONCURRENTLY users_email_unique_idx ON users (email);

-- Then add the constraint using the existing index
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE USING INDEX users_email_unique_idx;
```
Tags: postgresql, drizzle, migrations
Split check constraints
When adding check constraints in migrations, split the operation into two steps to avoid blocking writes during the table scan:

1. First create the check constraint without validation
2. Then validate existing data in a separate migration

Bad:

```sql
-- In a single migration
ALTER TABLE users ADD CONSTRAINT ck_users_age_positive CHECK (age >= 0);
```

Good:

```sql
-- In first migration: add without validating
ALTER TABLE users ADD CONSTRAINT ck_users_age_positive CHECK (age >= 0) NOT VALID;

-- In second migration: validate existing data
ALTER TABLE users VALIDATE CONSTRAINT ck_users_age_positive;
```
Tags: postgresql, mysql, drizzle, migrations
Change column types safely in SQLAlchemy
When changing a column type that requires a table rewrite, follow these steps:

1. Create a new column with the desired type
2. Write to both columns during the transition period
3. Backfill data from the old column to the new column
4. Move reads from the old column to the new column
5. Stop writing to the old column
6. Drop the old column

Bad:

```python
def upgrade():
    # Directly changing a column type can cause table locks
    op.alter_column('users', 'some_column', type_=sa.String(50), existing_type=sa.Integer())

def downgrade():
    op.alter_column('users', 'some_column', type_=sa.Integer(), existing_type=sa.String(50))
```

Good:

Migration 1: Add new column

```python
def upgrade():
    # Adding a new column first
    op.add_column('users', sa.Column('some_column_new', sa.String(50)))

def downgrade():
    op.drop_column('users', 'some_column_new')
```

Migration 2: Complete the transition (after backfilling data)

```python
def upgrade():
    # After ensuring all data is migrated
    op.drop_column('users', 'some_column')
    op.alter_column('users', 'some_column_new', new_column_name='some_column')

def downgrade():
    op.alter_column('users', 'some_column', new_column_name='some_column_new')
    op.add_column('users', sa.Column('some_column', sa.Integer()))
```
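The backfill in step 3 is not shown in the migrations above; one way to do it is with raw SQL (for example via op.execute or a separate job), processed in batches so no single transaction rewrites the whole table. The cast and the batching by id range below are illustrative assumptions:

```sql
-- Backfill one id range at a time to keep transactions short
UPDATE users
SET some_column_new = some_column::varchar
WHERE id BETWEEN 1 AND 10000
  AND some_column_new IS NULL;
-- repeat for subsequent id ranges until all rows are copied
```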
Tags: postgresql, mysql, sqlalchemy, migrations, alembic
Limit non-unique indexes in SQLAlchemy
Limit non-unique indexes to a maximum of three columns in PostgreSQL databases.

Bad:

```python
def upgrade():
    with op.get_context().autocommit_block():
        op.create_index(
            'index_users_on_multiple_columns',
            'users',
            ['column_a', 'column_b', 'column_c', 'column_d'],
            postgresql_concurrently=True
        )
```

Good:

```python
def upgrade():
    # Limit to most selective columns for better performance
    with op.get_context().autocommit_block():
        op.create_index(
            'index_users_on_selective_columns',
            'users',
            ['column_d', 'column_b'],
            postgresql_concurrently=True
        )
```
Tags: postgresql, sqlalchemy, alembic, migrations
Only concurrent indexes in SQLAlchemy
When creating or dropping indexes in PostgreSQL using SQLAlchemy migrations, always use the postgresql_concurrently=True option within an autocommit block. This prevents blocking writes during index operations.

For upgrade():

Bad:

```python
def upgrade():
    op.create_index('idx_users_email', 'users', ['email'])
```

Good:

```python
def upgrade():
    with op.get_context().autocommit_block():
        op.create_index('idx_users_email', 'users', ['email'], postgresql_concurrently=True)
```

For downgrade():

Bad:

```python
def downgrade():
    op.drop_index('idx_users_email', 'users')
```

Good:

```python
def downgrade():
    with op.get_context().autocommit_block():
        op.drop_index('idx_users_email', 'users', postgresql_concurrently=True)
```
Tags: postgresql, sqlalchemy, migrations, alembic
Add check constraints safely in SQLAlchemy
When adding check constraints that could affect large tables, create the constraint with NOT VALID first to avoid blocking writes during the validation scan.

Bad:

```python
def upgrade():
    # Directly creating a check constraint blocks writes during table scan
    op.create_check_constraint(
        'ck_users_age_positive',
        'users',
        'age >= 0'
    )
```

Good:

Migration 1: Create check constraint without validation

```python
def upgrade():
    # Create the check constraint without validating existing data (non-blocking)
    op.create_check_constraint(
        'ck_users_age_positive',
        'users',
        'age >= 0',
        postgresql_not_valid=True
    )
```

Migration 2: Validate existing data

```python
def upgrade():
    op.execute('ALTER TABLE users VALIDATE CONSTRAINT ck_users_age_positive')
```
Tags: postgresql, sqlalchemy, alembic, migrations
Add foreign keys safely in SQLAlchemy
When adding foreign keys in SQLAlchemy migrations, split the operation into two steps to avoid blocking writes on both tables:

1. First create the foreign key constraint without validation
2. Then validate existing data in a separate migration

Bad:

```python
def upgrade():
    # Directly creating a foreign key constraint can block writes on both tables
    op.create_foreign_key(
        'fk_users_orders',
        'users',
        'orders',
        ['order_id'],
        ['id']
    )
```

Good:

Migration 1: Add foreign key without validation

```python
def upgrade():
    # Create the foreign key constraint without validating existing data
    op.create_foreign_key(
        'fk_users_orders',
        'users',
        'orders',
        ['order_id'],
        ['id'],
        postgresql_not_valid=True
    )
```

Migration 2: Validate existing data

```python
def upgrade():
    op.execute('ALTER TABLE users VALIDATE CONSTRAINT fk_users_orders')
```
Tags: postgresql, sqlalchemy, alembic, migrations
Add unique constraints safely in SQLAlchemy
When adding unique constraints that could affect large tables, create the unique index concurrently first to avoid blocking reads and writes during the migration.

Bad:

```python
def upgrade():
    # Directly creating a unique constraint can block reads and writes
    op.create_unique_constraint('users_email_unique', 'users', ['email'])
```

Good:

Migration 1: Create unique index concurrently

```python
def upgrade():
    # Create the unique index concurrently (non-blocking)
    op.create_index(
        'users_email_unique_idx',
        'users',
        ['email'],
        unique=True,
        postgresql_concurrently=True
    )
```

Migration 2: Add constraint using existing index

```python
def upgrade():
    # Add the unique constraint using the existing index
    op.create_unique_constraint(
        'users_email_unique',
        'users',
        ['email'],
        postgresql_using_index='users_email_unique_idx'
    )
```
Tags: postgresql, sqlalchemy, alembic, migrations