mvdmio.Database.PgSQL
0.22.2
A .NET library that wraps Dapper and Npgsql to simplify PostgreSQL database interactions. Provides managed connections, transaction handling, bulk operations via PostgreSQL's binary COPY protocol, and a migration framework.
Targets .NET 8.0, .NET 9.0, and .NET 10.0.
Installation
# Main library
dotnet add package mvdmio.Database.PgSQL
# CLI tool for migrations (optional)
dotnet tool install --global mvdmio.Database.PgSQL.Tool
Quick Start
using mvdmio.Database.PgSQL;
// Create a connection
await using var db = new DatabaseConnection("Host=localhost;Database=mydb;Username=postgres;Password=secret");
// Execute a query
var users = await db.Dapper.QueryAsync<User>(
"SELECT * FROM users WHERE active = :active",
new Dictionary<string, object?> { ["active"] = true }
);
// Execute a command
await db.Dapper.ExecuteAsync(
"INSERT INTO users (name, email) VALUES (:name, :email)",
new Dictionary<string, object?> { ["name"] = "Alice", ["email"] = "alice@example.com" }
);
Core Concepts
DatabaseConnection
DatabaseConnection is the main entry point. It manages the underlying Npgsql connection and exposes three connectors:
| Connector | Purpose |
|---|---|
| db.Dapper | SQL queries and commands via Dapper |
| db.Bulk | High-performance bulk operations (COPY, upsert) |
| db.Management | Database management (schema/table existence checks) |
Connections are managed automatically. Each operation opens a connection if needed and closes it afterward. If a connection is already open (e.g., within a transaction), it is reused.
// From a connection string (creates and owns the NpgsqlDataSource)
await using var db = new DatabaseConnection("Host=localhost;Database=mydb;...");
// From an existing NpgsqlDataSource (does not take ownership)
await using var db = new DatabaseConnection(existingDataSource);
DatabaseConnectionFactory
For applications that create multiple connections to the same database, use DatabaseConnectionFactory. It caches NpgsqlDataSource instances per connection string, avoiding the overhead of creating a new data source for each connection. It also auto-registers all built-in Dapper type handlers.
using mvdmio.Database.PgSQL;
await using var factory = new DatabaseConnectionFactory();
// Each call reuses the same underlying NpgsqlDataSource
await using var db1 = factory.ForConnectionString("Host=localhost;Database=mydb;...");
await using var db2 = factory.ForConnectionString("Host=localhost;Database=mydb;...");
// Optionally configure the NpgsqlDataSourceBuilder
await using var db3 = factory.ForConnectionString("Host=localhost;Database=mydb;...", builder =>
{
builder.UseJsonNet();
});
Dependency Injection
Register enum type handlers for all enums in your assemblies:
services.AddEnumDapperTypeHandlers(typeof(MyEnum).Assembly);
This scans the given assemblies for all enum types and registers them as string-based type handlers so enums are stored as text in PostgreSQL.
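Conceptually, the scan can be pictured as plain reflection over the assemblies' types (an illustrative sketch only, not the library's implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Collect every enum type declared in the given assemblies -- the same set
// that AddEnumDapperTypeHandlers would register string-based handlers for.
static IReadOnlyList<Type> FindEnums(params Assembly[] assemblies) =>
    assemblies.SelectMany(a => a.GetTypes())
              .Where(t => t.IsEnum)
              .ToList();

// Scanning the core library finds well-known enums such as DayOfWeek.
var enums = FindEnums(typeof(DayOfWeek).Assembly);
Console.WriteLine(enums.Contains(typeof(DayOfWeek)));  // True
```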
Querying with Dapper
All Dapper methods are available through db.Dapper. Parameters are passed as IDictionary<string, object?> and referenced in SQL with the :paramName syntax.
Basic Queries
// Query multiple rows
var users = await db.Dapper.QueryAsync<User>(
"SELECT * FROM users WHERE department = :dept",
new Dictionary<string, object?> { ["dept"] = "Engineering" }
);
// Query a single row
var user = await db.Dapper.QuerySingleOrDefaultAsync<User>(
"SELECT * FROM users WHERE id = :id",
new Dictionary<string, object?> { ["id"] = 42 }
);
// Query the first row
var latest = await db.Dapper.QueryFirstAsync<User>(
"SELECT * FROM users ORDER BY created_at DESC LIMIT 1"
);
// Execute a scalar query
var count = await db.Dapper.ExecuteScalarAsync<int>(
"SELECT COUNT(*) FROM users WHERE active = :active",
new Dictionary<string, object?> { ["active"] = true }
);
Multi-Mapping
Map a single query to multiple types (e.g., for joins):
var usersWithCompany = await db.Dapper.QueryAsync<User, Company, UserWithCompany>(
"""
SELECT u.*, c.*
FROM users u
JOIN companies c ON u.company_id = c.id
WHERE c.name = :companyName
""",
splitOn: "id",
map: (user, company) => new UserWithCompany { User = user, Company = company },
parameters: new Dictionary<string, object?> { ["companyName"] = "Acme" }
);
Multi-mapping supports up to 6 types.
Multiple Result Sets
var (users, companies) = await db.Dapper.QueryMultipleAsync<(IEnumerable<User>, IEnumerable<Company>)>(
"SELECT * FROM users; SELECT * FROM companies;",
reader => (reader.Read<User>(), reader.Read<Company>())
);
Command Timeout
All methods accept an optional commandTimeout parameter:
var result = await db.Dapper.QueryAsync<Report>(
"SELECT * FROM generate_large_report()",
commandTimeout: TimeSpan.FromMinutes(5)
);
Column Name Mapping
The library automatically maps snake_case database columns to PascalCase C# properties. A column named first_name maps to a property named FirstName without any configuration.
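As an illustration of the convention (this mirrors the mapping behavior; it is not the library's actual implementation), a snake_case-to-PascalCase conversion looks like:

```csharp
using System;
using System.Linq;

// Convert a snake_case column name into the PascalCase property name
// the mapper looks for, e.g. "first_name" -> "FirstName".
static string ToPascalCase(string column) =>
    string.Concat(
        column.Split('_', StringSplitOptions.RemoveEmptyEntries)
              .Select(part => char.ToUpperInvariant(part[0]) + part[1..].ToLowerInvariant()));

Console.WriteLine(ToPascalCase("first_name"));  // FirstName
Console.WriteLine(ToPascalCase("created_at"));  // CreatedAt
```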
Generated Repositories
Define a partial table class and let the package generate CRUD DTOs and a repository for you.
using mvdmio.Database.PgSQL;
using mvdmio.Database.PgSQL.Attributes;
[Table("public.users")]
public partial class UserTable
{
[PrimaryKey]
[Generated]
public long UserId { get; set; }
[Unique]
public string UserName { get; set; } = string.Empty;
[Column("first_name")]
public string FirstName { get; set; } = string.Empty;
}
This generates:
- UserData for read operations
- CreateUserCommand for inserts
- UpdateUserCommand for updates
- IUserRepository for mocking and unit testing
- UserRepository for CRUD operations
Example usage:
await using var db = new DatabaseConnection("Host=localhost;Database=mydb;...");
var repository = new UserRepository(db);
var created = await repository.CreateAsync(new CreateUserCommand
{
UserName = "alice",
FirstName = "Alice"
});
var user = await repository.GetByUserNameAsync("alice");
var updated = await repository.UpdateAsync(new UpdateUserCommand
{
UserId = created.UserId,
UserName = "alice-updated",
FirstName = "Alicia"
});
await repository.DeleteByUserIdAsync(updated.UserId);
Rules:
- Table classes should end with Table; a warning is produced otherwise.
- Table classes must be partial.
- Exactly one property must have [PrimaryKey].
- [Unique] generates GetBy...Async and DeleteBy...Async methods.
- [Generated] excludes columns from create/update payload generation where appropriate.
- [Column(...)] overrides the default snake_case column naming.
Generated repositories can also be registered automatically with dependency injection:
services.AddUserDatabase();
That generated extension method is emitted into the root namespace of the consuming assembly and registers DatabaseConnectionFactory plus all generated repository abstractions such as IUserRepository → UserRepository, so application services can depend on interfaces without reflection-based scanning.
Extra diagnostics now catch unsupported table property shapes and collisions with generated type names such as UserRepository or UserData in the same namespace.
The package ships its Roslyn analyzer/source generator as a Roslyn component so IDEs can discover generated repository types more reliably during design-time analysis.
Transactions
Automatic Transaction Management
The simplest way to use transactions is with InTransactionAsync. It commits on success and rolls back on exception:
await db.InTransactionAsync(async () =>
{
await db.Dapper.ExecuteAsync("INSERT INTO orders (user_id, total) VALUES (:userId, :total)",
new Dictionary<string, object?> { ["userId"] = 1, ["total"] = 99.99 });
await db.Dapper.ExecuteAsync("UPDATE inventory SET quantity = quantity - 1 WHERE product_id = :productId",
new Dictionary<string, object?> { ["productId"] = 42 });
});
// With a return value
var orderId = await db.InTransactionAsync(async () =>
{
return await db.Dapper.ExecuteScalarAsync<long>(
"INSERT INTO orders (user_id, total) VALUES (:userId, :total) RETURNING id",
new Dictionary<string, object?> { ["userId"] = 1, ["total"] = 99.99 }
);
});
Manual Transaction Management
For more control, manage transactions explicitly:
await db.BeginTransactionAsync();
try
{
await db.Dapper.ExecuteAsync("INSERT INTO ...", parameters);
await db.Dapper.ExecuteAsync("UPDATE ...", parameters);
await db.CommitTransactionAsync();
}
catch
{
await db.RollbackTransactionAsync();
throw;
}
Transactions support safe nesting. Calling BeginTransactionAsync when a transaction is already active returns false and reuses the existing transaction. Only the outermost transaction controls commit/rollback.
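The nesting behavior can be sketched with a minimal depth counter (illustrative only; the library tracks this state on the connection itself):

```csharp
using System;

var depth = 0;

// Begin returns true only when a new outermost transaction is started;
// nested calls reuse the existing transaction and return false.
bool Begin() => ++depth == 1;

// Commit takes effect only when the outermost scope completes.
bool Commit()
{
    if (depth == 0) throw new InvalidOperationException("No active transaction.");
    return --depth == 0;
}

Console.WriteLine(Begin());   // True  -- outermost transaction started
Console.WriteLine(Begin());   // False -- existing transaction reused
Console.WriteLine(Commit());  // False -- inner scope, nothing committed yet
Console.WriteLine(Commit());  // True  -- outermost scope commits
```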
Bulk Operations
The db.Bulk connector provides high-performance operations using PostgreSQL's binary COPY protocol. Column mappings define how C# objects are converted to database values.
Column Mappings
All bulk operations require a column-to-value mapping dictionary:
var columnMapping = new Dictionary<string, Func<Product, DbValue>>
{
{ "id", x => x.Id }, // Implicit conversion for common types
{ "name", x => x.Name },
{ "price", x => x.Price },
{ "created_at", x => x.CreatedAt }
};
DbValue has implicit conversions from string, bool, short, int, long, float, double, DateTime, DateTimeOffset, DateOnly, and TimeOnly. For explicit type control, use the constructor:
{ "metadata", x => new DbValue(x.Metadata, NpgsqlDbType.Jsonb) }
Bulk Copy
Insert large amounts of data efficiently using PostgreSQL's COPY protocol:
var products = GetProducts(); // IEnumerable<Product>
await db.Bulk.CopyAsync("products", products, columnMapping);
When you need an ephemeral staging table inside a transaction, copy into a temp table and reuse its name in later queries:
await db.InTransactionAsync(async () =>
{
var tempTableName = await db.Bulk.CopyToTempTableAsync(
products,
columnMapping,
tableName: "temp_products"
);
var matchingProducts = await db.Dapper.QueryAsync<Product>(
$"""
SELECT p.*
FROM products p
INNER JOIN {tempTableName} t ON t.id = p.id
"""
);
});
CopyToTempTableAsync creates a PostgreSQL temp table with columns derived from the provided DbValue types, loads the rows with binary COPY, and returns the table name. The temp table is created with ON COMMIT DROP, so use it within an active transaction or already-open connection.
For streaming scenarios, use a copy session:
var session = await db.Bulk.BeginCopyAsync<Product>("products", columnMapping);
foreach (var product in GetProductStream())
{
await session.WriteAsync(product);
}
await session.CompleteAsync();
Insert or Update (Upsert)
Performs a bulk upsert. Items that match an existing row (by conflict column) are updated; new items are inserted. Rows where no values actually changed are excluded from the result.
var results = await db.Bulk.InsertOrUpdateAsync(
"products",
onConflictColumn: "sku",
items: products,
columnValueMapping: columnMapping
);
foreach (var result in results)
{
if (result.IsInserted)
Console.WriteLine($"Inserted: {result.Item.Name}");
else if (result.IsUpdated)
Console.WriteLine($"Updated: {result.Item.Name}");
}
For composite unique constraints, pass multiple conflict columns:
var results = await db.Bulk.InsertOrUpdateAsync(
"order_items",
onConflictColumns: ["order_id", "product_id"],
items: orderItems,
columnValueMapping: orderItemMapping
);
For partial unique indexes, use UpsertConfiguration with a WHERE clause:
var config = new UpsertConfiguration
{
OnConflictColumns = ["email"],
OnConflictWhereClause = "deleted_at IS NULL" // Do not include the WHERE keyword
};
var results = await db.Bulk.InsertOrUpdateAsync("users", config, users, columnMapping);
Insert or Skip
Insert items that don't already exist, silently skipping conflicts. Returns only the newly inserted items.
var inserted = await db.Bulk.InsertOrSkipAsync(
"products",
onConflictColumn: "sku",
items: products,
columnValueMapping: columnMapping
);
Console.WriteLine($"Inserted {inserted.Count()} new products");
How Bulk Upsert/Skip Works
Under the hood, InsertOrUpdateAsync and InsertOrSkipAsync:
- Create a temporary table with the same structure as the target table.
- COPY all items into the temp table via binary COPY.
- Run INSERT INTO ... SELECT ... ON CONFLICT DO UPDATE SET ... (or DO NOTHING for skip) from the temp table into the target table.
- Return the affected rows.
For upserts, an IS DISTINCT FROM clause ensures that rows where no values actually changed are not counted as updates.
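The resulting statement has roughly this shape; BuildUpsertSql below is a hypothetical helper that sketches the pattern, not the library's actual SQL generation:

```csharp
using System;
using System.Linq;

// Sketch the INSERT ... ON CONFLICT statement built from the temp table.
// Non-conflict columns are updated from EXCLUDED (the incoming row).
static string BuildUpsertSql(string table, string tempTable, string[] columns, string[] conflictColumns)
{
    var cols = string.Join(", ", columns);
    var updates = string.Join(", ",
        columns.Except(conflictColumns).Select(c => $"{c} = EXCLUDED.{c}"));
    return $"INSERT INTO {table} ({cols}) SELECT {cols} FROM {tempTable} " +
           $"ON CONFLICT ({string.Join(", ", conflictColumns)}) DO UPDATE SET {updates}";
}

Console.WriteLine(BuildUpsertSql(
    "products", "temp_products",
    columns: new[] { "sku", "name", "price" },
    conflictColumns: new[] { "sku" }));
```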
Database Management
Check for the existence of schemas and tables:
bool exists = await db.Management.TableExistsAsync("public", "users");
bool schemaExists = await db.Management.SchemaExistsAsync("analytics");
Schema Extraction
Extract the full database schema as an idempotent SQL script, or query individual schema objects programmatically. All methods exclude system schemas (pg_catalog, information_schema, pg_toast) and the mvdmio migration tracking schema.
Generate a Schema Script
var script = await db.Management.GenerateSchemaScriptAsync();
File.WriteAllText("schema.sql", script);
The generated script includes extensions, schemas, enum/composite/domain types, sequences, tables, constraints, indexes, functions/procedures, triggers, and views. The script header includes the current migration version (identifier and name) if migrations have been executed. All statements use idempotent syntax (IF NOT EXISTS, CREATE OR REPLACE, etc.) so the script can be run repeatedly against the same database without errors.
Query Individual Schema Objects
The db.Management.Schema property exposes methods for querying individual schema components:
var schemas = await db.Management.Schema.GetUserSchemasAsync();
var tables = await db.Management.Schema.GetTablesAsync();
var enums = await db.Management.Schema.GetEnumTypesAsync();
var sequences = await db.Management.Schema.GetSequencesAsync();
var constraints = await db.Management.Schema.GetConstraintsAsync();
var indexes = await db.Management.Schema.GetIndexesAsync();
var functions = await db.Management.Schema.GetFunctionsAsync();
var triggers = await db.Management.Schema.GetTriggersAsync();
var views = await db.Management.Schema.GetViewsAsync();
var extensions = await db.Management.Schema.GetExtensionsAsync();
var compositeTypes = await db.Management.Schema.GetCompositeTypesAsync();
var domainTypes = await db.Management.Schema.GetDomainTypesAsync();
var currentMigration = await db.Management.Schema.GetCurrentMigrationVersionAsync();
Each method returns strongly-typed objects. For example, GetTablesAsync() returns TableInfo objects with full column details including data types, nullability, defaults, and identity generation.
PostgreSQL LISTEN/NOTIFY
Wait for notifications on a PostgreSQL channel:
// Wait indefinitely
await db.WaitAsync("order_updates", cancellationToken);
// Wait with a timeout
await db.WaitAsync("order_updates", TimeSpan.FromSeconds(30), cancellationToken);
A separate dedicated connection is used for notifications, so they don't interfere with regular queries.
Migrations
Creating Migrations
Migrations implement IDbMigration. Name the class _{YYYYMMddHHmm}_{MigrationName} — the Identifier and Name properties are automatically extracted from the class name at runtime, so no manual property declarations are needed:
using mvdmio.Database.PgSQL;
using mvdmio.Database.PgSQL.Migrations.Interfaces;
// Class name drives both Identifier (202602161430) and Name ("AddUsersTable")
public class _202602161430_AddUsersTable : IDbMigration
{
public async Task UpAsync(DatabaseConnection db)
{
await db.Dapper.ExecuteAsync(
"""
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
"""
);
}
}
A Roslyn analyzer (PGSQL0001) will emit a warning if the class name does not match the required format _{YYYYMMddHHmm}_{MigrationName}. The solution keeps the analyzer test dependencies aligned with Roslyn 4.14 so builds stay restore-warning free. You can still override Identifier and Name manually if needed.
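The extraction can be pictured as simple string parsing (a simplified sketch of the convention, not the library's parser):

```csharp
using System;

// Split "_202602161430_AddUsersTable" into the Identifier and Name parts
// that IDbMigration derives from the class name at runtime.
static (long Identifier, string Name) ParseMigrationClassName(string className)
{
    var parts = className.TrimStart('_').Split('_', 2);
    return (long.Parse(parts[0]), parts[1]);
}

var (id, name) = ParseMigrationClassName("_202602161430_AddUsersTable");
Console.WriteLine($"{id} / {name}");  // 202602161430 / AddUsersTable
```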
Running Migrations Programmatically
using mvdmio.Database.PgSQL.Migrations;
await using var db = new DatabaseConnection("Host=localhost;Database=mydb;...");
var migrator = new DatabaseMigrator(db, typeof(_202602161430_AddUsersTable).Assembly);
// Run all pending migrations
await migrator.MigrateDatabaseToLatestAsync();
// Or migrate to a specific version
await migrator.MigrateDatabaseToAsync(202602161430);
// Check which migrations have been executed
var executed = await migrator.RetrieveAlreadyExecutedMigrationsAsync();
Each migration runs in its own transaction. If a migration fails, its transaction is rolled back and a MigrationException is thrown. Executed migrations are tracked in the mvdmio.migrations table (automatically created).
When determining what is still pending, the migrator uses the highest recorded migration identifier as the cutoff. If the database contains a partial migration history, any missing records at or below that highest identifier are treated as already covered, and only migrations with newer identifiers are executed.
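That cutoff rule can be sketched as a filter over migration identifiers (illustrative only, treating identifiers as plain longs):

```csharp
using System;
using System.Linq;

// Only migrations above the highest recorded identifier are pending;
// gaps at or below that identifier are treated as already covered.
static long[] Pending(long[] available, long[] executed)
{
    var cutoff = executed.Length == 0 ? 0L : executed.Max();
    return available.Where(id => id > cutoff).OrderBy(id => id).ToArray();
}

var available = new long[] { 202601010900, 202601151200, 202602161430 };
var executed  = new long[] { 202601151200 };  // partial history: earliest record missing

// 202601010900 sits below the cutoff, so it is not re-run.
Console.WriteLine(string.Join(", ", Pending(available, executed)));  // 202602161430
```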
CLI Tool
The mvdmio.Database.PgSQL.Tool package provides a db CLI tool for managing migrations and extracting schema:
# Install the tool
dotnet tool install --global mvdmio.Database.PgSQL.Tool
# Initialize the configuration file
db init
# Create a new migration (generates a timestamped file)
db migration create AddUsersTable
# Run all pending migrations (uses the first configured environment)
db migrate latest
# Run migrations against a specific environment
db migrate latest --environment prod
db migrate latest -e acc
# Run migrations up to a specific version
db migrate to 202602161430
# Override connection string directly
db migrate latest --connection-string "Host=localhost;Database=mydb;..."
# Pull the current database schema into a schema.<env>.sql file
db pull
# Pull all environment schemas and remove obsolete migration files
db cleanup
# Pull from a specific environment
db pull --environment prod
db pull -e acc
# Pull using an explicit connection string (produces schema.sql)
db pull --connection-string "Host=localhost;Database=mydb;..."
Configuration
Create a .mvdmio-migrations.yml file in your project root:
project: src/MyApp.Data # Path to the project containing migrations
migrationsDirectory: Migrations # Directory for migration files (default: Migrations)
schemasDirectory: Schemas # Directory for schema files from db pull (default: Schemas)
connectionStrings: # The first entry is used when no --environment flag is passed
local: Host=localhost;Database=mydb;Username=postgres;Password=secret
acc: Host=acc-server;Database=mydb;Username=postgres;Password=secret
prod: Host=prod-server;Database=mydb;Username=postgres;Password=secret
Migrations are tracked in the mvdmio.migrations table (automatically created).
Multi-Environment Support
The connectionStrings section maps environment names to connection strings. Use the --environment (or -e) flag to select which environment to run against:
# Use the first configured environment (default behavior)
db migrate latest
# Target a specific environment
db migrate latest --environment prod
db migrate to 202602161430 -e acc
# Override the connection string entirely (ignores environments)
db migrate latest --connection-string "Host=custom;Database=mydb;..."
Connection string resolution priority:
1. --connection-string flag (explicit override)
2. --environment / -e flag (looks up the value in connectionStrings from the config file)
3. First entry in connectionStrings (fallback)
4. Error if none of the above resolve
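The priority order can be sketched as a small resolver (a hypothetical Resolve helper illustrating the rules, not the tool's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Resolve a connection string following the documented priority order.
static string Resolve(string? explicitConnectionString, string? environment,
                      IReadOnlyList<KeyValuePair<string, string>> configured)
{
    if (explicitConnectionString is not null)
        return explicitConnectionString;                                 // 1. --connection-string
    if (environment is not null)
        return configured.First(e => e.Key == environment).Value;        // 2. --environment lookup
    if (configured.Count > 0)
        return configured[0].Value;                                      // 3. first configured entry
    throw new InvalidOperationException("No connection string configured."); // 4. error
}

var config = new List<KeyValuePair<string, string>>
{
    new("local", "Host=localhost;Database=mydb"),
    new("prod",  "Host=prod-server;Database=mydb")
};

Console.WriteLine(Resolve(null, "prod", config));  // Host=prod-server;Database=mydb
Console.WriteLine(Resolve(null, null, config));    // Host=localhost;Database=mydb
```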
Schema-First Migrations
When migrating an empty database (one with no migrations applied), the migrator can automatically apply an embedded schema file instead of running all migrations individually. This significantly speeds up provisioning new database instances.
Automatic embedding:
The NuGet package includes a .props file that automatically embeds any .sql files in your Schemas/ folder as assembly resources. No manual configuration is needed: place schema files in the Schemas/ folder and they are embedded at build time.
How it works:
- When db migrate latest or db migrate to <version> is run, the migrator checks whether the database is empty (no migrations applied).
- If empty, it scans the migrations assembly for embedded schema resources. For environment-based runs it looks for schema.<environment>.sql (case-insensitive, e.g. schema.local.sql), falling back to schema.sql if no environment-specific file is found. If no environment is specified, it picks the first available schema.
- If an embedded schema is found, it is applied instead of running individual migrations.
- The migration version from the schema file header is recorded in the migrations table.
- Any migrations newer than the schema version are then applied normally.
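The resource selection can be sketched like this (an illustrative PickSchema helper, not the migrator's actual code):

```csharp
using System;
using System.Linq;

// Prefer "schema.<env>.sql" (case-insensitive), fall back to "schema.sql";
// without an environment, take the first available schema resource.
static string? PickSchema(string[] resources, string? environment)
{
    string? Find(string name) => resources.FirstOrDefault(
        r => r.Equals(name, StringComparison.OrdinalIgnoreCase));

    if (environment is not null)
        return Find($"schema.{environment}.sql") ?? Find("schema.sql");
    return resources.FirstOrDefault();
}

var resources = new[] { "Schema.Local.sql", "schema.sql" };
Console.WriteLine(PickSchema(resources, "local"));  // Schema.Local.sql (case-insensitive match)
Console.WriteLine(PickSchema(resources, "acc"));    // schema.sql (fallback)
```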
Generating schema files:
Use db pull to generate a schema file from an existing database:
# Pull schema from the default environment
db pull
# Pull from a specific environment
db pull --environment prod
The generated schema file is written to the Schemas/ directory and includes a header comment with the current migration version:
--
-- PostgreSQL database schema
-- Generated at 2026-02-18 10:30:45 UTC
-- Migration version: 202602161430 (AddUsersTable)
--
CREATE TABLE users (...);
-- ... rest of schema
On the next build, this file is automatically embedded as an assembly resource.
The CLI implementation now keeps db pull thin by delegating configuration lookup, schema export, and table-definition file generation to dedicated services, which also makes those seams easier to unit test.
The migrate commands now follow the same pattern: command parsing stays thin while shared services handle project loading, schema-resource inspection, and migration execution for both db migrate latest and db migrate to.
The table-definition scaffolder used by db pull is also now split into smaller internal helpers so naming, constraint parsing, and generated-file rendering can evolve independently without changing generated output.
Cleaning up obsolete migrations
Once every environment has advanced beyond a set of old migrations, you can remove those migration source files:
db cleanup
db cleanup pulls fresh schema files for every configured environment, parses the migration version from each schema header, finds the lowest migration version still present anywhere, and deletes migration .cs files older than that version. If any environment has no recorded migration version, cleanup is skipped so no required migrations are removed.
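The deletion rule can be sketched as follows (illustrative only; versions are treated as plain longs and migration files are represented by their identifiers):

```csharp
using System;
using System.Linq;

// A migration is obsolete once every environment has advanced past it.
// If any environment has no recorded version, nothing is deleted.
static long[] ObsoleteMigrations(long[] migrationIds, long?[] environmentVersions)
{
    if (environmentVersions.Length == 0 || environmentVersions.Any(v => v is null))
        return Array.Empty<long>();
    var lowest = environmentVersions.Min(v => v!.Value);
    return migrationIds.Where(id => id < lowest).ToArray();
}

var migrations = new long[] { 202601010900, 202601151200, 202602161430 };
var versions   = new long?[] { 202602161430, 202601151200 };  // one version per environment

// The lowest version anywhere is 202601151200, so only the first file is obsolete.
Console.WriteLine(string.Join(", ", ObsoleteMigrations(migrations, versions)));  // 202601010900
```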
Workflow example:
# On your production database (after migrations are applied)
db pull --environment prod
# Creates Schemas/schema.prod.sql
# Build the project to embed the schema
dotnet build
# On a new development database
db migrate latest --environment local
# Detects empty database, discovers embedded schema.local.sql,
# applies it, then runs any migrations newer than the schema version
Programmatic usage:
using mvdmio.Database.PgSQL.Migrations;
await using var db = new DatabaseConnection(connectionString);
// Specify an environment to select the appropriate embedded schema
// Looks for schema.local.sql, falls back to schema.sql
var migrator = new DatabaseMigrator(
db,
environment: "local",
typeof(_202602161430_AddUsersTable).Assembly
);
// This will:
// 1. Check if the database is empty
// 2. If empty, discover and apply the embedded schema file
// 3. Record the migration version from the schema file header
// 4. Run any remaining migrations after the schema version
await migrator.MigrateDatabaseToLatestAsync();
// Or migrate to a specific version (schema is only applied if its version <= target)
await migrator.MigrateDatabaseToAsync(202602161430);
Without an environment, the migrator picks the first available embedded schema:
// Uses first available embedded schema, or runs migrations if none found
var migrator = new DatabaseMigrator(db, typeof(_202602161430_AddUsersTable).Assembly);
await migrator.MigrateDatabaseToLatestAsync();
The db migration create command scaffolds a migration file with the correct namespace and timestamp:
// Generated file: _202602161430_AddUsersTable.cs
using mvdmio.Database.PgSQL;
using mvdmio.Database.PgSQL.Migrations.Interfaces;
namespace MyApp.Data.Migrations;
public class _202602161430_AddUsersTable : IDbMigration
{
public async Task UpAsync(DatabaseConnection db)
{
await db.Dapper.ExecuteAsync(
"""
-- TODO: Write your migration SQL here
"""
);
}
}
Type Handlers
The library includes Dapper type handlers that are registered automatically when using DatabaseConnectionFactory:
| C# Type | PostgreSQL Type | Notes |
|---|---|---|
| DateOnly | DATE | Workaround for Dapper not natively supporting DateOnly |
| TimeOnly | TIME | Workaround for Dapper not natively supporting TimeOnly |
| Uri | TEXT | Stored as an absolute URI string |
| Dictionary<string, string> | JSONB | Serialized/deserialized via System.Text.Json |
Custom Type Handlers
Register your own types for JSONB storage:
using Dapper;
using mvdmio.Database.PgSQL.Dapper.TypeHandlers.Base;
// Store any type as JSONB
SqlMapper.AddTypeHandler(new JsonbTypeHandler<MyComplexType>());
// Store enums as strings
SqlMapper.AddTypeHandler(new EnumAsStringTypeHandler<OrderStatus>());
Typed Query Parameters
When you need to specify an explicit PostgreSQL type for a query parameter:
using mvdmio.Database.PgSQL.Dapper.QueryParameters;
var parameters = new Dictionary<string, object?>
{
["data"] = new TypedQueryParameter(jsonString, NpgsqlDbType.Jsonb)
};
await db.Dapper.ExecuteAsync("INSERT INTO events (data) VALUES (:data)", parameters);
Error Handling
The library provides specific exception types:
- QueryException -- thrown when a SQL query fails. Contains the Sql property with the failed query text.
- MigrationException -- thrown when a migration fails. Contains the Migration property with the migration that failed.
- DatabaseException -- base exception for all database-related errors.
try
{
await db.Dapper.ExecuteAsync("INVALID SQL");
}
catch (QueryException ex)
{
Console.WriteLine($"Query failed: {ex.Sql}");
Console.WriteLine($"Cause: {ex.InnerException?.Message}");
}
License
MIT -- see LICENSE for details.
Dependencies (identical for net8.0, net9.0, and net10.0):
- Dapper (>= 2.1.66)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 10.0.3)
- Npgsql (>= 10.0.1)
- Portable.System.DateTimeOnly (>= 9.0.1)