AmpScm.RepoDb.PostgreSql.BulkOperations
1.14.0-alphaSD02
dotnet add package AmpScm.RepoDb.PostgreSql.BulkOperations --version 1.14.0-alphaSD02
NuGet\Install-Package AmpScm.RepoDb.PostgreSql.BulkOperations -Version 1.14.0-alphaSD02
<PackageReference Include="AmpScm.RepoDb.PostgreSql.BulkOperations" Version="1.14.0-alphaSD02" />
paket add AmpScm.RepoDb.PostgreSql.BulkOperations --version 1.14.0-alphaSD02
#r "nuget: AmpScm.RepoDb.PostgreSql.BulkOperations, 1.14.0-alphaSD02"
// Install AmpScm.RepoDb.PostgreSql.BulkOperations as a Cake Addin
#addin nuget:?package=AmpScm.RepoDb.PostgreSql.BulkOperations&version=1.14.0-alphaSD02&prerelease

// Install AmpScm.RepoDb.PostgreSql.BulkOperations as a Cake Tool
#tool nuget:?package=AmpScm.RepoDb.PostgreSql.BulkOperations&version=1.14.0-alphaSD02&prerelease
RepoDb.PostgreSql.BulkOperations
An extension library that contains the official Bulk Operations of RepoDB for PostgreSQL.
Important Pages
- GitHub Home Page - to learn more about the core library.
- Website - docs, features, classes, references, releases and blogs.
Why use the Bulk Operations?
Basically, you do the normal Delete, Insert, Merge and Update operations when interacting with the database. Through these, the data is processed atomically, one statement per round-trip.
If you use the Batch Operations (the process of wrapping multiple single operations and executing them in one go), the round-trips between your application and your database are reduced but not completely eliminated, so they do not give you the maximum performance during CRUD operations.
With the Bulk Operations, all data is brought from your client application to your database in one go via the BinaryImport operation (a real bulk process). The data is then post-processed altogether on the database server to maximize the performance.
During the operation, the process bypasses the audits, logs, constraints and any other special database handling. This can improve the performance of your application by more than 90%, especially when processing large datasets.
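As a rough sketch of the difference (the row-by-row loop uses the core RepoDB `Insert` operation and is shown only for contrast; it is not how this library is meant to be used for large datasets):

```csharp
using (var connection = new NpgsqlConnection(ConnectionString))
{
    var customers = GetCustomers();

    // Row-by-row: one round-trip per item (slow for large datasets).
    foreach (var customer in customers)
    {
        connection.Insert(customer);
    }

    // Bulk: a single PostgreSQL COPY (BinaryImport) round-trip.
    var insertedRows = connection.BinaryBulkInsert<Customer>(customers);
}
```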
Core Features
- Special Arguments
- Async Methods
- Caveats
- BinaryBulkDelete
- BinaryBulkDeleteByKey
- BinaryBulkInsert
- BinaryBulkMerge
- BinaryBulkUpdate
Community Engagements
- GitHub - for any issues, requests and problems.
- StackOverflow - for any technical questions.
- Twitter - for the latest news.
- Gitter Chat - for direct and live Q&A.
License
Apache-2.0 - Copyright © 2020 - Michael Camara Pendon
Installation
At the Package Manager Console, write the command below.
> Install-Package AmpScm.RepoDb.PostgreSql.BulkOperations
Then call the bootstrapper once.
RepoDb.PostgreSqlBootstrap.Initialize();
Or, visit our installation page for more information.
Special Arguments
The arguments qualifiers, keepIdentity, identityBehavior, pseudoTableType and mergedCommandType are provided in most operations.

The qualifiers argument is used to define the qualifier fields to be used in the operation. It usually maps to the WHERE expression of the generated SQL statement. If not given, the primary key field is used.

The keepIdentity argument defines whether the identity property of the entity/model is kept during the operation.

The identityBehavior argument works like the keepIdentity argument and, in addition, defines whether the newly generated identity values are returned from the database.

The pseudoTableType argument defines whether a physical pseudo-table is created during the operation. By default, a temporary table is used.

The mergedCommandType argument defines whether the ON CONFLICT DO UPDATE clause is used instead of separate UPDATE/INSERT SQL commands during the operation.
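A hedged sketch of how these arguments might be combined in a single call. The enum type and member names (BulkImportIdentityBehavior, BulkImportPseudoTableType, BulkImportMergeCommandType and their values) are assumptions based on the library's naming conventions; check the actual API for the exact names:

```csharp
using (var connection = new NpgsqlConnection(ConnectionString))
{
    var customers = GetCustomers();
    var mergedRows = connection.BinaryBulkMerge("Customer",
        customers,
        // Match existing rows on these fields instead of the primary key.
        qualifiers: Field.From("LastName", "DateOfBirth"),
        // Return the newly generated identity values from the database.
        identityBehavior: BulkImportIdentityBehavior.ReturnIdentity,
        // Use a temporary pseudo-table (the default) rather than a physical one.
        pseudoTableType: BulkImportPseudoTableType.Temporary,
        // Prefer ON CONFLICT DO UPDATE over separate UPDATE/INSERT commands.
        mergedCommandType: BulkImportMergeCommandType.OnConflictDoUpdate);
}
```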
Identity Setting Alignment
Behind the scenes, the library enforces additional logic to ensure identity setting alignment. Basically, a new column named __RepoDb_OrderColumn is added to the pseudo-temporary table if the identity field is present on the underlying table. This column holds the actual index of the entity model within the IEnumerable<T> object.

During the bulk operation, a dedicated index (entity model index) value is passed to this column, ensuring that the value really equates to the index of the item in the IEnumerable<T> object. The resultsets of the pseudo-temporary table are ordered by this column prior to the actual merge into the underlying table.

For both the BinaryBulkInsert and BinaryBulkMerge operations, when the newly generated identity value is set back into the data model, the value of the __RepoDb_OrderColumn column is used to look up the proper index of the equating item in the IEnumerable<T> object; the compiled identity-setter function then assigns the identity value back to the identity property.
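A sketch of what this alignment means in practice, assuming the identity-return behavior described above is enabled (the BulkImportIdentityBehavior enum name is an assumption; check the actual API):

```csharp
using (var connection = new NpgsqlConnection(ConnectionString))
{
    var customers = GetCustomers().ToList();
    connection.BinaryBulkInsert<Customer>(customers,
        identityBehavior: BulkImportIdentityBehavior.ReturnIdentity);

    // Because of the __RepoDb_OrderColumn alignment, customers[0] holds the
    // identity generated for the first item, customers[1] for the second,
    // and so on - not an arbitrary value from the resultset.
    var firstGeneratedId = customers[0].Id;
}
```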
BatchSize
All the provided operations have a batchSize argument that lets you override the number of items wired to the server per batch during the operation. By default it is null, meaning all items are sent together in one go.

Use this argument if you wish to optimize the operation for your situation (i.e., number of columns, type/size of data, network latency).
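For example, a large dataset can be split into fixed-size batches instead of a single send (a minimal sketch; the batch size of 1,000 is an arbitrary illustration, not a recommendation):

```csharp
using (var connection = new NpgsqlConnection(ConnectionString))
{
    var customers = GetCustomers();

    // Send the items in batches of 1,000 rows rather than all at once.
    var insertedRows = connection.BinaryBulkInsert<Customer>(customers,
        batchSize: 1000);
}
```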
Async Methods
All the provided synchronous operations have equivalent asynchronous (Async) operations.
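For example, the asynchronous counterpart of BinaryBulkInsert is BinaryBulkInsertAsync (a minimal sketch; cancellation tokens and other overloads are omitted):

```csharp
using (var connection = new NpgsqlConnection(ConnectionString))
{
    var customers = GetCustomers();
    var insertedRows = await connection.BinaryBulkInsertAsync<Customer>(customers);
}
```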
BinaryBulkDelete
Delete the existing rows from the database by bulk. It returns the number of rows that have been deleted during the operation.
BinaryBulkDelete via DataEntities
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var deletedRows = connection.BinaryBulkDelete<Customer>(customers);
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var deletedRows = connection.BinaryBulkDelete<Customer>(customers, qualifiers: e => new { e.LastName, e.DateOfBirth });
}
Or via table-name.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var deletedRows = connection.BinaryBulkDelete("Customer", customers);
}
And with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var deletedRows = connection.BinaryBulkDelete("Customer", customers, qualifiers: Field.From("LastName", "DateOfBirth"));
}
BinaryBulkDelete via DataTable
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var deletedRows = connection.BinaryBulkDelete("Customer", table);
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var deletedRows = connection.BinaryBulkDelete("Customer", table, qualifiers: Field.From("LastName", "DateOfBirth"));
}
BinaryBulkDelete via DbDataReader
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var deletedRows = connection.BinaryBulkDelete("Customer", reader);
}
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var deletedRows = connection.BinaryBulkDelete("Customer", reader, qualifiers: Field.From("LastName", "DateOfBirth"));
}
}
BinaryBulkDeleteByKey
Delete the existing rows from the database by bulk via a list of primary keys. It returns the number of rows that have been deleted during the operation.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var primaryKeys = new [] { 1, 2, ..., 10045 };
var deletedRows = connection.BinaryBulkDeleteByKey(primaryKeys);
}
BinaryBulkInsert
Insert a list of entities into the database by bulk. It returns the number of rows that have been inserted into the database.
BinaryBulkInsert via DataEntities
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var insertedRows = connection.BinaryBulkInsert<Customer>(customers);
}
Or via table-name.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var insertedRows = connection.BinaryBulkInsert("Customer", customers);
}
BinaryBulkInsert via DataTable
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var insertedRows = connection.BinaryBulkInsert("Customer", table);
}
BinaryBulkInsert via DbDataReader
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var insertedRows = connection.BinaryBulkInsert("Customer", reader);
}
}
BinaryBulkMerge
Merge a list of entities into the database by bulk. A new row is inserted (if not present) and an existing row is updated (if present) based on the defined qualifiers. It returns the number of rows that have been inserted/updated in the database.
BinaryBulkMerge via DataEntities
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var mergedRows = connection.BinaryBulkMerge<Customer>(customers);
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var mergedRows = connection.BinaryBulkMerge<Customer>(customers, qualifiers: e => new { e.LastName, e.DateOfBirth });
}
Or via table-name.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var mergedRows = connection.BinaryBulkMerge("Customer", customers);
}
And with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var mergedRows = connection.BinaryBulkMerge("Customer", customers, qualifiers: Field.From("LastName", "DateOfBirth"));
}
BinaryBulkMerge via DataTable
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var mergedRows = connection.BinaryBulkMerge("Customer", table);
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var mergedRows = connection.BinaryBulkMerge("Customer", table, qualifiers: Field.From("LastName", "DateOfBirth"));
}
BinaryBulkMerge via DbDataReader
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var mergedRows = connection.BinaryBulkMerge("Customer", reader);
}
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var mergedRows = connection.BinaryBulkMerge("Customer", reader, qualifiers: Field.From("LastName", "DateOfBirth"));
}
}
BinaryBulkUpdate
Update the existing rows in the database by bulk. The affected rows are strongly bound to the values of the qualifier fields when calling the operation. It returns the number of rows that have been updated in the database.
BinaryBulkUpdate via DataEntities
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var rows = connection.BinaryBulkUpdate<Customer>(customers);
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var rows = connection.BinaryBulkUpdate<Customer>(customers, qualifiers: e => new { e.LastName, e.DateOfBirth });
}
Or via table-name.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var rows = connection.BinaryBulkUpdate("Customer", customers);
}
And with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var customers = GetCustomers();
var rows = connection.BinaryBulkUpdate("Customer", customers, qualifiers: Field.From("LastName", "DateOfBirth"));
}
BinaryBulkUpdate via DataTable
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var rows = connection.BinaryBulkUpdate("Customer", table);
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
var table = GetCustomersAsDataTable();
var rows = connection.BinaryBulkUpdate("Customer", table, qualifiers: Field.From("LastName", "DateOfBirth"));
}
BinaryBulkUpdate via DbDataReader
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var rows = connection.BinaryBulkUpdate("Customer", reader);
}
}
Or with qualifiers.
using (var connection = new NpgsqlConnection(ConnectionString))
{
using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
{
var rows = connection.BinaryBulkUpdate("Customer", reader, qualifiers: Field.From("LastName", "DateOfBirth"));
}
}
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 is compatible. |
.NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net461 was computed. net462 was computed. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen40 was computed. tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
.NETStandard 2.0
- AmpScm.RepoDb (>= 1.14.0-alphaSD02)
- AmpScm.RepoDb.PostgreSql (>= 1.14.0-alphaSD02)
- Microsoft.Bcl.AsyncInterfaces (>= 8.0.0)
- Microsoft.Bcl.HashCode (>= 1.1.1)
- Npgsql (>= 7.0.8)
- System.ComponentModel.Annotations (>= 5.0.0)
- System.Data.Common (>= 4.3.0)
- System.Dynamic.Runtime (>= 4.3.0)
- System.Linq.Async (>= 6.0.1)
- System.Memory (>= 4.5.5)
- System.Reflection.Emit (>= 4.7.0)
- System.Reflection.Emit.Lightweight (>= 4.7.0)
- System.Reflection.TypeExtensions (>= 4.7.0)
- System.Text.Json (>= 8.0.5)
- System.Text.RegularExpressions (>= 4.3.1)
net8.0
- AmpScm.RepoDb (>= 1.14.0-alphaSD02)
- AmpScm.RepoDb.PostgreSql (>= 1.14.0-alphaSD02)
- Microsoft.Bcl.AsyncInterfaces (>= 8.0.0)
- Npgsql (>= 7.0.8)
- System.ComponentModel.Annotations (>= 5.0.0)
- System.Data.Common (>= 4.3.0)
- System.Dynamic.Runtime (>= 4.3.0)
- System.Linq.Async (>= 6.0.1)
- System.Reflection.Emit (>= 4.7.0)
- System.Reflection.Emit.Lightweight (>= 4.7.0)
- System.Reflection.TypeExtensions (>= 4.7.0)
- System.Text.Json (>= 8.0.5)
- System.Text.RegularExpressions (>= 4.3.1)
net9.0
- AmpScm.RepoDb (>= 1.14.0-alphaSD02)
- AmpScm.RepoDb.PostgreSql (>= 1.14.0-alphaSD02)
- Microsoft.Bcl.AsyncInterfaces (>= 8.0.0)
- Npgsql (>= 7.0.8)
- System.ComponentModel.Annotations (>= 5.0.0)
- System.Data.Common (>= 4.3.0)
- System.Dynamic.Runtime (>= 4.3.0)
- System.Linq.Async (>= 6.0.1)
- System.Reflection.Emit (>= 4.7.0)
- System.Reflection.Emit.Lightweight (>= 4.7.0)
- System.Reflection.TypeExtensions (>= 4.7.0)
- System.Text.Json (>= 8.0.5)
- System.Text.RegularExpressions (>= 4.3.1)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
1.14.0-alphaSD02 | 35 | 11/22/2024 |