EFCore.BulkExtensions 6.7.1

There is a newer version of this package available.
See the version list below for details.
dotnet add package EFCore.BulkExtensions --version 6.7.1
NuGet\Install-Package EFCore.BulkExtensions -Version 6.7.1
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="EFCore.BulkExtensions" Version="6.7.1" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add EFCore.BulkExtensions --version 6.7.1
#r "nuget: EFCore.BulkExtensions, 6.7.1"
The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or into the source code of the script to reference the package.
// Install EFCore.BulkExtensions as a Cake Addin
#addin nuget:?package=EFCore.BulkExtensions&version=6.7.1

// Install EFCore.BulkExtensions as a Cake Tool
#tool nuget:?package=EFCore.BulkExtensions&version=6.7.1

EFCore.BulkExtensions

EntityFrameworkCore extensions:
- Bulk operations: Insert, Update, Delete, Read, Upsert, Sync, SaveChanges (extremely fast)
- Batch operations: Delete, Update (to be deprecated, since EF7 has native ExecuteUpdate/ExecuteDelete), and Truncate

The library is lightweight and very efficient, covering the most commonly used CRUD operations. It was selected among the top 20 EF Core extensions recommended by Microsoft. The latest version uses EF Core 7 and supports all four major databases: SQL Server, PostgreSQL, MySQL, and SQLite.

License

  • BulkExtensions is licensed under the Dual License v1.0, which you must buy if you do not meet the criteria for free usage (the license also includes active support).

Support

If you find this project useful, you can mark it by leaving a GitHub Star ⭐. Open source needs funding, so even for free usage please consider making a donation via the "Buy Me A Coffee" button.

Contributing

Please read CONTRIBUTING for details on the code of conduct and the process for submitting pull requests. When opening issues, write a detailed explanation of the problem or feature request with a reproducible example.

Description

Supported databases:
- SQL Server (or SqlAzure): under the hood uses SqlBulkCopy for Insert; Update/Delete = BulkInsert + raw SQL MERGE.
- PostgreSQL (9.5+): uses COPY BINARY combined with ON CONFLICT for Update.
- MySQL (8+): uses MySqlBulkCopy combined with ON DUPLICATE for Update.
- SQLite: has no copy tool; the library instead uses plain SQL combined with UPSERT.

Bulk tests cannot use UseInMemoryDb because the InMemory provider does not support relational-specific methods. Test options are instead SQL Server (Developer or Express), LocalDb (if installed alongside the Developer edition), or for the other adapters PostgreSQL/MySQL/SQLite.

Installation

Available on <a href="https://www.nuget.org/packages/EFCore.BulkExtensions/"><img src="https://buildstats.info/nuget/EFCore.BulkExtensions" /></a>

That is the main NuGet package for all databases; there are also provider-specific packages for those who need smaller ones. Only a single provider-specific package can be installed in a project; if more providers are needed, use the main one with all of them. Package Manager Console command for installation: Install-Package EFCore.BulkExtensions. Provider-specific packages have an adapter suffix: MainNuget + .SqlServer/.PostgreSql/.MySql/.Sqlite. The assembly is strong-named and signed with a key.

| Nuget | Target          | Used EF v.  | For projects targeting          |
| ----- | --------------- | ----------- | ------------------------------- |
| 7.x   | Net 6.0         | EF Core 7.0 | Net 7.0+ or 6.0+                |
| 6.x   | Net 6.0         | EF Core 6.0 | Net 6.0+                        |
| 5.x   | NetStandard 2.1 | EF Core 5.0 | Net 5.0+                        |
| 3.x   | NetStandard 2.0 | EF Core 3.n | NetCore(3.0+) or NetFrm(4.6.1+) MoreInfo |
| 2.x   | NetStandard 2.0 | EF Core 2.n | NetCore(2.0+) or NetFrm(4.6.1+) |
| 1.x   | NetStandard 1.4 | EF Core 1.0 | NetCore(1.0+)                   |

Support follows the official .NET lifecycle: currently v7 is the latest and v6 is the LTS.

Usage

It's pretty simple and straightforward. Bulk extensions are defined on DbContext and are used with an entities list (both regular and async methods are supported):

context.BulkInsert(entities);                 context.BulkInsertAsync(entities);
context.BulkInsertOrUpdate(entities);         context.BulkInsertOrUpdateAsync(entities);         // Upsert
context.BulkInsertOrUpdateOrDelete(entities); context.BulkInsertOrUpdateOrDeleteAsync(entities); // Sync
context.BulkUpdate(entities);                 context.BulkUpdateAsync(entities);
context.BulkDelete(entities);                 context.BulkDeleteAsync(entities);
context.BulkRead(entities);                   context.BulkReadAsync(entities);
context.BulkSaveChanges();                    context.BulkSaveChangesAsync();

- SQLite requires the package SQLitePCLRaw.bundle_e_sqlite3 and a call to SQLitePCL.Batteries.Init().
- MySQL: to run the tests on it for the first time, execute the SQL command: SET GLOBAL local_infile = true;
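For SQLite, the battery initialization must run once before the provider is used. A minimal sketch, assuming a hypothetical `ItemContext` DbContext configured with the SQLite provider:

```csharp
// One-time native library init required by the SQLite provider
// (the SQLitePCLRaw.bundle_e_sqlite3 package must be referenced).
SQLitePCL.Batteries.Init();

// 'ItemContext' and 'entities' are placeholders mirroring the usage lines above
using var context = new ItemContext();
context.BulkInsert(entities);
```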

Batch extensions are defined on IQueryable DbSet and can be used as in the following code segment. They are executed as pure SQL; no check is done on whether some entities were previously loaded into memory and are being tracked. (updateColumns is an optional parameter: a list of property names added explicitly when a property needs to be updated to its default value.) Info about lock escalation in SQL Server, with a batch-iteration example as a solution, is at the bottom of the code segment.

// Delete
context.Items.Where(a => a.ItemId >  500).BatchDelete();
context.Items.Where(a => a.ItemId >  500).BatchDeleteAsync();

// Update (using Expression arg.) supports Increment/Decrement 
context.Items.Where(a => a.ItemId <= 500).BatchUpdate(a => new Item { Quantity = a.Quantity + 100 });
context.Items.Where(a => a.ItemId <= 500).BatchUpdateAsync(a => new Item { Quantity = a.Quantity + 100});
  // can be as value '+100' or as variable '+incrementStep' (int incrementStep = 100;)
  
// Update (via simple object)
context.Items.Where(a => a.ItemId <= 500).BatchUpdate(new Item { Description = "Updated" });
context.Items.Where(a => a.ItemId <= 500).BatchUpdateAsync(new Item { Description = "Updated" });
// Update (via simple object) - requires additional Argument for setting to Property default value
var updateCols = new List<string> { nameof(Item.Quantity) }; // Update 'Quantity' to default value ('0')
var q = context.Items.Where(a => a.ItemId <= 500);
int affected = q.BatchUpdate(new Item { Description = "Updated" }, updateCols); // returns number of rows affected

// Batch iteration (useful in some cases to avoid lock escalation)
int chunkSize = 4000; // keep batches below SQL Server's lock-escalation threshold (~5000 locks)
int rowsAffected;
do {
    rowsAffected = query.Take(chunkSize).BatchDelete();
} while (rowsAffected >= chunkSize);

// Truncate
context.Truncate<Entity>();
context.TruncateAsync<Entity>();

Performances

Following are performance numbers (in seconds):

  • For SQL Server (v. 2019):

| Ops\Rows | EF 100K | Bulk 100K | EF 1 MIL. | Bulk 1 MIL. |
| -------- | ------- | --------- | --------- | ----------- |
| Insert   | 11 s    | 3 s       | 60 s      | 15 s        |
| Update   | 8 s     | 4 s       | 84 s      | 27 s        |
| Delete   | 50 s    | 3 s       | 5340 s    | 15 s        |

The test table has 6 columns (Guid, string ×2, int, decimal?, DateTime); all were inserted and 2 were updated. The test was done locally on this configuration: INTEL i7-10510U CPU 2.30GHz, DDR3 16 GB, SSD SAMSUNG 512 GB. For small data sets there is an overhead, since most bulk ops need to create a temp table and drop it after finishing, so a good rule of thumb is to use bulk ops for sets greater than 1000 rows.

Bulk info

If Windows Authentication is used, the connection string should contain Trusted_Connection=True;, because SQL credentials are required to stay in the connection.

When used directly, each of these operations is a separate transaction and is automatically committed. If multiple operations are needed in a single procedure, an explicit transaction should be used, for example:

using (var transaction = context.Database.BeginTransaction())
{
    context.BulkInsert(entities1List);
    context.BulkInsert(entities2List);
    transaction.Commit();
}

The BulkInsertOrUpdate method can be used when both operations are needed in one database connection. It performs an Update when the PK (PrimaryKey) is matched, otherwise an Insert.

BulkInsertOrUpdateOrDelete effectively synchronizes table rows with the input data: rows in the database that are not found in the list will be deleted. A partial sync can be done on a table subset using an expression set on the config with the method bulkConfig.SetSynchronizeFilter<Item>(a => a.Quantity > 0);. This is not supported for SQLite (SQLite has only an UPSERT statement) nor currently for PostgreSQL. The way to achieve sync functionality there is to Select or BulkRead existing data from the DB, split the list into sublists, and call BulkInsertOrUpdate and BulkDelete separately.
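For providers without BulkInsertOrUpdateOrDelete, that manual sync could be sketched as follows. This is a hedged sketch, assuming the `Item` entity with PK `ItemId` used elsewhere in this document; loading the whole table into memory is only reasonable for moderately sized tables:

```csharp
// Upsert everything from the input list, then delete DB rows absent from it.
var incomingIds = entities.Select(e => e.ItemId).ToHashSet();
var existing = context.Items.AsNoTracking().ToList(); // or BulkRead, per the note above
var toDelete = existing.Where(i => !incomingIds.Contains(i.ItemId)).ToList();

using var transaction = context.Database.BeginTransaction();
context.BulkInsertOrUpdate(entities);
context.BulkDelete(toDelete);
transaction.Commit();
```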

BulkRead (SELECT and JOIN done in SQL). Useful when needing to select from a big list based on the unique properties/columns specified in config UpdateByProperties.

// instead of Where + Contains (SQL IN), which will time out for lists with over around 40K records
var entities = context.Items.Where(a => itemsNames.Contains(a.Name)).AsNoTracking().ToList(); // SQL IN
// or JOIN in Memory that loads entire table
var entities = context.Items.Join(itemsNames, a => a.Name, p => p, (a, p) => a).AsNoTracking().ToList();

// USE
var items = itemsNames.Select(a => new Item { Name = a }).ToList(); // Items list with only Name set
var bulkConfig = new BulkConfig { UpdateByProperties = new List<string> { nameof(Item.Name) } };
context.BulkRead(items, bulkConfig); // Items list will be loaded from Db with data(other properties)

An example of a special use case: child entities need to be BulkRead after BulkReading the parent list.
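One hedged way to sketch that parent-then-children pattern, reusing `Item`/`ItemHistory` from the examples in this document together with the `ReplaceReadEntities` config; matching children on the non-unique FK column is an assumption here:

```csharp
// 1) Read parents from the DB by their unique Name
var parents = itemsNames.Select(n => new Item { Name = n }).ToList();
context.BulkRead(parents, new BulkConfig
{
    UpdateByProperties = new List<string> { nameof(Item.Name) }
});

// 2) Read children matched on the FK column; ReplaceReadEntities repopulates
//    the list with every matching row instead of updating entries in place
var children = parents.Select(p => new ItemHistory { ItemId = p.ItemId }).ToList();
context.BulkRead(children, new BulkConfig
{
    UpdateByProperties = new List<string> { nameof(ItemHistory.ItemId) },
    ReplaceReadEntities = true
});
```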

BulkSaveChanges uses the Change Tracker to find all modified (CUD) entities and calls the proper bulk operation for each table. Because it needs tracking it is slower than pure bulk ops, but still much faster than regular SaveChanges. With the config OnSaveChangesSetFK, setting FKs can be controlled depending on whether PKs are generated in the DB or in memory. Support for this method was added in version 6 of the library. Before calling this method, newly created entities should be added with AddRange:

context.Items.AddRange(newEntities); // if newEntities is parent list it can have child sublists
context.BulkSaveChanges();

A practical general usage pattern is to override the regular SaveChanges and, if the list of modified entity entries is greater than, say, 1000, redirect to the bulk version.
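A hedged sketch of that override; the 1000-entry threshold, the context name, and the return-value handling are illustrative, not part of the library:

```csharp
using System.Linq;
using EFCore.BulkExtensions;
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public override int SaveChanges()
    {
        // Count tracked entries with pending Create/Update/Delete changes
        int changed = ChangeTracker.Entries().Count(e =>
            e.State is EntityState.Added or EntityState.Modified or EntityState.Deleted);

        if (changed > 1000)
        {
            this.BulkSaveChanges(); // redirect large change sets to the bulk version
            return changed;         // approximation: BulkSaveChanges itself returns void
        }
        return base.SaveChanges();
    }
}
```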

Note: bulk ops have an optional argument Type type that can be set to the type of the entity when the list contains dynamic runtime objects or objects inherited from the entity class.

BulkConfig arguments

Bulk methods can take an optional argument BulkConfig with the following properties (bool, int, object, List<string>):

PROPERTY : DEFAULTvalue
----------------------------------------------------------------------------------------------
PreserveInsertOrder: true,                    PropertiesToInclude: null,
SetOutputIdentity: false,                     PropertiesToIncludeOnCompare: null,
BatchSize: 2000,                              PropertiesToIncludeOnUpdate: null,
NotifyAfter: null,                            PropertiesToExclude: null,
BulkCopyTimeout: null,                        PropertiesToExcludeOnCompare: null,
EnableStreaming: false,                       PropertiesToExcludeOnUpdate: null,
UseTempDB: false,                             UpdateByProperties: null,
UniqueTableNameTempDb: true,                  EnableShadowProperties: false,
CustomDestinationTableName: null,             IncludeGraph: false,
CustomSourceTableName: null,                  OmitClauseExistsExcept: false,
CustomSourceDestinationMappingColumns: null,  DoNotUpdateIfTimeStampChanged: false,
TrackingEntities: false,                      SRID: 4326,
WithHoldlock: true,                           DateTime2PrecisionForceRound: false,
CalculateStats: false,                        TemporalColumns: { "PeriodStart", "PeriodEnd" },
SqlBulkCopyOptions: Default,                  OnSaveChangesSetFK: true,
SqlBulkCopyColumnOrderHints: null,            IgnoreGlobalQueryFilters: false,
OnConflictUpdateWhereSql: null,               ReplaceReadEntities: false,
----------------------------------------------------------------------------------------------
METHOD: SetSynchronizeFilter<T>
        SetSynchronizeSoftDelete<T>

If we want to change the defaults, BulkConfig should be added explicitly with one or more bool properties set to true, and/or int props like BatchSize set to a different number. The config also has a delegate func for setting the underlying connection/transaction, e.g. in UnderlyingTest. When doing an update we can choose to exclude one or more properties by adding their names to PropertiesToExclude; or, if fewer than half the columns need updating, PropertiesToInclude can be used. Setting both lists is not allowed.

When using the BulkInsert_/OrUpdate methods, you may also specify the PropertiesToIncludeOnCompare and PropertiesToExcludeOnCompare properties (only for SqlServer). Adding a column name to PropertiesToExcludeOnCompare allows it to be inserted and updated, but the row will not be updated if none of the other columns in that row changed. For example, if you are importing bulk data and want to remove an internal CreateDate or UpdateDate from the comparison, add those columns to PropertiesToExcludeOnCompare. Another option for the same scenario is the PropertiesToIncludeOnUpdate and PropertiesToExcludeOnUpdate properties, which allow you to specify insert-only columns such as CreateDate and CreatedBy.

If we want to insert only new rows and skip existing ones in the DB (insert-if-not-exist), use BulkInsertOrUpdate with config PropertiesToIncludeOnUpdate = new List<string> { "" }
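A minimal sketch of that insert-if-not-exist configuration, reusing the entities list from the examples in this document:

```csharp
// BulkInsertOrUpdate matches existing rows by PK but, with an empty entry in
// PropertiesToIncludeOnUpdate, updates no columns - so existing rows are skipped.
var bulkConfig = new BulkConfig
{
    PropertiesToIncludeOnUpdate = new List<string> { "" }
};
context.BulkInsertOrUpdate(entities, bulkConfig);
```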

Additionally there is UpdateByProperties, for specifying custom properties by which the update should be matched. When multiple props are set in UpdateByProperties, matching is done on the columns combined, like a unique constraint based on those columns. Using UpdateByProperties while also having an Identity column requires that the Id property be excluded. With PostgreSQL, matching requires a unique index, so for custom UpdateByProperties that do not have one, it is created temporarily, in which case the method cannot run in a transaction (throws: current transaction is aborted; CREATE INDEX CONCURRENTLY cannot run inside a transaction block). Similar is done with MySQL, by temporarily adding a UNIQUE CONSTRAINT.

If NotifyAfter is not set, it will have the same value as BatchSize; BulkCopyTimeout, when not set, has the SqlBulkCopy default of 30 seconds, and if set to 0 it indicates no limit. SetOutputIdentity has a purpose only when the PK has Identity (usually an int type with AutoIncrement); if the PK is a Guid (sequential) created in the application, there is no need for it. Tables with composite keys have no Identity column, so there is no such functionality in that case either.

var bulkConfig = new BulkConfig { SetOutputIdentity = true, BatchSize = 4000 };
context.BulkInsert(entities, bulkConfig);
context.BulkInsertOrUpdate(entities, new BulkConfig { SetOutputIdentity = true });
context.BulkInsertOrUpdate(entities, b => b.SetOutputIdentity = true); // e.g. BulkConfig with Action arg.

PreserveInsertOrder is true by default and makes sure that entities are inserted into the DB in the order they appear in the entities list.
- When the table has an Identity column (int autoincrement), entities with 0 values in the list will temporarily and automatically be changed from 0s into the range -N:-1. Alternatively, Ids can be set manually with proper values for ordering (negative values are used to avoid conflicts with existing ones in the DB). A single Id value itself doesn't matter, since the DB will change it to the next in sequence; what matters is their mutual relationship for sorting.
- Insertion order is implemented with TOP in conjunction with ORDER BY (stackoverflow: merge-into-insertion-order).
- This config should remain true when SetOutputIdentity is set to true on an entity containing a NotMapped property (issues/76). When using SetOutputIdentity, Id values will be updated to the new ones from the database.
- With BulkInsertOrUpdate on SQL Server, rows to be updated must match on the Id column, or on other unique column(s) when using UpdateByProperties, in which case ordering is done by those props instead of the Id, due to how SQL MERGE works. To preserve insert order by Id in this case, an alternative is to first use BulkRead to find which records already exist, then split the list into two lists, entitiesForUpdate and entitiesForInsert (without configuring UpdateByProperties).
- For SQLite, the combination of BulkInsertOrUpdate and automatic Identity Id setting will not work properly, since SQLite lacks the full MERGE capabilities of SQL Server. Instead, split the list into two lists and call BulkInsert and BulkUpdate separately.
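A hedged sketch of that read-then-split approach, assuming the `Item` entity with PK `ItemId`; probing existence by whether BulkRead populated a non-key column (`Name`) is an assumption:

```csharp
// Probe which Ids already exist: BulkRead fills the remaining columns of matched rows
var probes = entities.Select(e => new Item { ItemId = e.ItemId }).ToList();
context.BulkRead(probes);
var existingIds = probes.Where(p => p.Name != null) // Name stays null for unmatched rows
                        .Select(p => p.ItemId).ToHashSet();

var entitiesForUpdate = entities.Where(e => existingIds.Contains(e.ItemId)).ToList();
var entitiesForInsert = entities.Where(e => !existingIds.Contains(e.ItemId)).ToList();

context.BulkUpdate(entitiesForUpdate);
context.BulkInsert(entitiesForInsert);
```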

SetOutputIdentity is useful when BulkInsert is done into multiple related tables that have an Identity column. After the Insert is done into the first table, we need the Ids (if using Option 1) that were generated in the DB, because they are the FK (ForeignKey) in the second table. It is implemented with OUTPUT as part of the MERGE query, so in this case even the Insert is not done directly into the target table but into a temp table, which is then merged with the target table. When used, Ids will be updated in the entities list; and if PreserveInsertOrder is set to false, the entities list will be cleared and reloaded. Example of SetOutputIdentity with parent-child FK-related tables:

int numberOfEntities = 1000;
var entities = new List<Item>();
var subEntities = new List<ItemHistory>();
for (int i = 1; i <= numberOfEntities; i++)
{
    var entity = new Item { Name = $"Name {i}" };
    entity.ItemHistories = new List<ItemHistory>()
    {
        new ItemHistory { Remark = $"Info {i}.1" },
        new ItemHistory { Remark = $"Info {i}.2" }
    };
    entities.Add(entity);
}

// Option 1
using (var transaction = context.Database.BeginTransaction())
{
    context.BulkInsert(entities, new BulkConfig { SetOutputIdentity = true });
    foreach (var entity in entities) {
        foreach (var subEntity in entity.ItemHistories) {
            subEntity.ItemId = entity.ItemId; // sets FK to match its linked PK that was generated in DB
        }
        subEntities.AddRange(entity.ItemHistories);
    }
    context.BulkInsert(subEntities);
    transaction.Commit();
}

// Option 2 using Graph (only for SQL Server)
// - all entities in relationships with the main ones in the list are BulkInsertOrUpdated
context.BulkInsert(entities, b => b.IncludeGraph = true);
  
// Option 3 with BulkSaveChanges() - uses the ChangeTracker, so a little slower than direct Bulk
context.Items.AddRange(entities);
context.BulkSaveChanges();

- CalculateStats: when set to true, the result is returned in BulkConfig.StatsInfo (StatsNumber-Inserted/Updated/Deleted). If used for a pure Insert (with batching), SetOutputIdentity should also be configured, because Merge is required.
- TrackingEntities: can be set to true if we want tracking of entities from BulkRead, or if SetOutputIdentity is set.
- UseTempDB: when set, the bulk operation has to be inside a transaction.
- UniqueTableNameTempDb: when changed to false, the temp table name will be only 'Temp', without random numbers.
- CustomDestinationTableName: can be set with 'TableName' only, or with 'Schema.TableName'.
- CustomSourceTableName: when set, enables source data from a specified table already in the DB, so the input list is not used and can be empty.
- CustomSourceDestinationMappingColumns: a dict that can be set only if CustomSourceTableName is configured; used to specify Source-Destination column names when they are not the same. Example in test DestinationAndSourceTableNameTest.
- EnableShadowProperties: adds (normal) shadow properties and persists their values. Disables the automatic discriminator; use the manual method.
- IncludeGraph: when set, all entities that have relations with the main ones from the list are also merged into their tables.
- OmitClauseExistsExcept: removes the clause from the Merge statement; required when having non-comparable types like XML, and useful when triggers need to fire even for identical data. Also, in some SQL collations small and capital letters are considered the same (case-insensitive), so for BulkUpdate set it to false.
- DoNotUpdateIfTimeStampChanged: if set, checks TimeStamp for concurrency; rows with a conflict will not be updated.
- SRID: Spatial Reference Identifier, for SQL Server with NetTopologySuite.
- DateTime2PrecisionForceRound: if a dbtype datetime2 has precision less than the default 7, for example 'datetime2(3)', SqlBulkCopy does Floor instead of Round; when this property is set, rounding is done in memory to make sure inserted values are the same as with regular SaveChanges.
- TemporalColumns: shadow columns used for temporal tables. The default elements 'PeriodStart' and 'PeriodEnd' can be changed if those columns have custom names.
- OnSaveChangesSetFK: used only for BulkSaveChanges. When multiple entries have an FK relationship that is DB-generated, this sets the proper value after reading the parent PK from the DB. If PKs are generated in memory, as some Guids are, this can be set to false for better efficiency.
- ReplaceReadEntities: when set to true, the result of a BulkRead operation is provided by replacing instead of updating; the entities list parameter of the BulkRead method is repopulated with the obtained data.

SqlBulkCopyOptions is an enum (only for SqlServer) with the [Flags] attribute, which enables specifying one or more options: Default, KeepIdentity, CheckConstraints, TableLock, KeepNulls, FireTriggers, UseInternalTransaction. If you need to keep Identity PK values set in memory, and not let the DB do the autoincrement, use KeepIdentity: var bulkConfig = new BulkConfig { SqlBulkCopyOptions = SqlBulkCopyOptions.KeepIdentity }; Useful, for example, when copying from one DB to another.

OnConflictUpdateWhereSql<T>: defines conditional updates on merges; receives (existingTable, insertedTable).
Example: bc.OnConflictUpdateWhereSql = (ex, in) => $"{in}.TimeUpdated > {ex}.TimeUpdated";
SetSynchronizeFilter<T>: a method that receives and sets an expression filter on entities to delete when using BulkInsertOrUpdateOrDelete. Those that are filtered out will be ignored and not deleted.
SetSynchronizeSoftDelete<T>: a method that receives and sets an expression on entities to update a property instead of deleting when using BulkInsertOrUpdateOrDelete.
bulkConfig.SetSynchronizeSoftDelete<SomeObject>(a => new SomeObject { IsDeleted = true });

The last optional argument is Action progress (example in EfOperationTest.cs RunInsert() with WriteProgress()).

context.BulkInsert(entitiesList, null, (a) => WriteProgress(a));

The library supports Global Query Filters and Value Conversions as well. Additionally, BatchUpdate and named Property work with EnumToString conversion. It can map OwnedTypes; there are also links with info on how to achieve NestedOwnedTypes and OwnedInSeparateTable. On PG, when an Enum is in an OwnedType, it needs to have a Converter explicitly configured in OnModelCreating.

Table splitting is somewhat specific but can be configured as in Set TableSplit. Computed and Timestamp columns work in such a way that they are automatically excluded from Insert; when combined with SetOutputIdentity they will be selected back. Spatial types, like Geometry, are also supported; if an entity has one, the EXISTS ... EXCEPT clause is skipped because it's not comparable. Performance for bulk ops is measured with ActivitySources named 'BulkExecute' (tags: 'operationType', 'entitiesCount'). Bulk extension methods can be overridden if required, for example to set AuditInfo. If having problems with deadlocks, there is useful info in issue/46.

TPH inheritance

A TPH (Table-Per-Hierarchy) inheritance model can be set up in 2 ways. The first is automatically, by convention, in which case the Discriminator column is not directly in the entity but is a shadow property. The second is to explicitly define a Discriminator property in the entity and configure it with .HasDiscriminator(). An important remark regarding the first case: since we cannot set the Discriminator directly to a certain value, we first need to add the list of entities to the DbSet, where it will be set, and after that we can call the bulk operation. Note that SaveChanges is not called, and we could optionally turn off change tracking for performance. Example:

public class Student : Person { ... }
context.Students.AddRange(entities); // adding to Context so that Shadow property 'Discriminator' gets set
context.BulkInsert(entities);

TPT (Table-Per-Type) is, as of v5, partially supported.

Product compatible and additional computed target framework versions:
.NET net6.0 is compatible; net7.0 and net8.0 were computed, along with the platform-specific variants of net6.0 through net8.0 (android, ios, maccatalyst, macos, tvos, windows, and net8.0-browser).

NuGet packages (127)

Showing the top 5 NuGet packages that depend on EFCore.BulkExtensions:

Package Downloads
Elsa.Persistence.EntityFramework.Core

Elsa is a set of workflow libraries and tools that enable lean and mean workflowing capabilities in any .NET Core application. This package provides Entity Framework Core entities used by the various Elsa persistence EF Core providers.

CyberEye.Constant.Lib

Package containing constants and enums

GreatUtilities.Core

Essential tools for agile development.

Ssg.Core

Ssg.Core is the core of a framework for web applications

Adriva.Extensions.Analytics

Adriva Analytics Server Extensions

GitHub repositories (10)

Showing the top 5 popular GitHub repositories that depend on EFCore.BulkExtensions:

Repository Stars
cq-panda/Vue.NetCore
(sqlsugar now supported) .NetCore, .Net6, Vue2, Vue3, Vite, TypeScript, Element Plus + uniapp front-end/back-end separation with fully automatic code generation; supports mobile (iOS/Android/H5/WeChat Mini Program). http://www.volcore.xyz/
Webreaper/Damselfly
Damselfly is a server-based Photograph Management app. The goal of Damselfly is to index an extremely large collection of images, and allow easy search and retrieval of those images, using metadata such as the IPTC keyword tags, as well as the folder and file names. Damselfly includes support for object/face detection.
dotnetcore/sharding-core
A high-performance, lightweight solution for EF Core table sharding and database sharding with read-write separation; zero dependencies, zero learning cost, zero business-code intrusion.
WolvenKit/WolvenKit
Community Mod editor/creator for REDengine games.
VahidN/EFCoreSecondLevelCacheInterceptor
EF Core Second Level Cache Interceptor
Version Downloads Last updated
8.0.2 139,497 2/19/2024
8.0.1 260,748 12/14/2023
8.0.0 117,962 11/21/2023
8.0.0-rc.1.2 16,077 10/4/2023
8.0.0-rc.1 1,303 9/13/2023
8.0.0-preview.7 567 8/31/2023
7.8.1 77,761 12/14/2023
7.1.6 588,824 8/29/2023
7.1.5 186,066 7/25/2023
7.1.4 123,026 7/10/2023
7.1.3 67,053 7/3/2023
7.1.2 242,445 5/26/2023
7.1.1 87,965 5/13/2023
7.1.0 142,107 4/26/2023
7.0.4 86,725 4/19/2023
7.0.3 178,219 4/13/2023
7.0.2 3,675 4/13/2023
7.0.1 590,197 1/28/2023
7.0.0 71,108 1/22/2023
6.8.1 80,753 12/18/2023
6.7.16 144,986 8/29/2023
6.7.15 212,040 7/25/2023
6.7.14 29,940 7/10/2023
6.7.13 7,618 7/4/2023
6.7.12 101,863 5/26/2023
6.7.11 26,929 5/13/2023
6.7.1 67,345 4/26/2023
6.7.0 341,054 1/22/2023
6.6.5 354,814 1/5/2023
6.6.4 102,196 12/19/2022
6.6.3 3,782 12/19/2022
6.6.2 222,102 12/7/2022
6.6.1 2,914 12/7/2022
6.6.0 3,788 12/7/2022
6.6.0-rc.2 129 12/7/2022
6.6.0-rc.1 129 12/7/2022
6.5.6 3,204,013 8/8/2022
6.5.5 369,848 7/21/2022
6.5.4 218,192 7/12/2022
6.5.3 63,249 7/7/2022
6.5.2 247,646 6/20/2022
6.5.1 150,534 6/14/2022
6.5.0 545,184 5/10/2022
6.4.4 844,141 4/15/2022
6.4.3 6,897 4/14/2022
6.4.2 663,121 3/17/2022
6.4.1 647,562 2/21/2022
6.4.0 229,890 2/8/2022
6.3.9 142,711 2/7/2022
6.3.8 7,435 2/6/2022
6.3.7 20,119 2/3/2022
6.3.6 3,682 2/3/2022
6.3.5 3,188 2/3/2022
6.3.4 15,471 2/2/2022
6.3.3 85,522 1/31/2022
6.3.2 28,424 1/28/2022
6.3.1 96,800 1/20/2022
6.3.0 98,123 1/15/2022
6.2.9 702,848 1/14/2022
6.2.8 63,074 1/9/2022
6.2.7 7,043 1/9/2022
6.2.6 677,880 1/1/2022
6.2.5 3,129 12/30/2021
6.2.4 20,345 12/25/2021
6.2.3 125,064 12/17/2021
6.2.2 18,088 12/15/2021
6.2.1 30,482 12/13/2021
6.2.0 20,921 12/10/2021
6.1.9 16,040 12/9/2021
6.1.8 10,347 12/9/2021
6.1.7 2,860 12/9/2021
6.1.6 8,225 12/8/2021
6.1.5 7,177 12/8/2021
6.1.4 235,756 12/4/2021
6.1.3 7,750 12/3/2021
6.1.2 10,685 12/2/2021
6.1.1 62,379 11/29/2021
6.1.0 118,461 11/28/2021
6.0.9 39,595 11/26/2021
6.0.8 19,245 11/26/2021
6.0.7 36,601 11/26/2021
6.0.6 61,985 11/24/2021
6.0.5 18,986 11/24/2021
6.0.4 54,023 11/21/2021
6.0.3 49,690 11/18/2021
6.0.2 49,803 11/12/2021
6.0.1 124,919 11/10/2021
6.0.0 213,081 11/10/2021
6.0.0-rc.2 3,241 10/15/2021
6.0.0-rc.1 691 10/6/2021
5.4.2 450,719 1/14/2022
5.4.1 195,618 11/12/2021
5.4.0 794,353 9/9/2021
5.3.9 40,751 9/5/2021
5.3.8 14,982 9/2/2021
5.3.7 227,301 8/10/2021
5.3.6 7,753 8/10/2021
5.3.5 13,156 8/9/2021
5.3.4 3,997 8/9/2021
5.3.3 6,253 8/9/2021
5.3.2 11,058 8/6/2021
5.3.1 83,106 7/26/2021
5.3.0 65,927 7/19/2021
5.2.9 16,661 7/19/2021
5.2.8 93,600 7/9/2021
5.2.7 25,824 7/8/2021
5.2.6 36,231 7/5/2021
5.2.5 87,275 6/20/2021
5.2.4 13,722 6/17/2021
5.2.3 53,144 6/10/2021
5.2.2 260,320 5/19/2021
5.2.1 22,394 5/17/2021
5.2.0 27,059 5/13/2021
5.1.9 5,477 5/13/2021
5.1.8 57,935 5/9/2021
5.1.7 25,024 5/5/2021
5.1.6 8,013 5/4/2021
5.1.5 5,431 5/3/2021
5.1.4 4,163 5/2/2021
5.1.3 3,565 5/1/2021
5.1.2 49,339 4/24/2021
5.1.1 3,811 4/23/2021
5.1.0 40,105 4/20/2021
5.0.9 39,416 4/19/2021
5.0.8 13,964 4/19/2021
5.0.7 45,228 4/12/2021
5.0.6 18,016 4/8/2021
5.0.5 27,494 4/7/2021
5.0.4 76,583 4/7/2021
5.0.3 10,028 4/7/2021
5.0.2 12,702 4/4/2021
5.0.1 10,619 4/3/2021
5.0.0 21,379 4/2/2021
3.6.6 259,729 2/26/2022
3.6.5 119,569 1/14/2022
3.6.4 6,665 1/7/2022
3.6.3 395,120 8/5/2021
3.6.2 20,786 7/26/2021
3.6.1 319,645 4/7/2021
3.6.0 11,032 4/7/2021
3.5.8 113,474 3/30/2021
3.5.7 3,222 3/30/2021
3.5.6 28,760 3/29/2021
3.5.5 12,371 3/27/2021
3.5.4 4,418 3/26/2021
3.5.3 2,832 3/26/2021
3.5.2 33,114 3/25/2021
3.5.1 10,286 3/24/2021
3.5.0 7,390 3/24/2021
3.4.9 15,420 3/23/2021
3.4.8 69,903 3/22/2021
3.4.7 11,927 3/21/2021
3.4.6 6,016 3/20/2021
3.4.5 2,858 3/20/2021
3.4.4 3,272 3/19/2021
3.4.3 13,143 3/18/2021
3.4.2 10,087 3/17/2021
3.4.1 10,934 3/17/2021
3.4.0 41,227 3/15/2021
3.3.9 114,474 3/15/2021
3.3.8 9,533 3/14/2021
3.3.7 6,709 3/13/2021
3.3.6 13,728 3/13/2021
3.3.5 156,599 3/10/2021
3.3.4 34,697 3/9/2021
3.3.3 101,702 3/8/2021
3.3.2 7,494 3/7/2021
3.3.1 424,249 2/7/2021
3.3.0 17,405 2/7/2021
3.2.7 513,530 12/13/2020
3.2.6 5,924 12/12/2020
3.2.5 1,120,892 10/15/2020
3.2.4 201,110 10/6/2020
3.2.3 195,121 9/23/2020
3.2.2 21,316 9/21/2020
3.2.1 8,504 9/21/2020
3.2.0 28,783 9/17/2020
3.1.6 292,095 9/11/2020
3.1.5 932,664 7/14/2020
3.1.4 182,021 7/7/2020
3.1.3 29,079 7/3/2020
3.1.2 13,220 7/1/2020
3.1.1 1,100,399 3/25/2020
3.1.0 1,250,144 12/18/2019
3.0.5 92,718 12/10/2019
3.0.4 66,705 12/1/2019
3.0.3 31,212 11/17/2019
3.0.2 10,292 11/12/2019
3.0.1 6,513 11/11/2019
3.0.0 156,717 10/4/2019
3.0.0-rc 5,484 9/25/2019
2.6.4 877,245 11/30/2019
2.6.3 405,934 9/21/2019
2.6.2 3,547 9/21/2019
2.6.1 117,140 9/12/2019
2.6.0 240,917 8/19/2019
2.6.0-rc 5,146 7/24/2019
2.5.2 177,108 7/22/2019
2.5.1 24,258 7/14/2019
2.5.0 115,776 7/14/2019
2.4.9 127,935 7/4/2019
2.4.8 3,373 7/4/2019
2.4.7 220,230 5/28/2019
2.4.6 124,572 4/22/2019
2.4.5 65,587 4/8/2019
2.4.4 93,978 3/18/2019
2.4.3 35,802 3/5/2019
2.4.2 9,203 3/3/2019
2.4.1 337,397 3/3/2019
2.4.0 217,462 2/4/2019
2.3.9 86,873 1/31/2019
2.3.8 19,691 1/29/2019
2.3.7 98,881 1/4/2019
2.3.6 18,860 12/27/2018
2.3.5 69,561 12/10/2018
2.3.4 21,361 11/27/2018
2.3.3 3,694 11/27/2018
2.3.2 13,169 11/26/2018
2.3.1 13,629 11/25/2018
2.3.0 13,624 11/23/2018
2.2.9 11,672 11/23/2018
2.2.8 3,912 11/22/2018
2.2.7 4,126 11/22/2018
2.2.6 69,039 11/21/2018
2.2.5 8,331 11/16/2018
2.2.4 34,011 11/14/2018
2.2.3 15,674 11/11/2018
2.2.2 55,398 11/8/2018
2.2.1 7,830 11/8/2018
2.2.0 123,714 11/8/2018
2.1.9 58,126 10/28/2018
2.1.8 35,638 10/10/2018
2.1.7 135,997 7/26/2018
2.1.6 13,863 7/13/2018
2.1.5 6,542 7/12/2018
2.1.4 14,315 7/7/2018
2.1.3 11,490 6/24/2018
2.1.2 11,109 6/21/2018
2.1.1 32,457 6/14/2018
2.1.0 22,287 6/11/2018
2.0.9 9,990 6/11/2018
2.0.8 131,384 5/15/2018
2.0.7 31,822 3/28/2018
2.0.6 7,738 3/24/2018
2.0.5 18,184 2/12/2018
2.0.4 4,331 2/6/2018
2.0.3 5,051 1/30/2018
2.0.2 14,659 11/13/2017
2.0.1 12,786 9/7/2017
2.0.0 18,581 9/4/2017
2.0.0-rc 3,583 9/4/2017
1.1.0 5,935 9/4/2017
1.0.8 4,119 8/31/2017
1.0.7 5,001 8/15/2017
1.0.6 4,324 8/9/2017
1.0.5 5,097 7/11/2017
1.0.4 4,533 6/23/2017
1.0.3 4,432 5/30/2017
1.0.2 4,789 5/15/2017
1.0.1 4,191 5/12/2017
1.0.0 11,073 5/12/2017

Multiple fixes