ASCDataAccessLibrary 4.0.1
ASCDataAccessLibrary v4.0 Complete Documentation
Enterprise Azure Storage Solution for .NET Applications
Version: 4.0.0
Last Updated: November 2025
Authors: O. Brown | M. Chukwuemeka
Company: Answer Sales Calls Inc.
License: MIT
Table of Contents
- Overview
- Architecture
- Installation & Setup
- Core Components
- Data Access Layer
- Entity Models
- Lambda Expression System
- Session Management
- Error Logging
- Queue Management
- Blob Storage
- Advanced Features
- Performance & Optimization
- Migration Guide
- API Reference
- Best Practices
- Troubleshooting
Overview
ASCDataAccessLibrary is a comprehensive, enterprise-grade data access library for Azure Table Storage and Azure Blob Storage. Built on the modern Azure.Data.Tables SDK, it provides a powerful abstraction layer that significantly enhances developer productivity while maintaining performance and reliability.
Version 4.0 Highlights
- Modern SDK Foundation: Built on Azure.Data.Tables (12.0+)
- Hybrid Query Engine: Intelligent server/client-side query splitting
- Fail-Safe Protection: Prevents accidental full table scans
- Expression Rewriting: Automatic optimization of lambda expressions
- Comprehensive Feature Set: Session management, error logging, queue processing, and more
Key Capabilities
| Feature | Description | Benefit |
|---|---|---|
| Lambda Queries | LINQ-style querying with automatic optimization | Natural, type-safe queries |
| Dynamic Entities | Schema-less data handling | Flexibility without predefined types |
| Batch Operations | Automatic chunking and partition grouping | Process thousands of records efficiently |
| String Chunking | Automatic handling of 32KB+ strings | No manual string management |
| Hybrid Filtering | Server/client-side query split | Best performance for complex queries |
| Session Management | File-based, persistent session state | Survives application restarts |
| Error Logging | Automatic context capture and persistence | Comprehensive error tracking |
| Queue Processing | Position-tracked resumable queues | Fault-tolerant batch processing |
Architecture
High-Level Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Application Layer │
│ (Your Code, Controllers, Services, Console Apps, etc.) │
└───────────────────────┬─────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ ASCDataAccessLibrary (v4.0) │
│ │
│ ┌─────────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ DataAccess<T> │ │ SessionMgr │ │ AzureBlobs │ │
│ │ - CRUD Ops │ │ - Web/ │ │ - Upload/ │ │
│ │ - Lambda Query │ │ Desktop/ │ │ Download │ │
│ │ - Batch Ops │ │ Console │ │ - Tag Search │ │
│ └────────┬────────┘ └──────┬───────┘ └────────┬─────────┘ │
│ │ │ │ │
│ ┌────────▼──────────────────▼─────────────────────▼─────────┐ │
│ │ Expression Analysis & Filter Generation │ │
│ │ - ODataFilterVisitor (Table Storage) │ │
│ │ - BlobTagFilterVisitor (Blob Storage) │ │
│ │ - NullOrEmptyRewriter (Expression Optimization) │ │
│ │ - UnifiedFilterGenerator (OData/Blob Tag syntax) │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Entity Serialization & Adaptation │ │
│ │ - TableEntityBase (Custom entity base) │ │
│ │ - TableEntityAdapter (Bridge to Azure SDK) │ │
│ │ - DynamicEntity (Schema-less entities) │ │
│ │ - String Chunking (>32KB handling) │ │
│ │ - Decimal Preservation (String conversion) │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Specialized Components │ │
│ │ - ErrorLogData (Automatic error tracking) │ │
│ │ - QueueData<T> (Position-tracked queues) │ │
│ │ - StateList<T> (Navigable collections) │ │
│ │ - AppSessionData (Session persistence) │ │
│ └────────────────────────────────────────────────────────────┘ │
└───────────────────────┬─────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Azure SDK Layer │
│ - Azure.Data.Tables (TableClient, TableServiceClient) │
│ - Azure.Storage.Blobs (BlobServiceClient, BlobContainerClient) │
└───────────────────────┬─────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Azure Storage │
│ - Azure Table Storage (OData v2 protocol) │
│ - Azure Blob Storage (REST API with tag indexing) │
└─────────────────────────────────────────────────────────────────┘
Component Organization
ASCDataAccessLibrary/
├── ASCTableStorage.Data/ # Core data access
│ ├── DataAccess<T> # Main data access class
│ ├── TableOperationType # CRUD operation types
│ ├── ComparisonTypes # Query comparison operators
│ └── DBQueryItem # Custom query builder
│
├── ASCTableStorage.Models/ # Entity models
│ ├── TableEntityBase # Base entity class
│ ├── DynamicEntity # Schema-less entities
│ ├── AppSessionData # Session storage
│ ├── ErrorLogData # Error logging
│ ├── QueueData<T> # Queue management
│ ├── StateList<T> # Position-tracked lists
│ └── ITableExtra # Entity interface
│
├── ASCTableStorage.Blobs/ # Blob storage
│ ├── AzureBlobs # Blob operations
│ └── BlobData # Blob metadata model
│
├── ASCTableStorage.Sessions/ # Session management
│ ├── SessionManager # Static session manager
│ ├── Session # Session instance
│ ├── SessionOptions # Configuration
│ └── SessionIdStrategy # ID generation strategies
│
├── ASCTableStorage.Logging/ # Error logging
│ ├── AzureTableLogger # ILogger implementation
│ ├── ApplicationLogEntry # Log entry model
│ └── ErrorCodeTypes # Severity levels
│
├── ASCTableStorage.Common/ # Shared utilities
│ ├── Extensions # Extension methods
│ ├── Functions # Common functions
│ └── Constants # Library constants
│
└── LambdaHandlers/ # Expression processing
├── ExpressionAnalyzerBase # Base analyzer
├── ODataFilterVisitor # Table Storage queries
├── BlobTagFilterVisitor # Blob Tag queries
├── NullOrEmptyRewriter # Expression optimization
├── UnifiedFilterGenerator # Filter string generation
└── FilterSyntaxConfig # Backend syntax configs
Installation & Setup
NuGet Installation
# Package Manager Console
Install-Package ASCDataAccessLibrary
# .NET CLI
dotnet add package ASCDataAccessLibrary
# PackageReference
<PackageReference Include="ASCDataAccessLibrary" Version="4.0.0" />
Dependencies
- Azure.Data.Tables >= 12.0.0
- Azure.Storage.Blobs >= 12.0.0
- .NET >= 6.0
Basic Configuration
using ASCTableStorage.Data;
using ASCTableStorage.Models;
// Method 1: Direct credentials
var directAccess = new DataAccess<Customer>("accountName", "accountKey");
// Method 2: TableOptions configuration
var options = new TableOptions
{
    TableStorageName = "accountName",
    TableStorageKey = "accountKey",
    TableName = "CustomTableName" // Optional override
};
var optionsAccess = new DataAccess<Customer>(options);
// Method 3: Connection string (for local development)
var localAccess = new DataAccess<Customer>(
    "UseDevelopmentStorage=true", // Azurite/Emulator
    ""
);
Environment-Specific Setup
ASP.NET Core Web Application
// Program.cs
using ASCTableStorage.Sessions;
using ASCTableStorage.Logging;
var builder = WebApplication.CreateBuilder(args);
// Configure Session Management
builder.Services.AddHttpContextAccessor();
builder.Services.AddSingleton(new SessionOptions
{
AccountName = builder.Configuration["Azure:AccountName"],
AccountKey = builder.Configuration["Azure:AccountKey"],
EnableAutoCleanup = true,
CleanupInterval = TimeSpan.FromHours(1),
SessionTimeout = TimeSpan.FromHours(24),
AutoCommitInterval = TimeSpan.FromMinutes(5)
});
builder.Services.AddHostedService<SessionManagerInitializerService>();
// Configure Azure Table Logging
builder.Logging.ConfigureAzureTableLogging(
builder.Configuration["Azure:AccountName"]!,
builder.Configuration["Azure:AccountKey"]!,
options =>
{
options.MinimumLevel = LogLevel.Information;
options.BatchSize = 100;
options.FlushInterval = TimeSpan.FromSeconds(5);
}
);
var app = builder.Build();
app.Run();
Desktop/WPF Application
// App.xaml.cs
using ASCTableStorage.Sessions;
using Microsoft.Extensions.Logging;
public partial class App : Application
{
protected override void OnStartup(StartupEventArgs e)
{
base.OnStartup(e);
// Initialize Session Manager
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.IdStrategy = SessionIdStrategy.MachineAndUser;
options.EnableAutoCleanup = true;
options.AutoCommitInterval = TimeSpan.FromMinutes(10);
});
// Configure logging
var loggerFactory = LoggerFactory.Create(builder =>
{
builder.ConfigureAzureTableLogging("accountName", "accountKey", options =>
{
options.MinimumLevel = LogLevel.Debug;
options.ApplicationName = "MyDesktopApp";
});
});
RemoteLogger.Initialize(loggerFactory);
}
}
Console/Service Application
using ASCTableStorage.Sessions;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
var host = Host.CreateDefaultBuilder(args)
.ConfigureLogging(logging =>
{
logging.ConfigureAzureTableLogging("accountName", "accountKey", options =>
{
options.MinimumLevel = LogLevel.Information;
options.ApplicationName = "MyService";
});
})
.ConfigureServices(services =>
{
services.AddHostedService<Worker>();
})
.Build();
// Initialize sessions for console apps
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.IdStrategy = SessionIdStrategy.ProcessId;
options.EnableAutoCleanup = false; // Manual control in services
});
await host.RunAsync();
Core Components
TableEntityBase
The foundation class for all table entities, providing serialization, deserialization, and advanced features.
namespace ASCTableStorage.Models
{
public abstract class TableEntityBase : ITableEntity
{
public string? PartitionKey { get; set; }
public string? RowKey { get; set; }
public DateTimeOffset? Timestamp { get; set; }
public ETag ETag { get; set; }
// Automatic serialization with:
// - String chunking for values > 32KB
// - Decimal preservation (stored as strings)
// - Enum handling
// - DateTime UTC conversion
// - Complex object JSON serialization
}
}
Features:
- Automatic property serialization to Azure-compatible format
- String chunking: LargeProperty → LargeProperty, LargeProperty_pt_1, LargeProperty_pt_2, ...
- Decimal precision: Converts to string to avoid double precision loss
- Type safety: Reflection-based with caching for performance
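The chunking and decimal-preservation behaviors can be sketched in isolation. The `_pt_N` suffix convention follows the feature list above, but the chunk size constant and helper functions here are illustrative assumptions, not the library's implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Azure Table Storage string properties are limited to ~32K characters,
// hence the 32KB threshold mentioned above.
const int MaxChunk = 32 * 1024;

// Split an oversized value across the original property plus _pt_N overflow properties.
Dictionary<string, string> Chunk(string name, string value)
{
    var parts = new Dictionary<string, string>();
    for (int i = 0, n = 0; i < value.Length; i += MaxChunk, n++)
    {
        var key = n == 0 ? name : $"{name}_pt_{n}";
        parts[key] = value.Substring(i, Math.Min(MaxChunk, value.Length - i));
    }
    return parts;
}

// Reassemble by walking _pt_1, _pt_2, ... until a suffix is missing.
string Reassemble(Dictionary<string, string> parts, string name)
{
    var sb = new System.Text.StringBuilder(parts[name]);
    for (int n = 1; parts.ContainsKey($"{name}_pt_{n}"); n++)
        sb.Append(parts[$"{name}_pt_{n}"]);
    return sb.ToString();
}

var big = new string('x', 70_000);
var stored = Chunk("LargeNotes", big);
Console.WriteLine(stored.Count); // 3 (LargeNotes, LargeNotes_pt_1, LargeNotes_pt_2)
Console.WriteLine(Reassemble(stored, "LargeNotes") == big); // True

// Decimal preservation: a decimal round-tripped through an invariant-culture
// string keeps all 28-29 significant digits that a double would lose.
decimal balance = 1234567.8901234567890123456789m;
string asString = balance.ToString(CultureInfo.InvariantCulture);
Console.WriteLine(decimal.Parse(asString, CultureInfo.InvariantCulture) == balance); // True
```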
ITableExtra Interface
Required interface for entities to specify table name and ID retrieval.
public interface ITableExtra
{
string TableReference { get; } // Table name in Azure
string GetIDValue(); // Entity unique identifier
}
Implementation Example:
public class Customer : TableEntityBase, ITableExtra
{
public string? CustomerName { get; set; }
public string? Email { get; set; }
public decimal AccountBalance { get; set; }
public string? LargeNotes { get; set; } // Can be > 32KB
public CustomerStatus Status { get; set; }
public string TableReference => "Customers";
public string GetIDValue() => this.RowKey!;
}
public enum CustomerStatus
{
Active,
Inactive,
Suspended
}
Data Access Layer
DataAccess<T> Class
The primary interface for all CRUD operations, queries, and batch processing.
Constructors
// Constructor 1: Account credentials
public DataAccess(string accountName, string accountKey)
// Constructor 2: TableOptions
public DataAccess(TableOptions options)
CRUD Operations
Create/Update
// Insert or replace (default)
await dataAccess.ManageDataAsync(entity);
await dataAccess.ManageDataAsync(entity, TableOperationType.InsertOrReplace);
// Insert or merge (update only provided properties)
await dataAccess.ManageDataAsync(entity, TableOperationType.InsertOrMerge);
// Synchronous version
dataAccess.ManageData(entity);
Read - Single Entity
// By RowKey
var customer = await dataAccess.GetRowObjectAsync("customer-123");
// By lambda expression
var customer = await dataAccess.GetRowObjectAsync(x =>
x.Email == "john@example.com"
);
// By custom criteria
var customer = await dataAccess.GetRowObjectAsync(
"Email",
ComparisonTypes.eq,
"john@example.com"
);
Read - Collection
// All records in partition
var customers = await dataAccess.GetCollectionAsync("CUST");
// Lambda expression
var activeCustomers = await dataAccess.GetCollectionAsync(x =>
x.Status == CustomerStatus.Active && x.AccountBalance > 1000
);
// Custom query items
var queryItems = new List<DBQueryItem>
{
new() { FieldName = "Status", FieldValue = "Active", HowToCompare = ComparisonTypes.eq },
new() { FieldName = "Priority", FieldValue = "5", HowToCompare = ComparisonTypes.ge }
};
var customers = await dataAccess.GetCollectionAsync(queryItems, QueryCombineStyle.and);
// OData filter string
var customers = await dataAccess.GetCollectionByFilterAsync(
"Status eq 'Active' and Priority ge 5"
);
// All table data (use with caution!)
var allCustomers = await dataAccess.GetAllTableDataAsync();
Delete
await dataAccess.ManageDataAsync(entity, TableOperationType.Delete);
Pagination
// Basic pagination
string? continuationToken = null;
int pageSize = 100;
do
{
var result = await dataAccess.GetPagedCollectionAsync(
pageSize,
continuationToken
);
foreach (var customer in result.Items)
{
// Process customer
}
continuationToken = result.ContinuationToken;
} while (!string.IsNullOrEmpty(continuationToken));
// Paginated lambda query
var result = await dataAccess.GetPagedCollectionAsync(
x => x.Status == CustomerStatus.Active,
pageSize: 50,
continuationToken: token
);
// Initial data load pattern
var initialLoad = await dataAccess.GetInitialDataLoadAsync(
initialLoadSize: 25
);
Batch Operations
// Batch insert/update with progress tracking
var customers = GetLargeCustomerList(); // 500 items
var progress = new Progress<BatchUpdateProgress>(p =>
{
Console.WriteLine($"Processed: {p.ProcessedItems}/{p.TotalItems} " +
$"({p.PercentComplete:F1}%) - " +
$"Success: {p.SuccessfulItems}, Failed: {p.FailedItems}");
});
var result = await dataAccess.BatchUpdateListAsync(
customers,
TableOperationType.InsertOrReplace,
progress
);
Console.WriteLine($"Batch Complete:");
Console.WriteLine($" Total Items: {result.TotalItems}");
Console.WriteLine($" Successful: {result.SuccessfulItems}");
Console.WriteLine($" Failed: {result.FailedItems}");
Console.WriteLine($" Success: {result.Success}");
// Batch delete
await dataAccess.BatchUpdateListAsync(
customersToDelete,
TableOperationType.Delete
);
// Batch with DynamicEntity
var dynamicEntities = new List<DynamicEntity>();
// ... populate list
await dataAccess.BatchUpdateListAsync(dynamicEntities);
Batch Operation Features:
- Automatic partition key grouping (Azure requirement)
- Automatic chunking (100 items per batch max)
- Progress tracking with IProgress<BatchUpdateProgress>
- Error handling with detailed results
- Supports both TableEntityBase and DynamicEntity
Entity Models
DynamicEntity
Schema-less entity handling without predefined types.
Basic Usage
// Create dynamic entity
var entity = new DynamicEntity("Products");
entity["ProductName"] = "Laptop";
entity["Price"] = 999.99;
entity["InStock"] = true;
entity["Tags"] = new[] { "Electronics", "Computers" };
entity["Specifications"] = new { CPU = "Intel i7", RAM = "16GB" };
// Save
var dataAccess = new DataAccess<DynamicEntity>("accountName", "accountKey");
await dataAccess.ManageDataAsync(entity);
// Retrieve and access
var retrieved = await dataAccess.GetRowObjectAsync(entity.RowKey);
string name = retrieved.GetProperty<string>("ProductName");
double price = retrieved.GetProperty<double>("Price");
bool inStock = retrieved.GetProperty<bool>("InStock");
Pattern-Based Key Detection
var config = new DynamicEntity.KeyPatternConfig
{
PartitionKeyPatterns = new List<DynamicEntity.PatternRule>
{
new()
{
Pattern = @".*customer.*id.*",
Priority = 100,
KeyType = DynamicEntity.KeyGenerationType.DirectValue
},
new()
{
Pattern = @".*department.*",
Priority = 80,
KeyType = DynamicEntity.KeyGenerationType.DirectValue
}
},
RowKeyPatterns = new List<DynamicEntity.PatternRule>
{
new()
{
Pattern = @".*(order|invoice).*id.*",
Priority = 100,
KeyType = DynamicEntity.KeyGenerationType.DirectValue
}
},
FuzzyMatchThreshold = 0.5 // 50% similarity required
};
var properties = new Dictionary<string, object>
{
["CustomerId"] = "CUST-789",
["OrderId"] = "ORD-12345",
["Department"] = "Sales",
["Amount"] = 1500.00
};
var entity = DynamicEntity.CreateFromDictionary("Orders", properties, config);
// Automatically sets:
// - PartitionKey = "CUST-789" (from CustomerId)
// - RowKey = "ORD-12345" (from OrderId)
Key Generation Types
public enum KeyGenerationType
{
DirectValue, // Use value as-is
Generated, // Generate GUID
DateBased, // "2024-01" format
Sequential, // Timestamp + value
ReverseTimestamp, // Reverse chronological
Composite // Combine multiple fields
}
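The enum above only pins down the DateBased format ("2024-01"); the concrete output of the other strategies is not documented here, so the formats in this sketch are assumptions for illustration:

```csharp
using System;
using System.Globalization;

var now = new DateTime(2024, 1, 15, 9, 30, 0, DateTimeKind.Utc);
string value = "ORD-12345";

string directValue = value;                                   // DirectValue: use value as-is
string generated   = Guid.NewGuid().ToString("N");            // Generated: fresh GUID
string dateBased   = now.ToString("yyyy-MM",
                        CultureInfo.InvariantCulture);        // DateBased → "2024-01"
string sequential  = now.ToString("yyyyMMddHHmmss",
                        CultureInfo.InvariantCulture)
                     + "_" + value;                           // Sequential: timestamp + value
// ReverseTimestamp (assumed form): subtracting from MaxValue.Ticks makes newer
// rows sort first under Azure's lexicographic RowKey ordering.
string reverseTs   = (DateTime.MaxValue.Ticks - now.Ticks).ToString("D19");
string composite   = $"CUST-789_{value}";                     // Composite: fields joined

Console.WriteLine(dateBased);   // 2024-01
Console.WriteLine(sequential);  // 20240115093000_ORD-12345
```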
Factory Methods
// From dictionary
var entity = DynamicEntity.CreateFromDictionary(
"TableName",
properties,
patternConfig
);
// From anonymous object
var entity = DynamicEntity.CreateFromObject(
"TableName",
new { Id = "123", Name = "Test" },
patternConfig
);
Lambda Expression System
Expression Analysis Architecture
The library uses a sophisticated expression analysis system to intelligently split lambda expressions between server-side (Azure) and client-side (in-memory) processing.
Expression Visitor Pipeline
Lambda Expression
↓
┌──────────────────────┐
│ Constant Evaluator │ → Evaluates constant expressions
└──────────┬───────────┘
↓
┌──────────────────────┐
│ NullOrEmptyRewriter │ → Rewrites patterns like (x == null || x == "")
└──────────┬───────────┘
↓
┌──────────────────────┐
│ ExpressionAnalyzer │ → Splits into server/client parts
│ - ODataFilterVisitor │ (for Table Storage)
│ - BlobTagVisitor │ (for Blob Storage)
└──────────┬───────────┘
↓
┌──────────────────────┐
│ UnifiedFilter │ → Generates backend-specific filter string
│ Generator │ (OData or Blob Tag syntax)
└──────────┬───────────┘
↓
Filter String
(sent to Azure)
Supported Operations
Server-Side Operations (OData for Table Storage)
// Comparison operators
x => x.Priority == 5 // eq
x => x.Priority != 5 // ne
x => x.Priority > 5 // gt
x => x.Priority >= 5 // ge
x => x.Priority < 5 // lt
x => x.Priority <= 5 // le
// Logical operators
x => x.Priority > 5 && x.Status == CustomerStatus.Active // and
x => x.Priority > 5 || x.Status == CustomerStatus.Inactive // or
x => !(x.Status == CustomerStatus.Suspended) // not
// String operations
x => string.IsNullOrEmpty(x.Email) // eq ''
x => string.IsNullOrWhiteSpace(x.Notes) // eq ''
// DateTime properties
x => x.CreatedDate.Year == 2024 // year(CreatedDate) eq 2024
x => x.CreatedDate.Month == 12 // month(CreatedDate) eq 12
x => x.CreatedDate.Day > 15 // day(CreatedDate) gt 15
x => x.CreatedDate.Hour >= 9 // hour(CreatedDate) ge 9
// Collection operations
var statuses = new[] { "Active", "Pending" };
x => statuses.Contains(x.Status)
// Generates: (Status eq 'Active' or Status eq 'Pending')
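The Contains-to-or-chain expansion shown in the comment above can be reproduced with plain string building; this is a sketch of the resulting OData text, not the library's filter generator:

```csharp
using System;
using System.Linq;

var statuses = new[] { "Active", "Pending" };
string filter = "(" + string.Join(" or ",
    statuses.Select(s => $"Status eq '{s}'")) + ")";
Console.WriteLine(filter); // (Status eq 'Active' or Status eq 'Pending')
```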
Client-Side Operations (Automatic Fallback)
// String methods
x => x.Email.Contains("@example.com")
x => x.Name.ToLower().StartsWith("john")
x => x.Description.ToUpper().EndsWith("URGENT")
// Complex expressions
x => x.Tags.Any(t => t.StartsWith("Priority"))
x => x.OrderItems.Sum(i => i.Quantity) > 10
x => CalculateDiscount(x.AccountBalance) > 100
// Method chains
x => x.Email.Trim().ToLower().Contains("admin")
// Custom logic
x => MyCustomMethod(x)
Hybrid Query Example
var customers = await dataAccess.GetCollectionAsync(x =>
x.Status == CustomerStatus.Active && // Server-side
x.CreatedDate.Year == 2024 && // Server-side
x.Priority >= 5 && // Server-side
x.Email.ToLower().Contains("@example.com") // Client-side
);
// Generated OData filter (sent to Azure):
// (Status eq 'Active' and year(CreatedDate) eq 2024 and Priority ge 5)
//
// Client-side filter (applied in memory):
// x.Email.ToLower().Contains("@example.com")
Expression Rewriting
The library automatically rewrites certain patterns for better compatibility:
// BEFORE rewriting:
x => (x.Email == null || x.Email == "")
// AFTER rewriting:
x => string.IsNullOrEmpty(x.Email)
// Generated OData:
Email eq ''
This rewriting is critical because:
- Azure Table Storage doesn't support eq null in OData
- string.IsNullOrEmpty can be translated to eq '', which is supported
- Prevents queries from failing or forcing full client-side processing
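The rewrite itself can be reproduced with a standalone System.Linq.Expressions visitor. This is a sketch of the pattern, not the library's NullOrEmptyRewriter:

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

Expression<Func<string, bool>> original = s => s == null || s == "";
var rewritten = (Expression<Func<string, bool>>)new NullOrEmptySketch().Visit(original);

Console.WriteLine(original.Body);  // e.g. ((s == null) OrElse (s == ""))
Console.WriteLine(rewritten.Body); // IsNullOrEmpty(s)

// Behavior is unchanged:
var f = rewritten.Compile();
Console.WriteLine(f(null) && f("") && !f("x")); // True

// Illustrative visitor: collapses (x == null || x == "") into string.IsNullOrEmpty(x).
class NullOrEmptySketch : ExpressionVisitor
{
    static readonly MethodInfo IsNullOrEmpty =
        typeof(string).GetMethod(nameof(string.IsNullOrEmpty), new[] { typeof(string) })!;

    protected override Expression VisitBinary(BinaryExpression node)
    {
        // Match (target == null) || (target == "") over the same target.
        if (node.NodeType == ExpressionType.OrElse &&
            TryGetTarget(node.Left, out var a) &&
            TryGetTarget(node.Right, out var b) &&
            a.ToString() == b.ToString())
        {
            return Expression.Call(IsNullOrEmpty, a);
        }
        return base.VisitBinary(node);
    }

    static bool TryGetTarget(Expression e, out Expression target)
    {
        target = null!;
        if (e is BinaryExpression { NodeType: ExpressionType.Equal } b &&
            b.Right is ConstantExpression c &&
            (c.Value is null || (c.Value as string) == ""))
        {
            target = b.Left;
            return true;
        }
        return false;
    }
}
```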
Fail-Safe Protection
try
{
// This expression cannot be translated to OData
var results = await dataAccess.GetCollectionAsync(x =>
x.CustomProperty.MyUnsupportedMethod()
);
}
catch (InvalidOperationException ex)
{
// "The provided lambda expression could not be translated into
// a valid server-side query. To prevent returning the entire
// table, this operation has been aborted."
}
Protection prevents:
- Accidentally fetching entire tables (expensive!)
- Unexpectedly large result sets
- Performance degradation
- Excessive Azure costs
Blob Tag Queries
Similar lambda support for Azure Blob Storage tag indexing:
var blobs = await blobService.GetCollectionAsync(x =>
x.Tags["Year"] == "2024" && // Server-side
x.Tags["Department"] == "Finance" && // Server-side
x.Tags["Status"] == "Approved" && // Server-side
x.Size > 1024 * 1024 // Client-side
);
// Generated Blob Tag filter:
// ("Year" = '2024' AND "Department" = 'Finance' AND "Status" = 'Approved')
//
// Client-side filter:
// x.Size > 1048576
Session Management
Overview
ASCDataAccessLibrary provides a comprehensive session management system that works across web, desktop, and console applications with file-based persistence.
Architecture
┌──────────────────────────────────────────────────────────────┐
│ Application Layer │
│ SessionManager.GetValue<T>() / SetValue() │
└─────────────────────────────┬────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────┐
│ Session Instance │
│ - In-memory cache of session data │
│ - Dirty tracking for changes │
│ - Auto-commit timer (optional) │
└─────────────────────────────┬────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────┐
│ Azure Table Storage │
│ Table: AppSessionData │
│ PartitionKey: SessionID │
│ RowKey: Key name │
│ Value: Base64 JSON encoded data │
└──────────────────────────────────────────────────────────────┘
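The "Base64 JSON encoded" value storage in the diagram can be sketched as a simple round-trip. The exact serializer settings the library uses are not documented here, so this encoding is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Text.Json;

// JSON-serialize the value, then Base64-encode the UTF-8 bytes for storage.
static string Encode<T>(T value) =>
    Convert.ToBase64String(Encoding.UTF8.GetBytes(JsonSerializer.Serialize(value)));

static T? Decode<T>(string stored) =>
    JsonSerializer.Deserialize<T>(Encoding.UTF8.GetString(Convert.FromBase64String(stored)));

var prefs = new Dictionary<string, string> { ["Theme"] = "Dark", ["Language"] = "en-US" };
string stored = Encode(prefs);
var back = Decode<Dictionary<string, string>>(stored)!;
Console.WriteLine(back["Theme"]); // Dark
```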
Session ID Strategies
public enum SessionIdStrategy
{
HttpContext, // Web: Uses HttpContext.Session.Id
MachineAndUser, // Desktop: MachineName + UserName
ProcessId, // Console: Current Process ID
Custom // User-provided strategy
}
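As a rough sketch, the non-web strategies plausibly resolve to values like the following (the exact formats are assumptions; HttpContext is web-only and omitted here):

```csharp
using System;

string machineAndUser = $"{Environment.MachineName}-{Environment.UserName}"; // MachineAndUser
string processId = Environment.ProcessId.ToString();                          // ProcessId
Func<string> customProvider = () => $"CustomID-{Guid.NewGuid()}";             // Custom

Console.WriteLine(machineAndUser);
Console.WriteLine(processId);
Console.WriteLine(customProvider());
```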
Configuration
// Web Application
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.IdStrategy = SessionIdStrategy.HttpContext;
options.ContextAccessor = httpContextAccessor; // From DI
options.EnableAutoCleanup = true;
options.CleanupInterval = TimeSpan.FromHours(1);
options.SessionTimeout = TimeSpan.FromHours(24);
options.AutoCommitInterval = TimeSpan.FromMinutes(5);
});
// Desktop Application
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.IdStrategy = SessionIdStrategy.MachineAndUser;
options.EnableAutoCleanup = true;
options.AutoCommitInterval = TimeSpan.FromMinutes(10);
});
// Console/Service Application
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.IdStrategy = SessionIdStrategy.ProcessId;
options.EnableAutoCleanup = false; // Manual control
options.SessionId = "my-service-session"; // Explicit ID
});
// Custom ID Provider
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.IdStrategy = SessionIdStrategy.Custom;
options.CustomIdProvider = () => $"CustomID-{Guid.NewGuid()}";
});
Usage
// Store values
SessionManager.SetValue("UserPreferences", new
{
Theme = "Dark",
Language = "en-US",
NotificationsEnabled = true
});
SessionManager.SetValue<int>("LoginAttempts", 3);
SessionManager.SetValue("LastLoginTime", DateTime.UtcNow);
// Retrieve values
var preferences = SessionManager.GetValue<dynamic>("UserPreferences");
int attempts = SessionManager.GetValue<int>("LoginAttempts");
DateTime lastLogin = SessionManager.GetValue<DateTime>("LastLoginTime");
// Check existence
if (SessionManager.ContainsKey("UserPreferences"))
{
// ...
}
// Remove specific value
SessionManager.ClearValue("LoginAttempts");
// Clear all session data
await SessionManager.ClearSessionAsync();
// Manual commit (if not using auto-commit)
await SessionManager.CommitSessionAsync();
// Force refresh from storage
await SessionManager.RefreshSessionAsync();
Auto-Commit Behavior
When AutoCommitInterval is set, session data is automatically persisted to Azure Table Storage at the specified interval:
// Changes are tracked in-memory
SessionManager.SetValue("CartCount", 5);
SessionManager.SetValue("LastProduct", "PRD-123");
// ... 5 minutes later (if AutoCommitInterval = 5 minutes)
// Automatically commits to Azure Table Storage
// No need for manual CommitSessionAsync() calls
Cleanup Background Service
// Automatic cleanup of old sessions
options.EnableAutoCleanup = true;
options.CleanupInterval = TimeSpan.FromHours(1);
options.SessionTimeout = TimeSpan.FromHours(24);
// Runs in background, removes sessions older than SessionTimeout
Session Data Model
public class AppSessionData : TableEntityBase, ITableExtra
{
public string? SessionID { get; set; } // PartitionKey
public string? Key { get; set; } // RowKey
public object? Value { get; set; } // Base64 JSON encoded
public string TableReference => "AppSessionData";
public string GetIDValue() => this.RowKey!;
}
Error Logging
Overview
Comprehensive error logging with automatic context capture, stack trace preservation, and Azure Table Storage persistence.
ErrorLogData Model
public class ErrorLogData : TableEntityBase, ITableExtra
{
public string? ApplicationName { get; set; }
public string? ErrorSeverity { get; set; }
public string? ErrorMessage { get; set; }
public string? FunctionCalled { get; set; }
public string? StackTrace { get; set; }
public string? InnerException { get; set; }
public string? CustomerID { get; set; }
public DateTime ErrorDate { get; set; }
public string TableReference => "ApplicationErrorLog";
public string GetIDValue() => this.RowKey!;
}
public enum ErrorCodeTypes
{
Information,
Warning,
Error,
Critical
}
Usage Patterns
Pattern 1: With Exception
try
{
// Your code
ProcessPayment(order);
}
catch (Exception ex)
{
var errorLog = new ErrorLogData(
ex,
"Failed to process payment for order",
ErrorCodeTypes.Error,
customerID: order.CustomerId
);
await errorLog.LogErrorAsync("accountName", "accountKey");
// Or synchronous
errorLog.LogError("accountName", "accountKey");
}
Pattern 2: With Caller Info
var errorLog = ErrorLogData.CreateWithCallerInfo(
"Payment gateway returned error code 500",
ErrorCodeTypes.Critical,
customerID: "CUST-789"
);
await errorLog.LogErrorAsync("accountName", "accountKey");
The CreateWithCallerInfo method automatically captures:
- Calling method name
- Source file path
- Line number
- Application name
- Stack trace
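Capture of this kind is typically done with .NET's standard caller-information attributes, which the compiler fills in at the call site. A hedged sketch of the mechanism (not the library's code):

```csharp
using System;
using System.IO;
using System.Runtime.CompilerServices;

// The compiler substitutes the calling member, source file, and line number
// for these optional parameters at each call site.
static (string Member, string File, int Line) CaptureCaller(
    [CallerMemberName] string member = "",
    [CallerFilePath] string file = "",
    [CallerLineNumber] int line = 0) => (member, file, line);

var info = CaptureCaller(); // no arguments needed; the call site is captured
Console.WriteLine($"{info.Member} at {Path.GetFileName(info.File)}:{info.Line}");
```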
Pattern 3: With Custom State and Formatter
try
{
ProcessOrder(order);
}
catch (Exception ex)
{
var errorLog = ErrorLogData.CreateWithCallerInfo(
state: new
{
OrderId = order.Id,
Amount = order.Total,
CustomerId = order.CustomerId,
PaymentMethod = order.PaymentMethod
},
exception: ex,
formatter: (state, ex) =>
$"Order {state.OrderId} for customer {state.CustomerId} " +
$"failed with amount {state.Amount:C}. Error: {ex?.Message}",
severity: ErrorCodeTypes.Error
);
await errorLog.LogErrorAsync("accountName", "accountKey");
}
Automatic Context Capture
// Automatically captures:
// - Application name (from entry assembly)
// - Function name (from stack trace)
// - Full stack trace
// - Inner exception details
// - Timestamp
// - Customer ID (if provided)
Cleanup Operations
// Clean logs older than 60 days
await ErrorLogData.ClearOldDataAsync(
"accountName",
"accountKey",
daysOld: 60
);
// Clean specific error types
await ErrorLogData.ClearOldDataByType(
"accountName",
"accountKey",
ErrorCodeTypes.Information,
daysOld: 30
);
// Clean by custom predicate
var dataAccess = new DataAccess<ErrorLogData>("accountName", "accountKey");
var oldWarnings = await dataAccess.GetCollectionAsync(x =>
x.ErrorSeverity == ErrorCodeTypes.Warning.ToString() &&
x.ErrorDate < DateTime.UtcNow.AddDays(-7)
);
await dataAccess.BatchUpdateListAsync(oldWarnings, TableOperationType.Delete);
Integration with ILogger
using ASCTableStorage.Logging;
// Configure in Program.cs
builder.Logging.ConfigureAzureTableLogging(
"accountName",
"accountKey",
options =>
{
options.MinimumLevel = LogLevel.Information;
options.BatchSize = 100;
options.FlushInterval = TimeSpan.FromSeconds(2);
options.ApplicationName = "MyApp";
options.IncludeScopes = true;
}
);
// Use ILogger as normal
public class MyService
{
private readonly ILogger<MyService> _logger;
public MyService(ILogger<MyService> logger)
{
_logger = logger;
}
public void ProcessData()
{
try
{
_logger.LogInformation("Starting data processing");
// ... process data
_logger.LogInformation("Data processing complete");
}
catch (Exception ex)
{
_logger.LogError(ex, "Data processing failed");
}
}
}
// Logs are automatically written to Azure Table Storage
Queue Management
Overview
Persistent, position-tracked queue system using StateList<T> and QueueData<T> for resumable batch processing.
Architecture
Application
↓
QueueData<T>
├── Name (PartitionKey): Category/group of queues
├── QueueID (RowKey): Unique queue identifier
├── Data: StateList<T> with position tracking
├── ProcessingStatus: "Not Started", "In Progress", "Completed"
└── PercentComplete: Progress percentage
↓
StateList<T>
├── Items: List<T> of data to process
├── CurrentIndex: Current position in list
├── HasNext/HasPrevious: Navigation properties
└── Navigation methods: First(), Last(), MoveNext(), MovePrevious()
↓
Azure Table Storage
StateList<T> Features
var list = new StateList<string>
{
Description = "My Processing List"
};
// Add items
list.Add("Item 1");
list.Add("Item 2");
list.Add("Item 3");
list.AddRange(new[] { "Item 4", "Item 5" });
// Navigation
list.First(); // Move to first, returns true if exists
Console.WriteLine(list.Current); // "Item 1"
list.MoveNext(); // Move forward
Console.WriteLine(list.Current); // "Item 2"
list.Last(); // Move to last
list.MovePrevious(); // Move backward
// Properties
bool hasNext = list.HasNext;
bool hasPrev = list.HasPrevious;
int currentPos = list.CurrentIndex; // -1 if not started
// Peek at current
var (data, index) = list.Peek;
// Search by string (case-insensitive)
var item = list["Item 2"];
// LINQ-style operations
list.Sort();
list.RemoveAll(x => x.StartsWith("Old"));
var filtered = list.Where(x => x.Contains("Important")).ToList();
// Add with position control
list.AddRange(newItems, setCurrentToFirst: true);
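The navigation contract demonstrated above (CurrentIndex starting at -1, MoveNext advancing and reporting success) can be captured in a stripped-down standalone sketch; this is not the library's StateList<T> implementation:

```csharp
using System;
using System.Collections.Generic;

var list = new NavList<string>();
list.Items.AddRange(new[] { "Item 1", "Item 2", "Item 3" });

Console.WriteLine(list.CurrentIndex); // -1 (not started)
while (list.MoveNext())
    Console.WriteLine($"{list.CurrentIndex}: {list.Current}");
Console.WriteLine(list.HasNext); // False

// Minimal position-tracked collection illustrating the same semantics.
class NavList<T>
{
    public List<T> Items { get; } = new();
    public int CurrentIndex { get; private set; } = -1;

    public T Current => Items[CurrentIndex];
    public bool HasNext => CurrentIndex < Items.Count - 1;
    public bool HasPrevious => CurrentIndex > 0;

    public bool MoveNext()
    {
        if (!HasNext) return false;
        CurrentIndex++;
        return true;
    }

    public bool MovePrevious()
    {
        if (!HasPrevious) return false;
        CurrentIndex--;
        return true;
    }
}
```

Persisting CurrentIndex alongside the items is what lets QueueData<T> resume after a crash rather than restarting from the first item.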
QueueData<T> Usage
Creating Queues
// From StateList
var orderList = new StateList<Order>();
orderList.AddRange(pendingOrders);
var queue = QueueData<Order>.CreateFromStateList(
orderList,
name: "OrderProcessing",
queueId: "queue-2024-01-15"
);
await queue.SaveQueueAsync("accountName", "accountKey");
// From List
var queue = QueueData<Order>.CreateFromList(
pendingOrders,
name: "OrderProcessing",
queueId: "batch-001"
);
await queue.SaveQueueAsync("accountName", "accountKey");
Processing Queues
// Retrieve queue
var queue = await QueueData<Order>.GetQueueAsync(
"batch-001",
"accountName",
"accountKey"
);
Console.WriteLine($"Queue Status: {queue.ProcessingStatus}");
Console.WriteLine($"Progress: {queue.PercentComplete:F1}%");
Console.WriteLine($"Position: {queue.LastProcessedIndex + 1}/{queue.TotalItemCount}");
// Process items with automatic position tracking
while (queue.Data.MoveNext())
{
var currentOrder = queue.Data.Current;
var currentIndex = queue.Data.CurrentIndex;
try
{
Console.WriteLine($"Processing order {currentIndex + 1}/{queue.TotalItemCount}");
// Process the order
await ProcessOrder(currentOrder);
// Save progress after each item (survives crashes)
await queue.SaveQueueAsync("accountName", "accountKey");
}
catch (Exception ex)
{
Console.WriteLine($"Failed to process order at index {currentIndex}: {ex.Message}");
// Save current position before aborting
await queue.SaveQueueAsync("accountName", "accountKey");
// Can resume from this position later
break;
}
}
if (queue.ProcessingStatus == "Completed")
{
Console.WriteLine("Queue processing complete!");
}
Batch Operations
// Get all queues in a category
var queues = await QueueData<Order>.GetQueuesAsync(
"OrderProcessing",
"accountName",
"accountKey"
);
Console.WriteLine($"Found {queues.Count} queues:");
foreach (var q in queues)
{
Console.WriteLine($" - {q.QueueID}: {q.ProcessingStatus} ({q.PercentComplete:F1}%)");
}
// Delete specific queues
var deletedCount = await QueueData<Order>.DeleteQueuesAsync(
new List<string> { "queue-001", "queue-002", "queue-003" },
"accountName",
"accountKey"
);
Console.WriteLine($"Deleted {deletedCount} queues");
// Delete queues matching condition
var deletedCount = await QueueData<Order>.DeleteQueuesMatchingAsync(
"accountName",
"accountKey",
q => q.PercentComplete == 100 &&
q.Timestamp < DateTime.UtcNow.AddDays(-7)
);
Console.WriteLine($"Cleaned up {deletedCount} completed queues");
// Delete and retrieve data (for migration/archival)
var dataLists = await QueueData<Order>.DeleteAndReturnAllAsync(
"OrderProcessing",
"accountName",
"accountKey"
);
// dataLists contains all StateList<Order> data from deleted queues
Resumable Processing Pattern
public async Task ProcessOrderQueue(string queueId)
{
var queue = await QueueData<Order>.GetQueueAsync(
queueId,
_accountName,
_accountKey
);
// Resume from the last saved position. When CurrentIndex is -1
// (not started), the first MoveNext() below advances to the first
// item, so no repositioning is needed; calling First() here would
// skip the first item.
while (queue.Data.HasNext)
{
queue.Data.MoveNext();
var order = queue.Data.Current;
try
{
await ProcessOrder(order);
// Checkpoint progress every N items
if (queue.Data.CurrentIndex % 10 == 0)
{
await queue.SaveQueueAsync(_accountName, _accountKey);
}
}
catch (Exception ex)
{
_logger.LogError(ex, $"Failed at order {queue.Data.CurrentIndex}");
// Save position and abort
await queue.SaveQueueAsync(_accountName, _accountKey);
throw;
}
}
// Final save
await queue.SaveQueueAsync(_accountName, _accountKey);
}
Queue Properties
public class QueueData<T>
{
public string? QueueID { get; set; } // RowKey
public string? Name { get; set; } // PartitionKey (category)
public StateList<T> Data { get; set; } // The actual queue data
// Computed properties
public string ProcessingStatus { get; } // "Empty", "Not Started", "In Progress", "Completed"
public double PercentComplete { get; } // 0-100
public int TotalItemCount { get; } // Total items in queue
public int LastProcessedIndex { get; } // Current position (-1 if not started)
}
Blob Storage
Overview
Azure Blob Storage integration with lambda expression support for tag-based querying.
AzureBlobs Class
public class AzureBlobs
{
public AzureBlobs(
string accountName,
string accountKey,
string containerName,
long defaultMaxFileSizeBytes = 5 * 1024 * 1024 // 5MB default
)
public string ContainerName { get; }
}
Upload Operations
var blobService = new AzureBlobs(
"accountName",
"accountKey",
"documents"
);
// Upload with tags and metadata
var uploadResult = await blobService.UploadAsync(
fileStream,
"invoice-2024.pdf",
"application/pdf",
tags: new Dictionary<string, string>
{
["Year"] = "2024",
["Department"] = "Finance",
["Status"] = "Approved",
["Priority"] = "High",
["Quarter"] = "Q1"
},
metadata: new Dictionary<string, string>
{
["UploadedBy"] = "john@example.com",
["InvoiceNumber"] = "INV-2024-001",
["ProjectCode"] = "PROJ-789"
}
);
// Upload from file path
await blobService.UploadAsync(
@"C:\Documents\report.pdf",
"report-2024.pdf",
tags: tags,
metadata: metadata
);
// Upload byte array
byte[] data = File.ReadAllBytes("document.pdf");
await blobService.UploadAsync(
data,
"document.pdf",
"application/pdf",
tags: tags
);
Query Operations
// Lambda expression query on blob tags
var blobs = await blobService.GetCollectionAsync(x =>
x.Tags["Year"] == "2024" &&
x.Tags["Department"] == "Finance" &&
x.Tags["Status"] == "Approved"
);
foreach (var blob in blobs)
{
Console.WriteLine($"{blob.Name} - {blob.Size} bytes - {blob.UploadDate}");
Console.WriteLine($" Tags: {string.Join(", ", blob.Tags.Select(t => $"{t.Key}={t.Value}"))}");
}
// Complex queries with client-side filtering
var largeFinanceBlobs = await blobService.GetCollectionAsync(x =>
x.Tags["Department"] == "Finance" && // Server-side
x.Size > 1024 * 1024 // Client-side
);
// Async enumerable for streaming results
await foreach (var blob in blobService.GetCollectionAsyncEnumerable(
x => x.Tags["Year"] == "2024",
prefix: "invoices/",
loadContent: false
))
{
Console.WriteLine($"Processing {blob.Name}...");
// Process immediately without loading all results into memory
}
Download Operations
// Download to stream
using var stream = new MemoryStream();
await blobService.DownloadAsync("document.pdf", stream);
// Download to byte array
byte[] content = await blobService.DownloadAsync("document.pdf");
// Download to file
await blobService.DownloadAsync("document.pdf", @"C:\Downloads\document.pdf");
// Get blob metadata without downloading content
var blobData = blobs.First();
Console.WriteLine($"Name: {blobData.Name}");
Console.WriteLine($"Size: {blobData.Size} bytes");
Console.WriteLine($"Content Type: {blobData.ContentType}");
Console.WriteLine($"Upload Date: {blobData.UploadDate}");
foreach (var tag in blobData.Tags)
{
Console.WriteLine($" Tag: {tag.Key} = {tag.Value}");
}
Tag Management
// Update blob tags
await blobService.UpdateTagsAsync(
"invoice-2024.pdf",
new Dictionary<string, string>
{
["Status"] = "Paid",
["PaymentDate"] = DateTime.UtcNow.ToString("yyyy-MM-dd"),
["PaymentMethod"] = "CreditCard"
}
);
// Get blob tags
var tags = await blobService.GetTagsAsync("invoice-2024.pdf");
foreach (var tag in tags)
{
Console.WriteLine($"{tag.Key}: {tag.Value}");
}
List Operations
// List all blobs in container
var allBlobs = await blobService.ListBlobsAsync();
// List with prefix filter
var invoices = await blobService.ListBlobsAsync(prefix: "invoices/2024/");
// List with content loaded
var blobsWithContent = await blobService.ListBlobsAsync(
prefix: "reports/",
loadContent: true
);
// Async enumerable listing
await foreach (var blob in blobService.ListBlobsAsyncEnumerable(
prefix: "large-files/",
loadContent: false,
cancellationToken: cancellationToken
))
{
Console.WriteLine($"{blob.Name}: {blob.Size} bytes");
}
Delete Operations
// Delete single blob
await blobService.DeleteAsync("old-document.pdf");
// Delete multiple blobs
await blobService.DeleteAsync(new[] { "doc1.pdf", "doc2.pdf", "doc3.pdf" });
BlobData Model
public class BlobData
{
public string Name { get; set; } // Blob name
public string? OriginalFilename { get; set; } // Original filename from metadata
public string? ContentType { get; set; } // MIME type
public long Size { get; set; } // Size in bytes
public DateTime UploadDate { get; set; } // Upload timestamp
public byte[]? Content { get; set; } // Blob content (if loaded)
public Dictionary<string, string> Tags { get; set; } // Blob tags (up to 10)
public Dictionary<string, string> Metadata { get; set; } // Blob metadata
}
Blob Tag Indexing Limits
Azure Blob Storage tag indexing has the following limits:
- Maximum 10 tags per blob
- Tag keys: 1-128 characters
- Tag values: 0-256 characters
- Case-sensitive
- Only string values supported
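These limits can be checked before upload; a minimal guard might look like this (`ValidateBlobTags` is a hypothetical helper, not part of the library's API):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical pre-upload guard for the tag-indexing limits above
// (not part of the ASCDataAccessLibrary API).
static void ValidateBlobTags(Dictionary<string, string> tags)
{
    if (tags.Count > 10)
        throw new ArgumentException("A blob supports at most 10 index tags.");

    foreach (var tag in tags)
    {
        if (tag.Key.Length < 1 || tag.Key.Length > 128)
            throw new ArgumentException($"Tag key '{tag.Key}' must be 1-128 characters.");
        if (tag.Value.Length > 256)
            throw new ArgumentException($"Tag value for '{tag.Key}' exceeds 256 characters.");
    }
}
```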
Advanced Features
String Chunking (>32KB Properties)
Azure Table Storage has a 32KB limit per property. ASCDataAccessLibrary automatically chunks larger strings.
public class Document : TableEntityBase, ITableExtra
{
public string? Title { get; set; }
public string? Content { get; set; } // Can exceed 32KB
public string TableReference => "Documents";
public string GetIDValue() => this.RowKey!;
}
var doc = new Document
{
PartitionKey = "DOC",
RowKey = Guid.NewGuid().ToString(),
Title = "Large Document",
Content = veryLongString // 200KB string
};
// Automatically chunked into Azure properties:
// - Content (first 32KB)
// - Content_pt_1 (next 32KB)
// - Content_pt_2 (next 32KB)
// - ... etc
await dataAccess.ManageDataAsync(doc);
// Retrieved and reassembled automatically
var retrieved = await dataAccess.GetRowObjectAsync(doc.RowKey);
Console.WriteLine($"Content length: {retrieved.Content.Length}"); // Full 200KB
Chunking Algorithm:
- Detect properties > 32KB during serialization
- Split into chunks of 31KB (safe margin)
- Store as: `PropertyName`, `PropertyName_pt_1`, `PropertyName_pt_2`, etc.
- On retrieval, automatically detect and reassemble chunks
- Performance: Minimal overhead due to caching
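The split step can be sketched as follows. This is a simplified illustration of the algorithm described above, not the library's actual implementation; `ChunkProperty` is a hypothetical helper name.

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of the chunking steps above (illustrative only).
// Splits an oversized string into segments named
// PropertyName, PropertyName_pt_1, PropertyName_pt_2, ...
static Dictionary<string, string> ChunkProperty(string propertyName, string value)
{
    const int chunkSize = 31 * 1024; // 31KB safe margin under the 32KB limit

    var chunks = new Dictionary<string, string>();
    for (int offset = 0, part = 0; offset < value.Length; offset += chunkSize, part++)
    {
        string key = part == 0 ? propertyName : $"{propertyName}_pt_{part}";
        chunks[key] = value.Substring(offset, Math.Min(chunkSize, value.Length - offset));
    }
    return chunks;
}
```

Reassembly is simply the inverse: read `PropertyName`, then keep appending `PropertyName_pt_1`, `PropertyName_pt_2`, and so on until no further part exists.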
Decimal Precision Preservation
Azure Table Storage doesn't natively support the decimal type, so ASCDataAccessLibrary stores decimals as strings to preserve full precision.
public class Transaction : TableEntityBase, ITableExtra
{
public decimal Amount { get; set; } // Preserved as string
public decimal InterestRate { get; set; } // Preserved as string
public string TableReference => "Transactions";
public string GetIDValue() => this.RowKey!;
}
var transaction = new Transaction
{
Amount = 1234.56789123456m, // Full precision preserved
InterestRate = 0.0425m // No double precision loss
};
await dataAccess.ManageDataAsync(transaction);
var retrieved = await dataAccess.GetRowObjectAsync(transaction.RowKey);
Console.WriteLine($"Amount: {retrieved.Amount}"); // 1234.56789123456 (exact)
Enum Handling
Enums are automatically serialized to/from strings.
public enum OrderStatus
{
Pending,
Processing,
Shipped,
Delivered,
Cancelled
}
public class Order : TableEntityBase, ITableExtra
{
public OrderStatus Status { get; set; }
public string TableReference => "Orders";
public string GetIDValue() => this.RowKey!;
}
var order = new Order { Status = OrderStatus.Processing };
await dataAccess.ManageDataAsync(order);
// Stored as: "Processing" (string)
// Retrieved as: OrderStatus.Processing (enum)
// Query by enum value
var processingOrders = await dataAccess.GetCollectionAsync(x =>
x.Status == OrderStatus.Processing
);
DateTime UTC Handling
All DateTime values are automatically converted to UTC.
public class Event : TableEntityBase, ITableExtra
{
public DateTime EventDate { get; set; }
public DateTime? OptionalDate { get; set; }
public string TableReference => "Events";
public string GetIDValue() => this.RowKey!;
}
var evt = new Event
{
EventDate = DateTime.Now, // Automatically converted to UTC
OptionalDate = DateTime.SpecifyKind(DateTime.Now, DateTimeKind.Local)
};
await dataAccess.ManageDataAsync(evt);
var retrieved = await dataAccess.GetRowObjectAsync(evt.RowKey);
Console.WriteLine($"Event Date Kind: {retrieved.EventDate.Kind}"); // Utc
Complex Object Serialization
Complex objects are automatically serialized to JSON.
public class Customer : TableEntityBase, ITableExtra
{
public string? Name { get; set; }
public Address? ShippingAddress { get; set; } // Complex object
public List<string>? Tags { get; set; } // Collection
public string TableReference => "Customers";
public string GetIDValue() => this.RowKey!;
}
public class Address
{
public string? Street { get; set; }
public string? City { get; set; }
public string? ZipCode { get; set; }
}
var customer = new Customer
{
Name = "John Doe",
ShippingAddress = new Address
{
Street = "123 Main St",
City = "New York",
ZipCode = "10001"
},
Tags = new List<string> { "VIP", "Priority" }
};
await dataAccess.ManageDataAsync(customer);
// Stored as JSON strings in Azure Table Storage
// Automatically deserialized on retrieval
var retrieved = await dataAccess.GetRowObjectAsync(customer.RowKey);
Console.WriteLine($"City: {retrieved.ShippingAddress.City}"); // "New York"
Performance & Optimization
Type Caching
ASCDataAccessLibrary uses aggressive type caching to minimize reflection overhead.
internal static class TableEntityTypeCache
{
// Property cache
private static readonly ConcurrentDictionary<Type, PropertyInfo[]> _writablePropertiesCache;
// Type check cache
private static readonly ConcurrentDictionary<Type, bool> _isDateTimeTypeCache;
// Property lookup cache
private static readonly ConcurrentDictionary<Type, Dictionary<string, PropertyInfo>> _propertyLookupCache;
}
Benefits:
- First serialization: ~50ms (reflection)
- Subsequent serializations: ~0.5ms (cached)
- 100x performance improvement for repeated operations
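The pattern behind these numbers is a straightforward get-or-add cache. In sketch form (illustrative only, not the library's internal code; `PropertyCacheSketch` is a hypothetical name):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Reflection;

// Illustrative get-or-add cache: reflection runs once per type,
// subsequent calls are plain dictionary lookups.
static class PropertyCacheSketch
{
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> _cache = new();

    public static PropertyInfo[] GetWritableProperties(Type type) =>
        _cache.GetOrAdd(type, t => t
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => p.CanWrite)
            .ToArray());
}
```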
Batch Operation Optimization
// Automatic partition key grouping
var customers = GetCustomers(); // Mixed partition keys
// Library automatically groups by PartitionKey
// Azure requirement: All items in batch must have same PartitionKey
var result = await dataAccess.BatchUpdateListAsync(customers);
// Behind the scenes:
// 1. Group by PartitionKey
// 2. Chunk each group into batches of 100 (Azure limit)
// 3. Execute batches in parallel
// 4. Aggregate results
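Steps 1 and 2 can be sketched in LINQ. This is an illustration of the grouping logic only; the library performs it internally, and `BuildBatches` is a hypothetical helper, not part of the API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of steps 1-2: group mixed entities by PartitionKey, then split
// each group into batches of at most 100 operations (Azure's limit).
static List<List<TEntity>> BuildBatches<TEntity>(
    IEnumerable<TEntity> items, Func<TEntity, string> partitionKey)
{
    return items
        .GroupBy(partitionKey)
        .SelectMany(group => group
            .Select((item, index) => (item, index))
            .GroupBy(x => x.index / 100, x => x.item) // chunks of 100
            .Select(batch => batch.ToList()))
        .ToList();
}
```

Each inner list then maps to one entity-group transaction; batches from different partitions can be executed in parallel.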
Pagination Best Practices
// DON'T: Load entire table
var allCustomers = await dataAccess.GetAllTableDataAsync(); // Can be millions!
// DO: Use pagination
string continuationToken = null;
do
{
var page = await dataAccess.GetPagedCollectionAsync(
pageSize: 100,
continuationToken: continuationToken
);
ProcessPage(page.Items);
continuationToken = page.ContinuationToken;
} while (!string.IsNullOrEmpty(continuationToken));
// BETTER: Use initial load + background loading pattern
var initialLoad = await dataAccess.GetInitialDataLoadAsync(initialLoadSize: 25);
DisplayData(initialLoad.Items); // Quick UI response
// Load rest in background
BackgroundLoadRemainingData(initialLoad.ContinuationToken);
Query Optimization Tips
// GOOD: Server-side filter
var activeCustomers = await dataAccess.GetCollectionAsync(x =>
x.Status == CustomerStatus.Active
);
// Generates: Status eq 'Active'
// Filtered on Azure side
// BAD: Client-side filter
var activeCustomers = await dataAccess.GetCollectionAsync(x =>
x.Status.ToString().ToLower() == "active"
);
// Forces full table scan + client-side filtering
// BEST: Combine PartitionKey with filter
var activeCustomers = await dataAccess.GetCollectionAsync(x =>
x.PartitionKey == "CUST-2024" &&
x.Status == CustomerStatus.Active
);
// Partition filter + secondary filter (highly optimized)
Connection Reuse
// DON'T: Create new instance per operation
for (int i = 0; i < 1000; i++)
{
var da = new DataAccess<Customer>("accountName", "accountKey");
await da.ManageDataAsync(customers[i]);
}
// DO: Reuse instance
var da = new DataAccess<Customer>("accountName", "accountKey");
for (int i = 0; i < 1000; i++)
{
await da.ManageDataAsync(customers[i]);
}
// BEST: Use batch operations
var da = new DataAccess<Customer>("accountName", "accountKey");
await da.BatchUpdateListAsync(customers); // Much faster!
Migration Guide
From Microsoft.Azure.Cosmos.Table SDK
ASCDataAccessLibrary v4.0 is built on the modern Azure.Data.Tables SDK, providing a migration path from the legacy SDK.
Key Differences
| Legacy SDK | ASCDataAccessLibrary v4.0 |
|---|---|
| `CloudStorageAccount` | Connection via account name/key |
| `CloudTableClient` | `DataAccess<T>` handles internally |
| `TableQuery<T>` | Lambda expressions |
| `TableOperation` | `TableOperationType` enum |
| `TableBatchOperation` | `BatchUpdateListAsync()` |
| Manual serialization | Automatic via `TableEntityBase` |
Migration Steps
Step 1: Update Entity Base Class
// OLD
using Microsoft.Azure.Cosmos.Table;
public class Customer : TableEntity
{
public string? CustomerName { get; set; }
}
// NEW
using ASCTableStorage.Models;
public class Customer : TableEntityBase, ITableExtra
{
public string? CustomerName { get; set; }
public string TableReference => "Customers";
public string GetIDValue() => this.RowKey!;
}
Step 2: Replace Connection Logic
// OLD
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("Customers");
// NEW
var dataAccess = new DataAccess<Customer>("accountName", "accountKey");
Step 3: Replace CRUD Operations
// OLD - Insert
TableOperation insertOp = TableOperation.Insert(customer);
await table.ExecuteAsync(insertOp);
// NEW - Insert
await dataAccess.ManageDataAsync(customer);
// OLD - Retrieve
TableOperation retrieveOp = TableOperation.Retrieve<Customer>("CUST", "123");
TableResult result = await table.ExecuteAsync(retrieveOp);
Customer customer = (Customer)result.Result;
// NEW - Retrieve
var customer = await dataAccess.GetRowObjectAsync("123");
// OLD - Query
TableQuery<Customer> query = new TableQuery<Customer>()
.Where(TableQuery.GenerateFilterCondition(
"Status",
QueryComparisons.Equal,
"Active"
));
var results = table.ExecuteQuery(query);
// NEW - Query
var results = await dataAccess.GetCollectionAsync(x => x.Status == "Active");
// OLD - Delete
TableOperation deleteOp = TableOperation.Delete(customer);
await table.ExecuteAsync(deleteOp);
// NEW - Delete
await dataAccess.ManageDataAsync(customer, TableOperationType.Delete);
Step 4: Replace Batch Operations
// OLD
TableBatchOperation batchOp = new TableBatchOperation();
foreach (var customer in customers)
{
batchOp.Insert(customer);
}
await table.ExecuteBatchAsync(batchOp);
// NEW
await dataAccess.BatchUpdateListAsync(customers);
Step 5: Update Query Syntax
// OLD - Complex query
string filter = TableQuery.CombineFilters(
TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "CUST"),
TableOperators.And,
TableQuery.GenerateFilterConditionForInt("Priority", QueryComparisons.GreaterThan, 5)
);
TableQuery<Customer> query = new TableQuery<Customer>().Where(filter);
// NEW - Lambda expression
var results = await dataAccess.GetCollectionAsync(x =>
x.PartitionKey == "CUST" && x.Priority > 5
);
API Reference
Namespaces
- `ASCTableStorage.Data` - Core data access
- `ASCTableStorage.Models` - Entity models
- `ASCTableStorage.Blobs` - Blob storage
- `ASCTableStorage.Sessions` - Session management
- `ASCTableStorage.Logging` - Error logging
- `ASCTableStorage.Common` - Utilities
DataAccess<T> Complete API
Constructors
public DataAccess(string accountName, string accountKey)
public DataAccess(TableOptions options)
Single Entity Operations
// Async
public Task ManageDataAsync(T obj, TableOperationType direction = InsertOrReplace)
public Task ManageDataAsync(object obj, TableOperationType direction = InsertOrReplace)
// Sync
public void ManageData(T obj, TableOperationType direction = InsertOrReplace)
Query Operations - Single Entity
// Async
public Task<T> GetRowObjectAsync(string rowKeyID)
public Task<T> GetRowObjectAsync(string fieldName, ComparisonTypes howToCompare, string fieldValue)
public Task<T> GetRowObjectAsync(Expression<Func<T, bool>> predicate)
// Sync
public T GetRowObject(string rowKeyID)
public T GetRowObject(Expression<Func<T, bool>> predicate)
Query Operations - Collection
// Async
public Task<List<T>> GetAllTableDataAsync()
public Task<List<T>> GetCollectionAsync(string partitionKeyID)
public Task<List<T>> GetCollectionAsync(Expression<Func<T, bool>> predicate)
public Task<List<T>> GetCollectionAsync(List<DBQueryItem> queryTerms, QueryCombineStyle combineStyle = QueryCombineStyle.and)
public Task<List<T>> GetCollectionByFilterAsync(string odataFilter)
// Sync
public List<T> GetCollection(string partitionKeyID)
public List<T> GetCollection(Expression<Func<T, bool>> predicate)
Pagination Operations
public Task<PagedResult<T>> GetPagedCollectionAsync(
int pageSize = 100,
string continuationToken = null,
string filter = null
)
public Task<PagedResult<T>> GetPagedCollectionAsync(
string partitionKeyID,
int pageSize = 100,
string continuationToken = null
)
public Task<PagedResult<T>> GetPagedCollectionAsync(
Expression<Func<T, bool>> predicate,
int pageSize = 100,
string continuationToken = null
)
public Task<PagedResult<T>> GetInitialDataLoadAsync(
int initialLoadSize = 100,
string filter = null
)
public Task<PagedResult<T>> GetInitialDataLoadAsync(
Expression<Func<T, bool>> predicate,
int initialLoadSize = 100
)
Batch Operations
// Async
public Task<BatchUpdateResult> BatchUpdateListAsync(
List<T> data,
TableOperationType direction = InsertOrReplace,
IProgress<BatchUpdateProgress> progressCallback = null
)
public Task<BatchUpdateResult> BatchUpdateListAsync(
List<DynamicEntity> data,
TableOperationType direction = InsertOrReplace,
IProgress<BatchUpdateProgress> progressCallback = null
)
// Sync
public bool BatchUpdateList(List<T> data, TableOperationType direction = InsertOrReplace)
public bool BatchUpdateList(List<DynamicEntity> data, TableOperationType direction = InsertOrReplace)
SessionManager Static API
// Initialization
public static void Initialize(string accountName, string accountKey, Action<SessionOptions> configure = null)
// Get/Set values
public static T GetValue<T>(string key, T defaultValue = default)
public static void SetValue<T>(string key, T value)
// Checks
public static bool ContainsKey(string key)
// Removal
public static void ClearValue(string key)
public static Task ClearSessionAsync()
// Persistence
public static Task CommitSessionAsync()
public static Task RefreshSessionAsync()
// Cleanup
public static Task ShutdownAsync()
ErrorLogData API
// Constructors
public ErrorLogData()
public ErrorLogData(Exception? ex, string errDescription, ErrorCodeTypes severity, string cID = "undefined")
// Factory methods
public static ErrorLogData CreateWithCallerInfo(
string message,
ErrorCodeTypes severity,
string customerID = "undefined",
[CallerMemberName] string memberName = "",
[CallerFilePath] string filePath = "",
[CallerLineNumber] int lineNumber = 0
)
public static ErrorLogData CreateWithCallerInfo<TState>(
TState state,
Exception? exception,
Func<TState, Exception?, string> formatter,
ErrorCodeTypes severity,
string customerID = "undefined",
[CallerMemberName] string memberName = "",
[CallerFilePath] string filePath = "",
[CallerLineNumber] int lineNumber = 0
)
// Logging
public async Task LogErrorAsync(string accountName, string accountKey)
public void LogError(string accountName, string accountKey)
// Cleanup
public static async Task ClearOldDataAsync(string accountName, string accountKey, int daysOld = 60)
public static async Task ClearOldDataByType(string accountName, string accountKey, ErrorCodeTypes severity, int daysOld = 60)
QueueData<T> API
// Factory methods
public static QueueData<T> CreateFromStateList(StateList<T> stateList, string name, string? queueId = null)
public static QueueData<T> CreateFromList(List<T> list, string name, string? queueId = null)
// Persistence
public void SaveQueue(string accountName, string accountKey)
public async Task SaveQueueAsync(string accountName, string accountKey)
// Retrieval
public static async Task<QueueData<T>> GetQueueAsync(string queueId, string accountName, string accountKey)
public static async Task<List<QueueData<T>>> GetQueuesAsync(string name, string accountName, string accountKey)
// Batch operations
public static async Task<int> DeleteQueuesAsync(List<string> queueIds, string accountName, string accountKey)
public static async Task<int> DeleteQueuesMatchingAsync(string accountName, string accountKey, Expression<Func<QueueData<T>, bool>> predicate)
public static async Task<List<StateList<T>>> DeleteAndReturnAllAsync(string name, string accountName, string accountKey)
// Properties
public string ProcessingStatus { get; }
public double PercentComplete { get; }
public int TotalItemCount { get; }
public int LastProcessedIndex { get; }
StateList<T> API
// Navigation
public bool First()
public bool Last()
public bool MoveNext()
public bool MovePrevious()
// Properties
public T Current { get; }
public int CurrentIndex { get; }
public bool HasNext { get; }
public bool HasPrevious { get; }
public (T data, int index) Peek { get; }
public int Count { get; }
public string? Description { get; set; }
public DateTime LastModified { get; }
// Indexers
public T this[int index] { get; set; }
public T? this[string searchValue] { get; }
// List operations
public void Add(T item)
public void AddRange(IEnumerable<T> items, bool setCurrentToFirst = false)
public void Insert(int index, T item)
public bool Remove(T item)
public void RemoveAt(int index)
public void Clear()
public void Sort()
public void Sort(Comparison<T> comparison)
public int RemoveAll(Predicate<T> match)
public List<T> FindAll(Predicate<T> match)
// LINQ support
public IEnumerable<T> Where(Func<T, bool> predicate)
public IEnumerable<TResult> Select<TResult>(Func<T, TResult> selector)
AzureBlobs API
// Constructor
public AzureBlobs(string accountName, string accountKey, string containerName, long defaultMaxFileSizeBytes = 5242880)
// Upload
public async Task<BlobData> UploadAsync(Stream content, string blobName, string contentType,
Dictionary<string, string>? tags = null, Dictionary<string, string>? metadata = null)
public async Task<BlobData> UploadAsync(string filePath, string blobName,
Dictionary<string, string>? tags = null, Dictionary<string, string>? metadata = null)
public async Task<BlobData> UploadAsync(byte[] content, string blobName, string contentType,
Dictionary<string, string>? tags = null, Dictionary<string, string>? metadata = null)
// Download
public async Task<byte[]> DownloadAsync(string blobName)
public async Task DownloadAsync(string blobName, Stream targetStream)
public async Task DownloadAsync(string blobName, string targetFilePath)
// Query
public async Task<List<BlobData>> GetCollectionAsync(Expression<Func<BlobData, bool>> predicate, string? prefix = null)
public async IAsyncEnumerable<BlobData> GetCollectionAsyncEnumerable(Expression<Func<BlobData, bool>> predicate,
string? prefix = null, bool loadContent = true, CancellationToken cancellationToken = default)
// List
public async Task<List<BlobData>> ListBlobsAsync(string? prefix = null, bool loadContent = false)
public async IAsyncEnumerable<BlobData> ListBlobsAsyncEnumerable(string? prefix, bool loadContent,
CancellationToken cancellationToken = default)
// Tags
public async Task UpdateTagsAsync(string blobName, Dictionary<string, string> tags)
public async Task<Dictionary<string, string>> GetTagsAsync(string blobName)
// Delete
public async Task DeleteAsync(string blobName)
public async Task DeleteAsync(IEnumerable<string> blobNames)
// Properties
public string ContainerName { get; }
Best Practices
Entity Design
DO: Use meaningful PartitionKey strategy
// GOOD: Logical grouping
public class Order : TableEntityBase, ITableExtra
{
public string? CustomerId { get; set; }
// PartitionKey groups orders by customer
public void SetKeys()
{
this.PartitionKey = $"CUST-{CustomerId}";
this.RowKey = Guid.NewGuid().ToString();
}
public string TableReference => "Orders";
public string GetIDValue() => this.RowKey!;
}
DON'T: Use single partition for everything
// BAD: All entities in one partition (limits scalability)
public class Order : TableEntityBase, ITableExtra
{
public void SetKeys()
{
this.PartitionKey = "ORDERS"; // Don't do this!
this.RowKey = Guid.NewGuid().ToString();
}
}
Query Design
DO: Leverage PartitionKey in queries
// GOOD: Partition + filter (fast)
var orders = await dataAccess.GetCollectionAsync(x =>
x.PartitionKey == "CUST-123" &&
x.OrderDate >= DateTime.UtcNow.AddDays(-30)
);
DON'T: Query without a PartitionKey when it can be avoided
// LESS OPTIMAL: Cross-partition scan
var orders = await dataAccess.GetCollectionAsync(x =>
x.OrderTotal > 1000
);
Batch Operations
DO: Use batch operations for multiple entities
// GOOD: Single batch operation
var orders = GetOrdersToUpdate(); // 50 orders
await dataAccess.BatchUpdateListAsync(orders);
DON'T: Loop over individual operations
// BAD: 50 separate round trips
foreach (var order in orders)
{
await dataAccess.ManageDataAsync(order);
}
Error Handling
DO: Log errors with context
try
{
await ProcessOrder(order);
}
catch (Exception ex)
{
var errorLog = ErrorLogData.CreateWithCallerInfo(
state: new { OrderId = order.Id, CustomerId = order.CustomerId },
exception: ex,
formatter: (state, e) => $"Order {state.OrderId} failed: {e?.Message}",
severity: ErrorCodeTypes.Error
);
await errorLog.LogErrorAsync(_accountName, _accountKey);
// Handle or rethrow
throw;
}
Session Management
DO: Use auto-commit for web apps
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.AutoCommitInterval = TimeSpan.FromMinutes(5);
options.EnableAutoCleanup = true;
});
DON'T: Manually commit after every change
// AVOID: Too frequent commits
SessionManager.SetValue("Counter", 1);
await SessionManager.CommitSessionAsync(); // Unnecessary
SessionManager.SetValue("Counter", 2);
await SessionManager.CommitSessionAsync(); // Unnecessary
Troubleshooting
Common Issues
Issue: Query returns entire table
Symptom:
var results = await dataAccess.GetCollectionAsync(x =>
x.CustomMethod() // Unsupported operation
);
// Returns millions of rows!
Cause: Lambda expression contains unsupported operations, forcing client-side processing.
Solution:
- Check supported operations in lambda expression documentation
- Use fail-safe protection (automatic in v4.0)
- Simplify query to use server-side operations
// FIXED: Use supported operations
var results = await dataAccess.GetCollectionAsync(x =>
x.Status == CustomerStatus.Active &&
x.Priority >= 5
);
Issue: Batch operation fails with "Entity group transactions not supported"
Symptom:
await dataAccess.BatchUpdateListAsync(mixedEntities);
// Exception: Entity group transactions not supported across partitions
Cause: Batch contains entities with different PartitionKeys.
Solution: The library automatically groups by PartitionKey, but ensure every entity has its PartitionKey set:
// Ensure PartitionKeys are set
foreach (var entity in entities)
{
if (string.IsNullOrEmpty(entity.PartitionKey))
{
entity.PartitionKey = "DEFAULT";
}
}
await dataAccess.BatchUpdateListAsync(entities);
Issue: "Entity too large" error
Symptom:
await dataAccess.ManageDataAsync(entity);
// Exception: Entity size exceeds 1MB
Cause: Total entity size (all properties combined) exceeds Azure's 1MB limit.
Solution:
- String properties > 32KB are automatically chunked
- For other large data, use blob storage
- Consider breaking entity into multiple related entities
// Store large binary data in blob storage
var blobService = new AzureBlobs("accountName", "accountKey", "documents");
var blobData = await blobService.UploadAsync(
largeContent,
$"doc-{entity.RowKey}.pdf",
"application/pdf"
);
// Reference blob in entity
entity.DocumentBlobName = blobData.Name;
await dataAccess.ManageDataAsync(entity);
Issue: Decimal values losing precision
Symptom:
entity.Amount = 1234.56789123456m;
await dataAccess.ManageDataAsync(entity);
var retrieved = await dataAccess.GetRowObjectAsync(entity.RowKey);
// retrieved.Amount = 1234.567891235 (lost precision)
Cause: Without string conversion, Azure Table Storage stores the value as a double, not a decimal.
Solution: ASCDataAccessLibrary v4.0 automatically preserves decimal precision by storing as string. Ensure you're using TableEntityBase as your base class.
Issue: Session not persisting after application restart
Symptom:
SessionManager.SetValue("UserData", data);
// Application restarts
var data = SessionManager.GetValue<string>("UserData"); // null
Cause: Session not committed before restart, or a process-scoped session ID was used.
Solution:
- Enable auto-commit
- Use file-based or HttpContext-based session IDs
- Manually commit before shutdown
SessionManager.Initialize("accountName", "accountKey", options =>
{
options.AutoCommitInterval = TimeSpan.FromMinutes(5);
options.IdStrategy = SessionIdStrategy.MachineAndUser; // Survives restarts
});
// Or manual commit before shutdown
await SessionManager.CommitSessionAsync();
await SessionManager.ShutdownAsync();
Issue: DateTime queries not returning the expected rows
Symptom:
```csharp
var orders = await dataAccess.GetCollectionAsync(x =>
    x.OrderDate > DateTime.Now.AddDays(-7)
);
// Returns unexpected results
```
Cause: Timezone mismatch: DateTime.Now is local time, while Table Storage stores timestamps in UTC.
Solution: Always use UTC for DateTime comparisons:
```csharp
var orders = await dataAccess.GetCollectionAsync(x =>
    x.OrderDate > DateTime.UtcNow.AddDays(-7)
);
```
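If your code receives DateTime values of mixed kinds, a small normalization helper keeps every comparison in UTC. This is a sketch, not a library API, and the policy for Unspecified values is an assumption you should match to how your entities were written:

```csharp
using System;

static class UtcHelper
{
    // Normalize any DateTime to UTC before using it in a filter, so the
    // comparison lines up with Table Storage values, which are UTC.
    public static DateTime ToUtc(DateTime value) => value.Kind switch
    {
        DateTimeKind.Utc   => value,
        DateTimeKind.Local => value.ToUniversalTime(),
        // Policy assumption: Unspecified values were written as UTC.
        _                  => DateTime.SpecifyKind(value, DateTimeKind.Utc),
    };
}

class Program
{
    static void Main()
    {
        DateTime cutoff = UtcHelper.ToUtc(DateTime.Now.AddDays(-7));
        Console.WriteLine(cutoff.Kind); // Utc
    }
}
```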
Debug Tips
Enable Debug Logging
```csharp
// Generated OData/Blob Tag filters are written to the debug output
System.Diagnostics.Debug.WriteLine("Filters will be logged to debug output");
// Example output:
// [OData Filter] (Status eq 'Active' and Priority ge 5)
// [Blob Tag Filter] ("Year" = '2024' AND "Department" = 'Finance')
```
Inspect Entity Serialization
```csharp
var entity = new Customer { /* ... */ };
var azureEntity = TableEntityAdapter.ToAzureTableEntity(entity);
foreach (var prop in azureEntity)
{
    Console.WriteLine($"{prop.Key}: {prop.Value}");
}
```
Test Queries Incrementally
```csharp
// Start simple
var results = await dataAccess.GetCollectionAsync(x => x.Status == "Active");

// Add one condition
results = await dataAccess.GetCollectionAsync(x =>
    x.Status == "Active" &&
    x.Priority >= 5
);

// Then another
results = await dataAccess.GetCollectionAsync(x =>
    x.Status == "Active" &&
    x.Priority >= 5 &&
    x.CreatedDate.Year == 2024
);
```
Appendix
Azure Table Storage Limits
| Limit | Value |
|---|---|
| Maximum entity size | 1 MB |
| Maximum property size | 64 KB (string: 32 K UTF-16 characters; binary: 64 KB) |
| Maximum properties per entity | 255 (including PartitionKey, RowKey, and Timestamp) |
| Maximum batch size | 100 operations, all in one partition |
| Maximum PartitionKey / RowKey length | 1 KB each |
| Maximum table name length | 63 characters |
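The key limits above can be enforced client-side before a write. This is a hypothetical pre-flight check, not part of the library; it measures the keys the way the service counts string size, as UTF-16 bytes:

```csharp
using System;
using System.Text;

static class KeyLimits
{
    const int MaxKeyBytes = 1024; // 1 KB per key, per the table above

    // Hypothetical helper: reject keys that would exceed the service limit.
    // Table Storage strings are UTF-16, so measure in UTF-16 bytes.
    public static void Validate(string partitionKey, string rowKey)
    {
        if (Encoding.Unicode.GetByteCount(partitionKey) > MaxKeyBytes)
            throw new ArgumentException("PartitionKey exceeds 1 KB.");
        if (Encoding.Unicode.GetByteCount(rowKey) > MaxKeyBytes)
            throw new ArgumentException("RowKey exceeds 1 KB.");
    }
}

class Program
{
    static void Main()
    {
        KeyLimits.Validate("CUSTOMER", "cust-001"); // passes silently
        Console.WriteLine("keys ok");
    }
}
```

Note that 1 KB of UTF-16 is only 512 characters, so long composite keys (e.g. GUID pairs with prefixes) are closer to the limit than the raw character count suggests.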
Azure Blob Storage Limits
| Limit | Value |
|---|---|
| Maximum blob size (block blob) | ~190.7 TiB (50,000 × 4000 MiB blocks) |
| Maximum block size | 4000 MiB |
| Maximum number of blocks | 50,000 |
| Maximum tags per blob | 10 |
| Maximum tag key length | 128 characters |
| Maximum tag value length | 256 characters |
Version History
v4.0.0 (Current)
- Migration to the `Azure.Data.Tables` SDK
- Hybrid query engine with expression rewriting
- Fail-safe query protection
- Enhanced session management with file-based persistence
- Improved error logging with ILogger integration
- Queue management with `StateList<T>`
- Blob storage lambda expression support
- Performance optimizations with type caching

v3.x
- Legacy `Microsoft.Azure.Cosmos.Table` SDK
- Basic lambda expression support
- Session management
- Error logging
License
MIT
Support
- GitHub Issues: [Repository URL]
- Documentation: [Documentation URL]
- Email: [Support email]
ASCDataAccessLibrary v4.0 - Enterprise Azure Table Storage & Blob Storage Library
© 2025 Answer Sales Calls Inc. All rights reserved.
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net9.0 is compatible. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 was computed. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
Dependencies (net9.0)
- Azure.Data.Tables (>= 12.11.0)
- Azure.Storage.Blobs (>= 12.24.0)
- Microsoft.AspNetCore.Http (>= 2.3.0)
- Microsoft.AspNetCore.Http.Abstractions (>= 2.3.0)
- Microsoft.Extensions.Configuration.FileExtensions (>= 9.0.0)
- Microsoft.Extensions.Configuration.Json (>= 9.0.0)
- Microsoft.Extensions.Hosting (>= 9.0.0)
- Microsoft.Extensions.Logging (>= 9.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 9.0.0)
- System.Configuration.ConfigurationManager (>= 9.0.0)
- Xabe.FFmpeg (>= 6.0.2)
- Xabe.FFmpeg.Downloader (>= 6.0.2)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
| Version | Downloads | Last Updated |
|---|---|---|
| 4.0.4 | 140 | 11/29/2025 |
| 4.0.3 | 127 | 11/29/2025 |
| 4.0.2 | 200 | 11/24/2025 |
| 4.0.1 | 214 | 11/4/2025 |
| 4.0.0 | 215 | 11/4/2025 |
| 3.1.0 | 257 | 8/28/2025 |
| 3.0.0 | 198 | 8/18/2025 |
| 2.5.0 | 169 | 8/17/2025 |
| 2.4.0 | 167 | 8/15/2025 |
| 2.3.0 | 208 | 8/14/2025 |
| 2.2.0 | 201 | 8/14/2025 |
| 2.1.0 | 200 | 8/14/2025 |
| 2.0.0 | 309 | 7/19/2025 |
| 1.0.4 | 223 | 6/30/2025 |
| 1.0.3 | 179 | 6/21/2025 |
| 1.0.2 | 217 | 6/18/2025 |
| 1.0.1 | 264 | 5/12/2025 |
| 1.0.0 | 163 | 5/10/2025 |
## **Release Notes - ASCDataAccessLibrary v4.0**
**BREAKING CHANGE:** Migrated to Azure.Data.Tables SDK (12.0+). Replaces legacy Microsoft.Azure.Cosmos.Table SDK.
Major bug fixes and new features focused on performance, reliability, and developer experience.
**Bug Fixes:**
- Fixed session cleanup issues with registered configurations (v3.1 issue resolved)
- Fixed DynamicEntity creation reliability with improved pattern-based key detection
- Fixed Lambda OR operations incorrectly converting to AND in queries
- Fixed batch operations not properly converting entities to Azure format
- Fixed null/empty string queries forcing full table scans
**New Features:**
- Hybrid Query Engine: Automatically splits lambda queries between server-side (Azure) and client-side processing for optimal performance
- Queue Management: New QueueData<T> and StateList<T> for resumable batch processing with position tracking
- Blob Storage Support: Lambda expression queries on Azure Blob Storage with tag filtering
- Enhanced Session Management: File-based persistence survives app restarts, multiple ID strategies (HttpContext, MachineAndUser, ProcessId, Custom)
- Fail-Safe Protection: Prevents accidental full table scans that could return millions of rows
**Improvements:**
- 40% faster serialization with modern SDK
- Better logging with full ILogger integration for Web, Desktop, and Console apps
- Improved batch operations with automatic partition key grouping
- Enhanced type caching for better performance
- More reliable decimal precision and DateTime UTC handling
See migration guide in documentation for upgrade path from v3.x.