RedisKit
A production-ready, enterprise-grade Redis library for .NET 9 with advanced caching, pub/sub, and streaming features.
Features
Core Features
- Caching: Generic Get, Set, Delete operations with TTL support
- Batch Operations: GetMany and SetMany for improved performance
- Key Prefixing: Support for cache key prefixes
- Pub/Sub: Type-safe publishing and subscribing with advanced pattern matching
- Streaming: Redis Streams support with consumer groups and retry mechanisms
- Multiple Serializers: JSON, MessagePack support
- Dependency Injection: Full support with .NET DI container
- High Performance Logging: Source generator based logging with EventId support
- Async/Await: Full async/await support with CancellationToken
Enterprise Features
- Circuit Breaker Pattern: Automatic failure detection and recovery
- Advanced Retry Strategies: Multiple backoff strategies (Exponential, Decorrelated Jitter, etc.)
- Health Monitoring: Automatic health checks with auto-reconnection
- Pattern Matching: Redis glob pattern support (*, ?, [abc], [^abc], [a-z]) (examples below)
- Memory Leak Prevention: Automatic cleanup of inactive handlers
- Statistics & Monitoring: Built-in metrics for subscriptions and connections
- High Performance: Optimized with concurrent collections and minimal allocations
- Thread Safety: All operations are thread-safe
Installation
Install the package via NuGet:
dotnet add package RedisKit
Quick Start
Minimal Setup
using RedisKit.Extensions;
using RedisKit.Interfaces;
var builder = WebApplication.CreateBuilder(args);
// Add Redis services with minimal configuration
builder.Services.AddRedisServices(options =>
{
options.ConnectionString = "localhost:6379";
});
var app = builder.Build();
// Use in your controllers or services
app.MapGet("/cache/{key}", async (string key, IRedisCacheService cache) =>
{
var value = await cache.GetAsync<string>(key);
return value ?? "Not found";
});
app.Run();
Getting Started - Hello Redis!
Your First Redis Cache
using RedisKit.Extensions;
using RedisKit.Interfaces;
// 1. Setup - Add to your Program.cs
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRedisServices(options =>
{
options.ConnectionString = "localhost:6379";
});
var app = builder.Build();
// 2. Simple String Cache
app.MapPost("/hello/{name}", async (string name, IRedisCacheService cache) =>
{
// Store a simple string
await cache.SetAsync($"greeting:{name}", $"Hello, {name}!", TimeSpan.FromMinutes(5));
return $"Greeting saved for {name}";
});
app.MapGet("/hello/{name}", async (string name, IRedisCacheService cache) =>
{
// Retrieve the string
var greeting = await cache.GetAsync<string>($"greeting:{name}");
return greeting ?? "No greeting found";
});
app.Run();
Counter Example - Increment Values
public class CounterService
{
private readonly IRedisCacheService _cache;
public CounterService(IRedisCacheService cache)
{
_cache = cache;
}
public async Task<int> IncrementVisitCountAsync(string page)
{
var key = $"visits:{page}";
// Get current count
var currentCount = await _cache.GetAsync<int?>(key) ?? 0;
// Increment and save
currentCount++;
await _cache.SetAsync(key, currentCount, TimeSpan.FromDays(30));
return currentCount;
}
}
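Note that the get-then-set above is not atomic: two concurrent requests can read the same value and lose an increment. This README does not show an increment API on IRedisCacheService, so the sketch below is an assumption rather than documented RedisKit usage; it drops down to StackExchange.Redis (which RedisKit depends on) and uses the server-side atomic INCR, assuming an IConnectionMultiplexer is registered in DI.
using StackExchange.Redis;

public class AtomicCounterService
{
    private readonly IConnectionMultiplexer _redis;

    public AtomicCounterService(IConnectionMultiplexer redis)
    {
        _redis = redis;
    }

    public async Task<long> IncrementVisitCountAsync(string page)
    {
        var db = _redis.GetDatabase();
        var key = $"visits:{page}";

        // INCR executes atomically on the Redis server, so concurrent callers never lose updates.
        var count = await db.StringIncrementAsync(key);

        // Keep the same 30-day lifetime as the example above.
        await db.KeyExpireAsync(key, TimeSpan.FromDays(30));
        return count;
    }
}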
Simple User Session
public class SessionService
{
private readonly IRedisCacheService _cache;
public SessionService(IRedisCacheService cache)
{
_cache = cache;
}
// Store user session
public async Task CreateSessionAsync(string sessionId, string userId, string userName)
{
var session = new UserSession
{
UserId = userId,
UserName = userName,
LoginTime = DateTime.UtcNow
};
// Session expires in 20 minutes
await _cache.SetAsync($"session:{sessionId}", session, TimeSpan.FromMinutes(20));
}
// Get user session
public async Task<UserSession?> GetSessionAsync(string sessionId)
{
return await _cache.GetAsync<UserSession>($"session:{sessionId}");
}
// Extend session
public async Task ExtendSessionAsync(string sessionId)
{
var session = await GetSessionAsync(sessionId);
if (session != null)
{
// Reset expiration to 20 minutes
await _cache.ExpireAsync($"session:{sessionId}", TimeSpan.FromMinutes(20));
}
}
}
public class UserSession
{
public string UserId { get; set; } = string.Empty;
public string UserName { get; set; } = string.Empty;
public DateTime LoginTime { get; set; }
}
Configuration
Basic Configuration
services.AddRedisServices(options =>
{
options.ConnectionString = "localhost:6379";
options.DefaultTtl = TimeSpan.FromHours(1);
options.CacheKeyPrefix = "myapp:";
options.Serializer = SerializerType.MessagePack; // or JSON
});
Advanced Configuration
services.AddRedisServices(options =>
{
options.ConnectionString = "localhost:6379";
options.DefaultTtl = TimeSpan.FromHours(1);
options.CacheKeyPrefix = "myapp:";
// Retry Configuration
options.RetryConfiguration = new RetryConfiguration
{
MaxAttempts = 3,
Strategy = BackoffStrategy.ExponentialWithJitter,
InitialDelay = TimeSpan.FromSeconds(1),
MaxDelay = TimeSpan.FromSeconds(30),
JitterFactor = 0.2 // 20% jitter
};
// Circuit Breaker
options.CircuitBreaker = new CircuitBreakerSettings
{
Enabled = true,
FailureThreshold = 5,
BreakDuration = TimeSpan.FromSeconds(30),
SuccessThreshold = 2
};
// Health Monitoring
options.HealthMonitoring = new HealthMonitoringSettings
{
Enabled = true,
CheckInterval = TimeSpan.FromSeconds(30),
AutoReconnect = true,
ConsecutiveFailuresThreshold = 3
};
// Connection Timeouts
options.TimeoutSettings = new ConnectionTimeoutSettings
{
ConnectTimeout = TimeSpan.FromSeconds(5),
SyncTimeout = TimeSpan.FromSeconds(5),
AsyncTimeout = TimeSpan.FromSeconds(5),
KeepAlive = TimeSpan.FromSeconds(60)
};
});
Basic Usage Examples
Simple Caching
public class ProductService
{
private readonly IRedisCacheService _cache;
public ProductService(IRedisCacheService cache)
{
_cache = cache;
}
public async Task<Product?> GetProductAsync(int productId)
{
var cacheKey = $"product:{productId}";
// Try to get from cache
var cached = await _cache.GetAsync<Product>(cacheKey);
if (cached != null)
return cached;
// Load from database
var product = await LoadFromDatabaseAsync(productId);
// Cache for 1 hour
if (product != null)
{
await _cache.SetAsync(cacheKey, product, TimeSpan.FromHours(1));
}
return product;
}
public async Task InvalidateProductAsync(int productId)
{
await _cache.DeleteAsync($"product:{productId}");
}
}
Basic Pub/Sub
public class NotificationService
{
private readonly IRedisPubSubService _pubSub;
private readonly ILogger<NotificationService> _logger;
public NotificationService(IRedisPubSubService pubSub, ILogger<NotificationService> logger)
{
_pubSub = pubSub;
_logger = logger;
}
// Publisher
public async Task SendNotificationAsync(string userId, string message)
{
var notification = new UserNotification
{
UserId = userId,
Message = message,
Timestamp = DateTime.UtcNow
};
await _pubSub.PublishAsync($"notifications:{userId}", notification);
}
// Subscriber
public async Task StartListeningAsync(string userId)
{
await _pubSub.SubscribeAsync<UserNotification>(
$"notifications:{userId}",
async (notification, ct) =>
{
_logger.LogInformation("Received notification for user {UserId}: {Message}",
notification.UserId, notification.Message);
// Process notification
await ProcessNotificationAsync(notification);
});
}
}
public class UserNotification
{
public string UserId { get; set; } = string.Empty;
public string Message { get; set; } = string.Empty;
public DateTime Timestamp { get; set; }
}
Batch Operations
public class BulkOperationService
{
private readonly IRedisCacheService _cache;
public BulkOperationService(IRedisCacheService cache)
{
_cache = cache;
}
public async Task<Dictionary<int, Product?>> GetProductsAsync(int[] productIds)
{
// Generate cache keys
var keys = productIds.Select(id => $"product:{id}");
// Get all products in one operation
var cached = await _cache.GetManyAsync<Product>(keys);
var result = new Dictionary<int, Product?>();
var missingIds = new List<int>();
// Check what we found in cache
foreach (var productId in productIds)
{
var key = $"product:{productId}";
if (cached.TryGetValue(key, out var product) && product != null)
{
result[productId] = product;
}
else
{
missingIds.Add(productId);
}
}
// Load missing from database
if (missingIds.Any())
{
var products = await LoadProductsFromDatabaseAsync(missingIds);
// Cache them
var toCache = new Dictionary<string, Product>();
foreach (var product in products)
{
result[product.Id] = product;
toCache[$"product:{product.Id}"] = product;
}
await _cache.SetManyAsync(toCache, TimeSpan.FromHours(1));
}
return result;
}
}
Advanced Usage Examples
Pattern-Based Subscriptions
public class GameEventService
{
private readonly IRedisPubSubService _pubSub;
private readonly ILogger<GameEventService> _logger;
private SubscriptionToken? _token;
public GameEventService(IRedisPubSubService pubSub, ILogger<GameEventService> logger)
{
_pubSub = pubSub;
_logger = logger;
}
public async Task StartMonitoringAsync()
{
// Subscribe to all game events using pattern
_token = await _pubSub.SubscribePatternAsync<GameEvent>(
"game:*:events",
async (gameEvent, ct) =>
{
_logger.LogInformation("Game {GameId} - Event: {EventType}",
gameEvent.GameId, gameEvent.EventType);
switch (gameEvent.EventType)
{
case "player_joined":
await HandlePlayerJoinedAsync(gameEvent);
break;
case "game_started":
await HandleGameStartedAsync(gameEvent);
break;
case "game_ended":
await HandleGameEndedAsync(gameEvent);
break;
}
});
// You can also subscribe with channel metadata
await _pubSub.SubscribePatternWithChannelAsync<GameEvent>(
"game:*:critical",
async (gameEvent, channel, ct) =>
{
// Extract game ID from channel name
var parts = channel.Split(':');
var gameId = parts[1];
_logger.LogCritical("Critical event in game {GameId}: {Message}",
gameId, gameEvent.Message);
await SendAlertAsync(gameId, gameEvent);
});
}
public async Task StopMonitoringAsync()
{
if (_token != null)
{
await _token.UnsubscribeAsync();
}
}
}
Redis Streams with Consumer Groups
public class OrderProcessingService : BackgroundService
{
private readonly IRedisStreamService _streams;
private readonly IRedisPubSubService _pubSub;
private readonly ILogger<OrderProcessingService> _logger;
public OrderProcessingService(IRedisStreamService streams, IRedisPubSubService pubSub, ILogger<OrderProcessingService> logger)
{
_streams = streams;
_pubSub = pubSub;
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
// Create consumer group
await _streams.CreateConsumerGroupAsync("orders", "order-processors");
while (!stoppingToken.IsCancellationRequested)
{
try
{
// Read messages from stream
var messages = await _streams.ReadGroupAsync<Order>(
"orders",
"order-processors",
"processor-1",
count: 10,
cancellationToken: stoppingToken);
foreach (var message in messages)
{
try
{
await ProcessOrderAsync(message.Data);
// Acknowledge message
await _streams.AcknowledgeAsync("orders", "order-processors", message.Id);
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to process order {OrderId}", message.Data?.OrderId);
// Message will be retried
}
}
// Process pending messages (retry failed ones)
var retryResult = await _streams.RetryPendingMessagesAsync<Order>(
"orders",
"order-processors",
"processor-1",
async (order) =>
{
await ProcessOrderAsync(order);
return true; // Success
},
cancellationToken: stoppingToken);
if (retryResult.FailureCount > 0)
{
_logger.LogWarning("Failed to process {Count} orders, moved to DLQ",
retryResult.DeadLetterCount);
}
}
catch (Exception ex)
{
_logger.LogError(ex, "Error in order processing loop");
await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
}
}
}
private async Task ProcessOrderAsync(Order? order)
{
if (order == null) return;
_logger.LogInformation("Processing order {OrderId}", order.OrderId);
// Process the order
await Task.Delay(100); // Simulate work
// Publish completion event
await _pubSub.PublishAsync($"orders:{order.OrderId}:completed", new OrderCompleted
{
OrderId = order.OrderId,
CompletedAt = DateTime.UtcNow
});
}
}
Cache-Aside Pattern with Statistics
public class CachedRepository<T> where T : class, IEntity
{
private readonly IRedisCacheService _cache;
private readonly ILogger<CachedRepository<T>> _logger;
private readonly string _entityName;
private long _hits = 0;
private long _misses = 0;
public CachedRepository(IRedisCacheService cache, ILogger<CachedRepository<T>> logger)
{
_cache = cache;
_logger = logger;
_entityName = typeof(T).Name.ToLower();
}
public async Task<T?> GetByIdAsync(string id, Func<Task<T?>> dataLoader)
{
var key = $"{_entityName}:{id}";
// Try cache first
var cached = await _cache.GetAsync<T>(key);
if (cached != null)
{
Interlocked.Increment(ref _hits);
return cached;
}
Interlocked.Increment(ref _misses);
// Load from source
var entity = await dataLoader();
if (entity != null)
{
// Cache for 15 minutes
await _cache.SetAsync(key, entity, TimeSpan.FromMinutes(15));
}
return entity;
}
public async Task<T> GetOrCreateAsync(string id, Func<Task<T>> factory)
{
var key = $"{_entityName}:{id}";
var cached = await _cache.GetAsync<T>(key);
if (cached != null)
{
Interlocked.Increment(ref _hits);
return cached;
}
Interlocked.Increment(ref _misses);
// Use distributed lock to prevent cache stampede
var lockKey = $"lock:{key}";
var lockAcquired = await _cache.SetAsync(
lockKey,
"locked",
TimeSpan.FromSeconds(30),
when: When.NotExists);
if (lockAcquired)
{
try
{
// Double-check after acquiring lock
cached = await _cache.GetAsync<T>(key);
if (cached != null)
return cached;
// Create new entity
var entity = await factory();
await _cache.SetAsync(key, entity, TimeSpan.FromMinutes(15));
return entity;
}
finally
{
await _cache.DeleteAsync(lockKey);
}
}
else
{
// Wait for other thread to populate cache
await Task.Delay(100);
return await GetByIdAsync(id, factory) ?? await factory();
}
}
public CacheStatistics GetStatistics()
{
var total = _hits + _misses;
return new CacheStatistics
{
Hits = _hits,
Misses = _misses,
HitRate = total > 0 ? (double)_hits / total : 0
};
}
}
public interface IEntity
{
string Id { get; }
}
public class CacheStatistics
{
public long Hits { get; set; }
public long Misses { get; set; }
public double HitRate { get; set; }
}
Custom Serializer Implementation
Creating a Custom Serializer
using RedisKit.Serialization;
using ProtoBuf;
// Custom Protobuf serializer
public class ProtobufRedisSerializer : IRedisSerializer
{
public string Name => "Protobuf";
public Task<byte[]> SerializeAsync<T>(T value, CancellationToken cancellationToken = default)
where T : class
{
if (value == null)
return Task.FromResult(Array.Empty<byte>());
using var stream = new MemoryStream();
Serializer.Serialize(stream, value);
return Task.FromResult(stream.ToArray());
}
public Task<T?> DeserializeAsync<T>(byte[] data, CancellationToken cancellationToken = default)
where T : class
{
if (data == null || data.Length == 0)
return Task.FromResult<T?>(null);
using var stream = new MemoryStream(data);
var result = Serializer.Deserialize<T>(stream);
return Task.FromResult<T?>(result);
}
public Task<object?> DeserializeAsync(byte[] data, Type type, CancellationToken cancellationToken = default)
{
if (data == null || data.Length == 0)
return Task.FromResult<object?>(null);
using var stream = new MemoryStream(data);
var result = Serializer.Deserialize(type, stream);
return Task.FromResult(result);
}
}
// Register custom serializer
public class CustomSerializerFactory : IRedisSerializerFactory
{
public IRedisSerializer Create(SerializerType type)
{
return type switch
{
SerializerType.Custom => new ProtobufRedisSerializer(),
_ => RedisSerializerFactory.Create(type)
};
}
}
Using Custom Serializer
// Option 1: Register globally
services.AddRedisServices(options =>
{
options.ConnectionString = "localhost:6379";
options.Serializer = SerializerType.Custom;
options.CustomSerializerFactory = new CustomSerializerFactory();
});
// Option 2: Use for specific service
services.AddSingleton<IRedisSerializer, ProtobufRedisSerializer>();
services.AddSingleton<IRedisCacheService>(provider =>
{
var database = provider.GetRequiredService<IDatabase>();
var logger = provider.GetRequiredService<ILogger<RedisCacheService>>();
var options = provider.GetRequiredService<IOptions<RedisOptions>>();
var serializer = provider.GetRequiredService<IRedisSerializer>();
return new RedisCacheService(database, logger, options.Value, serializer);
});
Compression Serializer Wrapper
using System.IO.Compression;
public class CompressedSerializer : IRedisSerializer
{
private readonly IRedisSerializer _innerSerializer;
private readonly CompressionLevel _compressionLevel;
public CompressedSerializer(
IRedisSerializer innerSerializer,
CompressionLevel compressionLevel = CompressionLevel.Optimal)
{
_innerSerializer = innerSerializer;
_compressionLevel = compressionLevel;
}
public string Name => $"Compressed_{_innerSerializer.Name}";
public async Task<byte[]> SerializeAsync<T>(T value, CancellationToken cancellationToken = default)
where T : class
{
var data = await _innerSerializer.SerializeAsync(value, cancellationToken);
using var output = new MemoryStream();
using (var gzip = new GZipStream(output, _compressionLevel))
{
await gzip.WriteAsync(data, 0, data.Length, cancellationToken);
}
return output.ToArray();
}
public async Task<T?> DeserializeAsync<T>(byte[] data, CancellationToken cancellationToken = default)
where T : class
{
using var input = new MemoryStream(data);
using var gzip = new GZipStream(input, CompressionMode.Decompress);
using var output = new MemoryStream();
await gzip.CopyToAsync(output, cancellationToken);
var decompressed = output.ToArray();
return await _innerSerializer.DeserializeAsync<T>(decompressed, cancellationToken);
}
public async Task<object?> DeserializeAsync(
byte[] data,
Type type,
CancellationToken cancellationToken = default)
{
using var input = new MemoryStream(data);
using var gzip = new GZipStream(input, CompressionMode.Decompress);
using var output = new MemoryStream();
await gzip.CopyToAsync(output, cancellationToken);
var decompressed = output.ToArray();
return await _innerSerializer.DeserializeAsync(decompressed, type, cancellationToken);
}
}
// Usage
services.AddSingleton<IRedisSerializer>(provider =>
{
var innerSerializer = RedisSerializerFactory.Create(SerializerType.MessagePack);
return new CompressedSerializer(innerSerializer, CompressionLevel.Fastest);
});
Dependency Injection
// In your Program.cs or Startup.cs:
var builder = WebApplication.CreateBuilder(args);
// Add Redis services with configuration from appsettings.json
builder.Services.Configure<RedisOptions>(
builder.Configuration.GetSection("Redis"));
builder.Services.AddRedisServices(options =>
{
builder.Configuration.GetSection("Redis").Bind(options);
});
// In your services:
public class UserService
{
private readonly IRedisCacheService _cache;
private readonly IRedisPubSubService _pubSub;
private readonly IRedisStreamService _stream;
private readonly ILogger<UserService> _logger;
public UserService(
IRedisCacheService cache,
IRedisPubSubService pubSub,
IRedisStreamService stream,
ILogger<UserService> logger)
{
_cache = cache;
_pubSub = pubSub;
_stream = stream;
_logger = logger;
}
public async Task<User?> GetUserAsync(string userId)
{
// Try cache first
var cached = await _cache.GetAsync<User>($"user:{userId}");
if (cached != null)
return cached;
// Load from database
var user = await LoadFromDatabaseAsync(userId);
// Cache for future requests
if (user != null)
{
await _cache.SetAsync($"user:{userId}", user, TimeSpan.FromHours(1));
// Publish update event
await _pubSub.PublishAsync("user-updates", new UserLoadedEvent
{
UserId = userId
});
}
return user;
}
}
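The binding above expects a "Redis" section in configuration. The keys shown below (ConnectionString, CacheKeyPrefix, DefaultTtl) match the options used earlier in this README; any other keys should be checked against RedisOptions. A minimal, self-contained sketch that builds the same section from an in-memory JSON string (in a real app the section would simply live in appsettings.json):
using System.Text;
using Microsoft.Extensions.Configuration;
using RedisKit.Extensions;

var json = """
{
  "Redis": {
    "ConnectionString": "localhost:6379",
    "CacheKeyPrefix": "myapp:",
    "DefaultTtl": "01:00:00"
  }
}
""";

var builder = WebApplication.CreateBuilder(args);

// Load the sketch configuration; an appsettings.json file with the same "Redis" section works identically.
builder.Configuration.AddJsonStream(new MemoryStream(Encoding.UTF8.GetBytes(json)));

builder.Services.AddRedisServices(options =>
{
    builder.Configuration.GetSection("Redis").Bind(options);
});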
Backoff Strategies
RedisKit supports multiple backoff strategies for retry operations:
- Fixed: Constant delay between retries
- Linear: Linear increase in delay
- Exponential: Exponential increase in delay
- ExponentialWithJitter: Exponential with random jitter to prevent thundering herd
- DecorrelatedJitter: AWS-recommended strategy with decorrelated jitter
options.RetryConfiguration = new RetryConfiguration
{
Strategy = BackoffStrategy.DecorrelatedJitter,
MaxAttempts = 5,
InitialDelay = TimeSpan.FromMilliseconds(100),
MaxDelay = TimeSpan.FromSeconds(10)
};
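RedisKit's exact delay formulas are internal to the library, but the strategies listed above commonly behave like the following illustrative sketch (the local enum simply mirrors the strategy names from the list; this is not RedisKit's implementation):
// Illustrative only: how each backoff strategy typically computes the delay for a given attempt.
internal static class BackoffIllustration
{
    internal enum Strategy { Fixed, Linear, Exponential, ExponentialWithJitter, DecorrelatedJitter }

    internal static TimeSpan ComputeDelay(
        Strategy strategy,
        int attempt,                 // 1-based attempt number
        TimeSpan initialDelay,
        TimeSpan maxDelay,
        double jitterFactor = 0.2,   // e.g. 20% jitter
        TimeSpan? previousDelay = null)
    {
        var baseMs = initialDelay.TotalMilliseconds;
        var ms = strategy switch
        {
            Strategy.Fixed => baseMs,
            Strategy.Linear => baseMs * attempt,
            Strategy.Exponential => baseMs * Math.Pow(2, attempt - 1),
            // Exponential delay nudged up or down by up to jitterFactor to avoid a thundering herd.
            Strategy.ExponentialWithJitter =>
                baseMs * Math.Pow(2, attempt - 1) * (1 + (Random.Shared.NextDouble() * 2 - 1) * jitterFactor),
            // AWS-style decorrelated jitter: a random value between the base delay and 3x the previous delay.
            Strategy.DecorrelatedJitter =>
                baseMs + Random.Shared.NextDouble() * ((previousDelay ?? initialDelay).TotalMilliseconds * 3 - baseMs),
            _ => baseMs
        };
        return TimeSpan.FromMilliseconds(Math.Min(ms, maxDelay.TotalMilliseconds));
    }
}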
Performance Tips & Best Practices
1. Connection Management
// ❌ DON'T: Create new connections for each operation
public async Task BadExample()
{
var connection = await ConnectionMultiplexer.ConnectAsync("localhost");
var db = connection.GetDatabase();
await db.StringSetAsync("key", "value");
connection.Dispose(); // Connection closed!
}
// ✅ DO: Use dependency injection and connection pooling
public class GoodExample
{
private readonly IRedisCacheService _cache; // Injected, pooled connection
public async Task SetValueAsync()
{
await _cache.SetAsync("key", "value");
}
}
2. Batch Operations for Better Performance
// ❌ DON'T: Multiple round trips
public async Task SlowApproach(string[] userIds)
{
var users = new List<User>();
foreach (var id in userIds)
{
var user = await _cache.GetAsync<User>($"user:{id}");
if (user != null) users.Add(user);
}
}
// ✅ DO: Single batch operation
public async Task FastApproach(string[] userIds)
{
var keys = userIds.Select(id => $"user:{id}");
var results = await _cache.GetManyAsync<User>(keys);
var users = results.Values.Where(u => u != null).ToList();
}
3. Big Key Handling
// ❌ DON'T: Store huge objects as single keys
public async Task BadBigKey()
{
var hugeList = new List<Item>(1_000_000); // 1 million items!
await _cache.SetAsync("huge:list", hugeList); // This blocks Redis!
}
// ✅ DO: Split large datasets
public async Task GoodBigKeyHandling()
{
var items = GetLargeDataset();
var chunks = items.Chunk(1000); // Split into chunks of 1000
var tasks = chunks.Select((chunk, index) =>
_cache.SetAsync($"items:chunk:{index}", chunk.ToList(), TimeSpan.FromHours(1))
);
await Task.WhenAll(tasks);
}
// ✅ DO: Use Redis Streams for large datasets
public async Task StreamApproach(List<Item> items)
{
foreach (var batch in items.Chunk(100))
{
foreach (var item in batch)
{
await _streamService.AddAsync("items:stream", item);
}
}
}
4. Pipeline Usage
// ❌ DON'T: Sequential operations
public async Task SlowSequential()
{
await _cache.SetAsync("key1", "value1");
await _cache.SetAsync("key2", "value2");
await _cache.SetAsync("key3", "value3");
// 3 round trips to Redis
}
// ✅ DO: Use batch/pipeline operations
public async Task FastPipeline()
{
var items = new Dictionary<string, string>
{
["key1"] = "value1",
["key2"] = "value2",
["key3"] = "value3"
};
await _cache.SetManyAsync(items, TimeSpan.FromHours(1));
// Single round trip!
}
5. Memory Optimization
// ✅ Use appropriate serializers
services.AddRedisServices(options =>
{
// MessagePack: Fastest and smallest
options.Serializer = SerializerType.MessagePack;
// JSON: Human readable, larger size
// options.Serializer = SerializerType.SystemTextJson;
});
// ✅ Compress large objects
public class CompressedCacheService
{
private readonly IRedisCacheService _cache;
public async Task SetCompressedAsync<T>(string key, T value) where T : class
{
if (value is string str && str.Length > 1000)
{
// Compress strings larger than 1KB
var compressed = Compress(str);
await _cache.SetAsync($"{key}:compressed", compressed);
}
else
{
await _cache.SetAsync(key, value);
}
}
}
6. Key Expiration Strategies
// ✅ Use sliding expiration for frequently accessed data
public async Task<T?> GetWithSlidingExpirationAsync<T>(string key) where T : class
{
var value = await _cache.GetAsync<T>(key);
if (value != null)
{
// Reset expiration on each access
await _cache.ExpireAsync(key, TimeSpan.FromMinutes(30));
}
return value;
}
// ✅ Use absolute expiration for time-sensitive data
public async Task SetDailyReportAsync(Report report)
{
var tomorrow = DateTime.UtcNow.Date.AddDays(1);
var ttl = tomorrow - DateTime.UtcNow;
await _cache.SetAsync($"report:{DateTime.UtcNow:yyyy-MM-dd}", report, ttl);
}
7. Avoid Hot Keys
// ❌ DON'T: Single key for global counter
public async Task IncrementGlobalCounter()
{
var count = await _cache.GetAsync<int>("global:counter");
await _cache.SetAsync("global:counter", count + 1);
// This key becomes a bottleneck!
}
// ✅ DO: Distribute load across multiple keys
public async Task IncrementDistributedCounter()
{
var shard = Random.Shared.Next(0, 10); // 10 shards
var key = $"counter:shard:{shard}";
var count = await _cache.GetAsync<int>(key);
await _cache.SetAsync(key, count + 1);
}
public async Task<int> GetTotalCount()
{
var tasks = Enumerable.Range(0, 10)
.Select(i => _cache.GetAsync<int>($"counter:shard:{i}"));
var counts = await Task.WhenAll(tasks);
return counts.Sum();
}
8. Circuit Breaker for Resilience
// ✅ Configure circuit breaker to prevent cascade failures
services.AddRedisServices(options =>
{
options.CircuitBreaker = new CircuitBreakerSettings
{
Enabled = true,
FailureThreshold = 5, // Open after 5 failures
BreakDuration = TimeSpan.FromSeconds(30), // Stay open for 30s
SuccessThreshold = 2 // Need 2 successes to close
};
});
9. Monitoring & Metrics
// ✅ Track cache hit rates
public class MetricsCacheService
{
private readonly IRedisCacheService _cache;
private readonly IMetrics _metrics;
public async Task<T?> GetWithMetricsAsync<T>(string key) where T : class
{
var value = await _cache.GetAsync<T>(key);
if (value != null)
_metrics.Increment("cache.hits");
else
_metrics.Increment("cache.misses");
return value;
}
}
10. Pub/Sub Performance
// ✅ Use pattern subscriptions wisely
public class EfficientPubSub
{
private readonly IRedisPubSubService _pubSub;
public async Task SubscribeEfficiently()
{
// Instead of subscribing to many individual channels
// Use pattern subscription
await _pubSub.SubscribePatternAsync<Event>(
"events:*", // Single pattern subscription
async (evt, ct) => await ProcessEventAsync(evt, ct)
);
}
}
Performance Benchmarks
Serializer Performance Comparison
Method | JSON (ns) | MessagePack (ns) | Speed Improvement | Memory Improvement |
---|---|---|---|---|
Small Object Serialize | 331.8 | 143.2 | 2.3x faster | 5.6x less memory |
Large Object Serialize | 3,569.1 | 1,940.7 | 1.8x faster | Similar memory |
Array Serialize (100 items) | 28,143.8 | 11,556.8 | 2.4x faster | 3.2x less memory |
Small Object Deserialize | 628.0 | 256.5 | 2.4x faster | 2.1x less memory |
Async Serialize | 355.9 | 173.8 | 2.0x faster | 2.8x less memory |
Async Deserialize | 823.8 | 290.0 | 2.8x faster | 2.0x less memory |
Recommendation: Use MessagePack for production workloads requiring high performance and low memory usage.
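If you switch to MessagePack, keep in mind that MessagePack-CSharp normally requires either annotated DTOs or a contractless resolver. Which resolver RedisKit configures is not stated here, so treat the attributes below as a hedged example rather than a requirement:
using MessagePack;

// Example DTO annotated for MessagePack-CSharp. If RedisKit configures a
// contractless resolver internally, these attributes may not be needed.
[MessagePackObject]
public class ProductDto
{
    [Key(0)] public int Id { get; set; }

    [Key(1)] public string Name { get; set; } = string.Empty;

    [Key(2)] public decimal Price { get; set; }
}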
Redis Operations Performance
Operation | Single Item | Batch (100 items) | Batch (1000 items) |
---|---|---|---|
Set | ~1ms | ~5ms | ~40ms |
Get | ~0.8ms | ~4ms | ~35ms |
Pub/Sub | ~0.5ms | N/A | N/A |
Stream Add | ~1.2ms | ~8ms | ~70ms |
Benchmarks were run against a local Redis instance; actual performance depends on network latency and Redis server specs.
Performance Considerations
- Connection Pooling: Connections are automatically pooled and reused
- Pipelining: Commands are automatically pipelined for better throughput
- Memory Efficiency: Uses ArrayPool and MemoryPool to minimize allocations
- Concurrent Operations: Thread-safe operations with minimal locking
- Circuit Breaker: Prevents cascading failures in distributed systems
- Automatic Cleanup: Inactive handlers are automatically cleaned up to prevent memory leaks
Testing
The library includes comprehensive unit tests with 100% coverage of critical paths:
dotnet test
# Results
Passed! - Failed: 0, Passed: 124, Skipped: 7, Total: 131
Requirements
- .NET 9.0 or higher
- Redis Server 5.0 or higher (6.0+ recommended for Streams support)
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Documentation
For more detailed documentation, please visit our Wiki.
Known Issues
- Stream service tests are currently skipped as they require a real Redis instance
- PUBSUB NUMSUB command returns local handler count only (StackExchange.Redis limitation)
Roadmap
- Redis Sentinel support
- Redis Cluster support
- Distributed locking primitives
- Geo-spatial operations
- Time-series data support
- OpenTelemetry integration
- Prometheus metrics export
Compatible Frameworks
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net9.0 is compatible. net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, net9.0-windows, net10.0, net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, and net10.0-windows were computed. |
Dependencies (net9.0)
- MessagePack (>= 3.1.4)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 9.0.8)
- Microsoft.Extensions.Logging.Abstractions (>= 9.0.8)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 9.0.8)
- StackExchange.Redis (>= 2.8.58)
- System.Text.Json (>= 9.0.8)
Release Notes
v1.0.1 - Updated package icon to match documentation branding