FluentAI.NET 1.0.5

.NET CLI:
dotnet add package FluentAI.NET --version 1.0.5

Package Manager (Visual Studio Package Manager Console; uses the NuGet module's Install-Package):
NuGet\Install-Package FluentAI.NET -Version 1.0.5

PackageReference (copy this XML node into the project file):
<PackageReference Include="FluentAI.NET" Version="1.0.5" />

Central Package Management (CPM) - version the package in the solution Directory.Packages.props file, and reference it without a version in the project file:
<PackageVersion Include="FluentAI.NET" Version="1.0.5" />
<PackageReference Include="FluentAI.NET" />

Paket CLI:
paket add FluentAI.NET --version 1.0.5

Script & Interactive (F# Interactive and Polyglot Notebooks):
#r "nuget: FluentAI.NET, 1.0.5"

C# file-based apps (.NET 10 preview 4 and later; place before any lines of code in the .cs file):
#:package FluentAI.NET@1.0.5

Cake Addin:
#addin nuget:?package=FluentAI.NET&version=1.0.5

Cake Tool:
#tool nuget:?package=FluentAI.NET&version=1.0.5

FLUENTAI.NET - Universal AI SDK for .NET


FluentAI.NET is a comprehensive, production-ready SDK that unifies access to multiple AI chat models under a single, elegant API. Built for .NET developers who want enterprise-grade AI capabilities without vendor lock-in or complex configuration.

✨ Key Features

🌟 Production-Ready Architecture

✅ Multi-Provider Support - OpenAI, Anthropic, Google AI with unified interface
✅ Enterprise Security - Input sanitization, content filtering, risk assessment
✅ Advanced Resilience - Rate limiting, automatic failover, circuit breakers
✅ Performance Optimized - Response caching, memory management, streaming support
✅ Observability Built-in - Comprehensive logging, metrics, health checks
✅ Dependency Injection - First-class support for modern .NET patterns

🔧 Developer Experience

✅ Simple Integration - Single interface for all providers
✅ Rich Configuration - Environment variables, appsettings.json, Azure Key Vault
✅ Comprehensive Examples - Working demos for all project types
✅ Extensive Documentation - API reference, integration guides, troubleshooting
✅ Strong Typing - Full IntelliSense support and compile-time safety
✅ Async/Await - Native async support with cancellation tokens
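The cancellation support called out above can be exercised directly. A minimal sketch, assuming GetResponseAsync accepts optional request options and a CancellationToken (the three-argument shape also used in the testing examples later in this README):

```csharp
// Sketch only: assumes GetResponseAsync(messages, options, cancellationToken).
// Cancel automatically if the provider takes longer than 30 seconds.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

var messages = new[] { new ChatMessage(ChatRole.User, "Summarize this document.") };

try
{
    var response = await _chatModel.GetResponseAsync(messages, null, cts.Token);
    Console.WriteLine(response.Content);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Request timed out and was cancelled.");
}
```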

🛡️ Security & Compliance

✅ Input Validation - Prompt injection detection and prevention
✅ Content Filtering - Configurable safety filters and risk assessment
✅ Secure Logging - Automatic redaction of sensitive data
✅ API Key Protection - Secure storage and rotation support
✅ GDPR Compliance - Data protection and privacy controls

🚀 Supported Providers

Provider    Capability
OpenAI      Text generation
Anthropic   Text generation
Google AI   Text generation

Extensible Architecture - Add custom providers with minimal code
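As a sketch of what a custom provider might look like: the IChatModel members below are inferred from the usage examples in this README (GetResponseAsync / StreamResponseAsync, and ChatResponse with Content and ModelId); the actual interface shape, the ChatRequestOptions type name, and the registration API are assumptions and may differ.

```csharp
// Hypothetical custom provider implementing the SDK's chat abstraction.
public class MyCustomProvider : IChatModel
{
    public Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatRequestOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Call your own model endpoint here.
        return Task.FromResult(new ChatResponse { Content = "...", ModelId = "my-model" });
    }

    public async IAsyncEnumerable<string> StreamResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatRequestOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Stream tokens from your endpoint here.
        yield return "...";
        await Task.CompletedTask;
    }
}

// A plain DI registration would be (illustrative):
// services.AddSingleton<IChatModel, MyCustomProvider>();
```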

📦 Installation

# Single package includes all providers - no additional dependencies needed
dotnet add package FluentAI.NET

Supported Platforms:

  • .NET 8.0+
  • Windows, Linux, macOS
  • Docker containers
  • Azure Functions, AWS Lambda
  • Blazor Server, Blazor WebAssembly

🎯 Quick Start

1. Set Up API Keys

# Environment Variables (Recommended)
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"

2. Configure Services

ASP.NET Core

var builder = WebApplication.CreateBuilder(args);

// Add FluentAI with automatic provider detection
builder.Services.AddAiSdk(builder.Configuration)
    .AddOpenAiChatModel(builder.Configuration)
    .AddAnthropicChatModel(builder.Configuration)
    .AddGoogleGeminiChatModel(builder.Configuration);

var app = builder.Build();

Console Application

var builder = Host.CreateDefaultBuilder(args)
    .ConfigureServices((context, services) =>
    {
        services.AddAiSdk(context.Configuration)
            .AddOpenAiChatModel(context.Configuration);
    });

using var host = builder.Build();

3. Configuration (appsettings.json)

{
  "AiSdk": {
    "DefaultProvider": "OpenAI",
    "Failover": {
      "PrimaryProvider": "OpenAI",
      "FallbackProvider": "Anthropic"
    }
  },
  "OpenAI": {
    "Model": "gpt-4",
    "MaxTokens": 2000,
    "RequestTimeout": "00:02:00",
    "PermitLimit": 100,
    "WindowInSeconds": 60
  },
  "Anthropic": {
    "Model": "claude-3-sonnet-20240229",
    "MaxTokens": 2000,
    "RequestTimeout": "00:02:00",
    "PermitLimit": 50,
    "WindowInSeconds": 60
  }
}
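Note that the quick start reads API keys from environment variables rather than appsettings.json. This works because the standard .NET configuration system layers providers: sources added later override earlier ones, and a variable such as OpenAI__Model maps to the key "OpenAI:Model". A minimal sketch using only Microsoft.Extensions.Configuration (not FluentAI-specific APIs):

```csharp
using Microsoft.Extensions.Configuration;

// Environment variables added after the JSON file override matching keys,
// e.g. OpenAI__Model=gpt-4o overrides "OpenAI:Model" from appsettings.json.
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

Console.WriteLine(config["OpenAI:Model"]);
```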

4. Use in Your Code

public class ChatController : ControllerBase
{
    private readonly IChatModel _chatModel;

    public ChatController(IChatModel chatModel)
    {
        _chatModel = chatModel;
    }

    [HttpPost("chat")]
    public async Task<IActionResult> Chat([FromBody] ChatRequest request)
    {
        var messages = new[]
        {
            new ChatMessage(ChatRole.System, "You are a helpful assistant."),
            new ChatMessage(ChatRole.User, request.Message)
        };

        try
        {
            var response = await _chatModel.GetResponseAsync(messages);
            return Ok(new { response = response.Content, model = response.ModelId });
        }
        catch (AiSdkRateLimitException)
        {
            return StatusCode(429, "Rate limit exceeded. Please try again later.");
        }
        catch (AiSdkException ex)
        {
            return BadRequest($"AI service error: {ex.Message}");
        }
    }

    [HttpPost("stream")]
    public async IAsyncEnumerable<string> StreamChat([FromBody] ChatRequest request)
    {
        var messages = new[] { new ChatMessage(ChatRole.User, request.Message) };
        
        await foreach (var token in _chatModel.StreamResponseAsync(messages))
        {
            yield return token;
        }
    }
}

🔧 Advanced Usage

Multi-Provider with Automatic Failover

// Configuration enables automatic failover
{
  "AiSdk": {
    "Failover": {
      "PrimaryProvider": "OpenAI",
      "FallbackProvider": "Anthropic"
    }
  }
}

// Transparent failover - no code changes needed
var response = await _chatModel.GetResponseAsync(messages);
// Uses OpenAI first, automatically falls back to Anthropic on errors

Provider-Specific Options

// OpenAI with advanced options
var openAiOptions = new OpenAiRequestOptions
{
    Temperature = 0.8f,
    MaxTokens = 1500,
    TopP = 0.9f,
    FrequencyPenalty = 0.1f
};

var response = await _chatModel.GetResponseAsync(messages, openAiOptions);

// Anthropic with system prompt
var anthropicOptions = new AnthropicRequestOptions
{
    SystemPrompt = "You are an expert software architect.",
    Temperature = 0.7f,
    MaxTokens = 2000
};

Security Features

public class SecureChatService
{
    private readonly IChatModel _chatModel;
    private readonly IInputSanitizer _sanitizer;

    public async Task<string> ProcessSecurelyAsync(string userInput)
    {
        // Security validation
        if (!_sanitizer.IsContentSafe(userInput))
            throw new SecurityException("Unsafe content detected");

        // Risk assessment
        var risk = _sanitizer.AssessRisk(userInput);
        if (risk.RiskLevel >= SecurityRiskLevel.High)
            throw new SecurityException($"High risk content: {string.Join(", ", risk.DetectedConcerns)}");

        // Sanitize input
        var sanitizedInput = _sanitizer.SanitizeContent(userInput);
        
        var messages = new[] { new ChatMessage(ChatRole.User, sanitizedInput) };
        var response = await _chatModel.GetResponseAsync(messages);
        
        return response.Content;
    }
}

Performance Optimization

public class PerformantChatService
{
    private readonly IChatModel _chatModel;
    private readonly IResponseCache _cache;
    private readonly IPerformanceMonitor _monitor;

    public async Task<ChatResponse> GetCachedResponseAsync(IEnumerable<ChatMessage> messages)
    {
        // Check cache first
        var cachedResponse = await _cache.GetAsync(messages);
        if (cachedResponse != null)
            return cachedResponse;

        // Monitor performance
        using var operation = _monitor.StartOperation("ChatCompletion");
        
        var response = await _chatModel.GetResponseAsync(messages);
        
        // Cache successful responses
        await _cache.SetAsync(messages, null, response, TimeSpan.FromMinutes(30));
        
        // Record metrics
        _monitor.RecordMetric("ResponseLength", response.Content.Length);
        _monitor.IncrementCounter("RequestsProcessed");
        
        return response;
    }
}

Resilience and Error Handling

public class ResilientChatService
{
    private readonly IChatModel _chatModel;
    private readonly ILogger<ResilientChatService> _logger;

    public async Task<string> GetResponseWithRetryAsync(IEnumerable<ChatMessage> messages)
    {
        // Retry with exponential backoff, built with the Polly library
        var retryPolicy = Policy
            .Handle<AiSdkRateLimitException>()
            .Or<HttpRequestException>()
            .WaitAndRetryAsync(
                retryCount: 3,
                sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)),
                onRetry: (outcome, timespan, retryCount, context) =>
                {
                    _logger.LogWarning("Retry {RetryCount} after {Delay}ms", retryCount, timespan.TotalMilliseconds);
                });

        return await retryPolicy.ExecuteAsync(async () =>
        {
            var response = await _chatModel.GetResponseAsync(messages);
            return response.Content;
        });
    }
}
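Beyond retries, the circuit breakers mentioned under Advanced Resilience can also be applied at the call site with Polly. A sketch (the exception types handled here are illustrative):

```csharp
// Open the circuit after 5 consecutive failures; stay open for 30 seconds.
var circuitBreaker = Policy
    .Handle<HttpRequestException>()
    .Or<AiSdkException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 5,
        durationOfBreak: TimeSpan.FromSeconds(30),
        onBreak: (ex, delay) => Console.WriteLine($"Circuit open for {delay}."),
        onReset: () => Console.WriteLine("Circuit closed."));

// While the circuit is open, calls fail fast with BrokenCircuitException
// instead of hammering an unhealthy provider.
var response = await circuitBreaker.ExecuteAsync(
    () => _chatModel.GetResponseAsync(messages));
```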

๐Ÿ—๏ธ Architecture

FluentAI.NET follows clean architecture principles with clear separation of concerns:

┌─────────────────────────────────────────────────────────────┐
│                    Application Layer                        │
│  ┌─────────────────┐ ┌─────────────────┐ ┌──────────────┐   │
│  │   Controllers   │ │    Services     │ │  Components  │   │
│  └─────────────────┘ └─────────────────┘ └──────────────┘   │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│                  FluentAI.NET Abstractions                  │
│  ┌─────────────────┐ ┌─────────────────┐ ┌──────────────┐   │
│  │   IChatModel    │ │ IInputSanitizer │ │ IPerformance │   │
│  │                 │ │                 │ │   Monitor    │   │
│  └─────────────────┘ └─────────────────┘ └──────────────┘   │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│                       Provider Layer                        │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌────────┐ │
│  │   OpenAI    │ │  Anthropic  │ │   Google    │ │ Custom │ │
│  │  Provider   │ │   Provider  │ │   Provider  │ │Provider│ │
│  └─────────────┘ └─────────────┘ └─────────────┘ └────────┘ │
└─────────────────────────────────────────────────────────────┘

Key Components:

  • Abstractions Layer: Core interfaces and models
  • Provider Layer: AI service implementations
  • Configuration Layer: Strongly-typed configuration
  • Security Layer: Input validation and risk assessment
  • Performance Layer: Caching, monitoring, and optimization
  • Extensions Layer: Dependency injection and fluent configuration

📖 Documentation

📚 Core Documentation

🛠️ Integration Guides

🔧 Advanced Topics

  • Performance Optimization - Caching, streaming, memory management
  • Security Implementation - Input validation, content filtering
  • Error Handling - Resilience patterns, retry logic
  • Testing Strategies - Unit tests, integration tests, mocking

🧪 Examples & Demos

🎮 Interactive Console Demo

Explore all SDK features with our comprehensive console application:

cd Examples/ConsoleApp
dotnet run

Features Demonstrated:

  • 💬 Basic chat completion with multiple providers
  • 🌊 Real-time streaming responses
  • 🔄 Provider comparison and failover
  • 🔒 Security features and input sanitization
  • ⚡ Performance monitoring and caching
  • ⚙️ Configuration management
  • 🚨 Error handling and resilience patterns

📝 Code Examples

Simple Chat

var messages = new[] { new ChatMessage(ChatRole.User, "Hello!") };
var response = await chatModel.GetResponseAsync(messages);
Console.WriteLine(response.Content);

Streaming Chat

await foreach (var token in chatModel.StreamResponseAsync(messages))
{
    Console.Write(token);
}

Multiple Providers

// Configuration-based provider switching
var openAIResponse = await openAIModel.GetResponseAsync(messages);
var anthropicResponse = await anthropicModel.GetResponseAsync(messages);

// Compare responses or use as fallback

🛠️ Integration Guides

Quick Integration Matrix

Project Type      Complexity      Setup Time   Guide
Console App       ⭐ Simple        5 minutes    📖 Guide
ASP.NET Core      ⭐⭐ Medium       15 minutes   📖 Guide
Blazor Server     ⭐⭐ Medium       20 minutes   📖 Guide
Blazor WASM       ⭐⭐⭐ Advanced    30 minutes   📖 Guide
Class Library     ⭐ Simple        10 minutes   📖 Guide
Azure Functions   ⭐⭐ Medium       15 minutes   📖 Guide

Configuration Patterns

All integration guides include:

  • ✅ Step-by-step setup instructions
  • ✅ Complete working code examples
  • ✅ Configuration best practices
  • ✅ Security considerations
  • ✅ Performance optimization
  • ✅ Testing strategies
  • ✅ Troubleshooting tips

🔐 Security

Built-in Security Features

// Input sanitization
var sanitizer = serviceProvider.GetRequiredService<IInputSanitizer>();
var safeContent = sanitizer.SanitizeContent(userInput);
var riskLevel = sanitizer.AssessRisk(userInput);

// Secure logging (automatic API key redaction)
_logger.LogInformation("Processing request from {UserId}", userId);
// API keys are automatically redacted from logs

// Content filtering
if (riskLevel.RiskLevel >= SecurityRiskLevel.High)
{
    throw new SecurityException("High-risk content detected");
}

Security Best Practices

  • 🔑 API Key Management: Environment variables, Azure Key Vault integration
  • 🛡️ Input Validation: Prompt injection detection and prevention
  • 🔍 Content Filtering: Configurable safety filters and risk assessment
  • 📝 Secure Logging: Automatic redaction of sensitive information
  • 🚫 Rate Limiting: Prevent abuse and DoS attacks
  • 🔍 Compliance: GDPR, CCPA, SOC 2 compliance support
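The Azure Key Vault integration listed above is typically wired through the standard Azure configuration provider rather than any FluentAI-specific API. A sketch using the Azure.Extensions.AspNetCore.Configuration.Secrets and Azure.Identity packages (the vault URI is a placeholder):

```csharp
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Secrets named e.g. "OpenAI--ApiKey" surface as the configuration
// key "OpenAI:ApiKey", so the SDK's configuration binding picks them up.
builder.Configuration.AddAzureKeyVault(
    new Uri("https://<your-vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());
```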

⚡ Performance

Performance Features

  • Response Caching: Intelligent caching with configurable TTL
  • Streaming Support: Real-time token streaming for better UX
  • Memory Management: Efficient memory usage and cleanup
  • Connection Pooling: Optimized HTTP client management
  • Metrics Collection: Built-in performance monitoring

Benchmarks

Feature            Performance   Memory Usage   Throughput
Basic Chat         ~500ms        5MB            100 req/min
Streaming          ~50ms TTFB    3MB            200 req/min
Cached Response    ~10ms         2MB            1000 req/min
Batch Processing   ~2s/10 req    15MB           300 req/min

Benchmarks vary based on provider, model, and network conditions.

Performance Monitoring

// Built-in performance monitoring
var monitor = serviceProvider.GetRequiredService<IPerformanceMonitor>();

using var operation = monitor.StartOperation("ChatCompletion");
var response = await chatModel.GetResponseAsync(messages);

// Automatic metrics collection
// - Request duration
// - Token usage
// - Success/failure rates
// - Memory usage

🧪 Testing

Test Suite Overview

  • 235+ Tests with 90%+ code coverage
  • Unit Tests: Fast, isolated tests for all components
  • Integration Tests: Real provider testing with API keys
  • Performance Tests: Benchmarking and load testing
  • Security Tests: Vulnerability and penetration testing

Testing Your Integration

// Unit testing with mocks
[Test]
public async Task GetResponse_ShouldReturnExpectedContent()
{
    var mockChatModel = new Mock<IChatModel>();
    mockChatModel.Setup(x => x.GetResponseAsync(It.IsAny<IEnumerable<ChatMessage>>(), null, default))
        .ReturnsAsync(new ChatResponse { Content = "Test response" });

    var service = new ChatService(mockChatModel.Object);
    var result = await service.GetResponseAsync("Test message");

    Assert.AreEqual("Test response", result);
}

// Integration testing
[Test, Category("Integration")]
public async Task RealProvider_ShouldWork()
{
    var services = new ServiceCollection();
    services.AddAiSdk(Configuration).AddOpenAiChatModel(Configuration);
    
    using var provider = services.BuildServiceProvider();
    var chatModel = provider.GetRequiredService<IChatModel>();
    
    var response = await chatModel.GetResponseAsync(testMessages);
    Assert.IsNotEmpty(response.Content);
}

Running Tests

# Run all tests
dotnet test

# Run only unit tests
dotnet test --filter Category!=Integration

# Run with coverage
dotnet test --collect:"XPlat Code Coverage"

# Run performance tests
dotnet test --filter Category=Performance

🤝 Contributing

We welcome contributions! See our Contributing Guide for:

  • 🐛 Bug Reports - Help us identify and fix issues
  • ✨ Feature Requests - Suggest new capabilities
  • 📖 Documentation - Improve guides and examples
  • 🧪 Testing - Add test coverage and scenarios
  • 🔧 Code Contributions - Submit pull requests

Quick Start for Contributors

# Fork and clone
git clone https://github.com/YOUR_USERNAME/fluentai-dotnet.git
cd fluentai-dotnet

# Build and test
dotnet restore
dotnet build
dotnet test

# Make your changes and submit a PR

Development Requirements:

  • .NET 8.0 SDK
  • API keys for testing (optional)
  • IDE with C# support

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Key Points:

  • ✅ Commercial use allowed
  • ✅ Modification and distribution allowed
  • ✅ Private use allowed
  • ❌ No warranty provided
  • ❌ No liability assumed

Compatible and computed target frameworks

net8.0 is compatible. net9.0, net10.0, and the platform-specific targets (android, browser, ios, maccatalyst, macos, tvos, windows) for net8.0 through net10.0 were computed.


Version   Downloads   Last Updated
1.0.5     32          8/15/2025
1.0.4     93          8/14/2025
1.0.3     97          8/13/2025
1.0.2     94          8/12/2025
1.0.1     91          8/10/2025
1.0.0     68          8/9/2025

Initial release with OpenAI and Anthropic provider support, streaming capabilities, and comprehensive DI integration.