FluentAI.NET
1.0.0
FLUENTAI.NET - Universal AI SDK for .NET
FluentAI.NET is a lightweight, provider-agnostic SDK that unifies access to multiple AI chat models under a single, clean API. Built for .NET developers who want to integrate AI capabilities without vendor lock-in or complex configuration.
✨ Key Features
✅ Provider Agnostic - Switch between OpenAI, Anthropic, Google, and HuggingFace with a single line of configuration
✅ Streaming Support - Real-time token-by-token responses for interactive experiences
✅ Built for Scale - Thread-safe, memory-efficient, with automatic retry logic
✅ DI Integration - First-class support for ASP.NET Core and modern .NET patterns
✅ Extensible - Add custom providers with minimal code
✅ Production Ready - Comprehensive error handling, resource management, observability
🚀 Supported Providers
- OpenAI (GPT-3.5, GPT-4, GPT-4o)
- Anthropic (Claude 3 Sonnet, Haiku, Opus)
- Google AI (Gemini Pro, Gemini Flash) [Coming Soon]
- HuggingFace (Transformers, Inference API) [Coming Soon]
- Local Models (Ollama, LM Studio, Custom APIs) [Coming Soon]
- Extensible architecture for any HTTP-based AI service
📦 Installation
```shell
# Core package
dotnet add package FluentAI.NET

# Provider packages (install as needed)
dotnet add package FluentAI.NET.OpenAI
dotnet add package FluentAI.NET.Anthropic
```
🎯 Quick Start
1. Configure Services (ASP.NET Core)
```csharp
var builder = WebApplication.CreateBuilder(args);

// Add FluentAI with providers
builder.Services
    .AddFluentAI()
    .AddOpenAI(config => config.ApiKey = "your-openai-key")
    .AddAnthropic(config => config.ApiKey = "your-anthropic-key")
    .UseDefaultProvider("OpenAI");

var app = builder.Build();
```
2. Use in Your Code
```csharp
public class ChatController : ControllerBase
{
    private readonly IChatModel _chatModel;

    public ChatController(IChatModel chatModel)
    {
        _chatModel = chatModel;
    }

    [HttpPost("chat")]
    public async Task<IActionResult> Chat([FromBody] string message)
    {
        var messages = new[]
        {
            new ChatMessage(ChatRole.User, message)
        };

        var response = await _chatModel.GetResponseAsync(messages);
        return Ok(response.Content);
    }

    [HttpPost("stream")]
    public async IAsyncEnumerable<string> StreamChat([FromBody] string message)
    {
        var messages = new[]
        {
            new ChatMessage(ChatRole.User, message)
        };

        await foreach (var token in _chatModel.StreamResponseAsync(messages))
        {
            yield return token;
        }
    }
}
```
3. Configuration-Based Setup
appsettings.json:
```json
{
  "AiSdk": {
    "DefaultProvider": "OpenAI"
  },
  "OpenAI": {
    "ApiKey": "your-key-here",
    "Model": "gpt-4",
    "MaxTokens": 1000
  },
  "Anthropic": {
    "ApiKey": "your-key-here",
    "Model": "claude-3-sonnet-20240229",
    "MaxTokens": 1000
  }
}
```
Program.cs:
```csharp
builder.Services
    .AddAiSdk(builder.Configuration)
    .AddOpenAiChatModel(builder.Configuration)
    .AddAnthropicChatModel(builder.Configuration);
```
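The "your-key-here" values above are placeholders. Because the registration extensions bind from IConfiguration, any standard .NET configuration source can supply real keys without committing them to appsettings.json — for example the user-secrets tool in development, or environment variables (where "__" maps to the ":" section separator) in production:

```shell
# Development: keep keys out of the repo with user secrets
dotnet user-secrets init
dotnet user-secrets set "OpenAI:ApiKey" "your-key-here"

# Production: environment variables; "__" stands in for ":" in section names
export OpenAI__ApiKey="your-key-here"
export Anthropic__ApiKey="your-key-here"
```

These values flow into the same configuration sections shown above, so no code changes are needed.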
🔧 Advanced Usage
Provider-Specific Options
OpenAI with custom options:
```csharp
var response = await chatModel.GetResponseAsync(messages, new OpenAiRequestOptions
{
    Temperature = 0.7f,
    MaxTokens = 500
});
```

Anthropic with a system prompt:
```csharp
var response = await chatModel.GetResponseAsync(messages, new AnthropicRequestOptions
{
    SystemPrompt = "You are a helpful assistant.",
    Temperature = 0.5f
});
```
Multiple Providers
```csharp
public class MultiProviderService
{
    private readonly IServiceProvider _serviceProvider;

    public MultiProviderService(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task<string> GetResponseFromProvider(string providerName, string message)
    {
        var chatModel = providerName.ToLower() switch
        {
            "openai" => _serviceProvider.GetRequiredService<OpenAiChatModel>(),
            "anthropic" => _serviceProvider.GetRequiredService<AnthropicChatModel>(),
            _ => throw new ArgumentException($"Provider {providerName} not supported")
        };

        var messages = new[] { new ChatMessage(ChatRole.User, message) };
        var response = await chatModel.GetResponseAsync(messages);
        return response.Content;
    }
}
```
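On .NET 8 and later, the hand-written switch above can also be expressed with keyed services from Microsoft.Extensions.DependencyInjection. This is a sketch under the assumption that you register the provider classes yourself; FluentAI's own extension methods may not register keyed variants:

```csharp
// Hypothetical keyed registrations (not part of FluentAI's documented API):
builder.Services.AddKeyedSingleton<IChatModel, OpenAiChatModel>("openai");
builder.Services.AddKeyedSingleton<IChatModel, AnthropicChatModel>("anthropic");

// Later, resolve an implementation by key instead of switching by hand:
var chatModel = serviceProvider.GetRequiredKeyedService<IChatModel>("anthropic");
```

This keeps provider selection declarative and avoids depending on concrete provider types at the call site.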
Error Handling
```csharp
try
{
    var response = await chatModel.GetResponseAsync(messages);
    return response.Content;
}
catch (AiSdkConfigurationException ex)
{
    // Configuration issues (missing API key, etc.)
    logger.LogError(ex, "Configuration error");
    throw;
}
catch (AiSdkException ex)
{
    // Provider-specific errors (rate limits, API errors)
    logger.LogError(ex, "AI service error");
    throw;
}
```
🏗️ Architecture
FluentAI.NET is built on a clean, extensible architecture:
- IChatModel - Core abstraction for all providers
- ChatModelBase - Base implementation with retry logic and validation
- Provider Implementations - OpenAI, Anthropic, and extensible for more
- Configuration - Strongly-typed options with validation
- DI Extensions - Fluent registration API
🔌 Extending with Custom Providers
```csharp
public class CustomChatModel : ChatModelBase
{
    public CustomChatModel(ILogger<CustomChatModel> logger) : base(logger) { }

    public override async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatRequestOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Implement your custom provider logic here.
        // Use base.ExecuteWithRetryAsync for retry logic and
        // base.ValidateMessages for input validation.
        throw new NotImplementedException();
    }

    public override async IAsyncEnumerable<string> StreamResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatRequestOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Implement streaming logic
        yield return "token";
    }
}
```
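Once implemented, a custom provider can be registered with plain Microsoft.Extensions.DependencyInjection (shown here against the IChatModel abstraction; the SDK may also offer its own fluent registration hook):

```csharp
// Standard DI registration; consumers depending on IChatModel
// will receive the custom implementation.
builder.Services.AddSingleton<IChatModel, CustomChatModel>();
```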
📖 API Reference
Core Types
- ChatMessage(ChatRole, string) - Represents a chat message
- ChatRole - User, Assistant, System
- ChatResponse - Complete response with usage info
- TokenUsage - Input/output token counts
- ChatRequestOptions - Base options for requests
Provider Options
- OpenAiRequestOptions - Temperature, MaxTokens, etc.
- AnthropicRequestOptions - SystemPrompt, Temperature, etc.
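As a quick illustration of how the core types fit together, here is a sketch based on the signatures listed above (the TokenUsage property names are assumptions, not confirmed API):

```csharp
var messages = new[]
{
    new ChatMessage(ChatRole.System, "You are a terse assistant."),
    new ChatMessage(ChatRole.User, "Summarize FluentAI.NET in one sentence.")
};

// GetResponseAsync returns a ChatResponse carrying the text and usage info.
ChatResponse response = await chatModel.GetResponseAsync(messages);
Console.WriteLine(response.Content);

// TokenUsage exposes input/output token counts; the exact property names
// below (InputTokens/OutputTokens) are assumed here for illustration.
// Console.WriteLine($"{response.Usage.InputTokens} in, {response.Usage.OutputTokens} out");
```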
🧪 Testing
Run the test suite:
```shell
dotnet test
```
The project includes comprehensive unit tests covering:
- Core abstractions and models
- Provider implementations
- Configuration validation
- Error handling scenarios
- Retry logic
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
🆘 Support
- 📖 Documentation
- 🐛 Issues
- 💬 Discussions
FluentAI.NET - Making AI integration in .NET simple, scalable, and vendor-agnostic.
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net8.0 is compatible. net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows, net9.0, net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, net9.0-windows, net10.0, net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, net10.0-windows were computed. |
Dependencies (net8.0)
- Azure.AI.OpenAI (>= 1.0.0-beta.17)
- Microsoft.Extensions.Configuration.Abstractions (>= 8.0.0)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 8.0.0)
- Microsoft.Extensions.Http (>= 8.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.0)
- Microsoft.Extensions.Options (>= 8.0.0)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 8.0.0)