AchieveAi.LmDotnetTools.LmConfig 1.0.2

.NET CLI
dotnet add package AchieveAi.LmDotnetTools.LmConfig --version 1.0.2

Package Manager Console (Visual Studio; uses the NuGet module's version of Install-Package)
NuGet\Install-Package AchieveAi.LmDotnetTools.LmConfig -Version 1.0.2

PackageReference (copy this XML node into the project file)
<PackageReference Include="AchieveAi.LmDotnetTools.LmConfig" Version="1.0.2" />

Central Package Management (version the package in Directory.Packages.props, then reference it from the project file)
<PackageVersion Include="AchieveAi.LmDotnetTools.LmConfig" Version="1.0.2" />
<PackageReference Include="AchieveAi.LmDotnetTools.LmConfig" />

Paket
paket add AchieveAi.LmDotnetTools.LmConfig --version 1.0.2

F# Interactive / Polyglot Notebooks
#r "nuget: AchieveAi.LmDotnetTools.LmConfig, 1.0.2"

C# file-based apps (.NET 10 preview 4 and later; place before any lines of code)
#:package AchieveAi.LmDotnetTools.LmConfig@1.0.2

Cake Addin
#addin nuget:?package=AchieveAi.LmDotnetTools.LmConfig&version=1.0.2

Cake Tool
#tool nuget:?package=AchieveAi.LmDotnetTools.LmConfig&version=1.0.2

LmDotnet - Large Language Model SDK for .NET

LmDotnet is a comprehensive .NET SDK for working with large language models (LLMs) from multiple providers including OpenAI, Anthropic, and OpenRouter.

Features

  • Multi-Provider Support: Unified interface for OpenAI, Anthropic, OpenRouter, and more
  • Streaming & Synchronous: Support for both streaming and traditional request/response patterns
  • Middleware Pipeline: Extensible middleware for logging, caching, function calls, and usage tracking
  • Type Safety: Strongly-typed models and responses
  • Performance Optimized: Built for high-throughput production scenarios
  • Comprehensive Testing: Extensive test coverage with mocking utilities

Quick Start

Installation

dotnet add package AchieveAi.LmDotnetTools.LmCore
dotnet add package AchieveAi.LmDotnetTools.OpenAIProvider
dotnet add package AchieveAi.LmDotnetTools.AnthropicProvider

Basic Usage

using AchieveAi.LmDotnetTools.LmCore.Agents;
using AchieveAi.LmDotnetTools.LmCore.Messages;
using AchieveAi.LmDotnetTools.OpenAIProvider.Agents;

// Create an agent (assumes `openClient` is an already-configured client from OpenAIProvider)
var agent = new OpenClientAgent("MyAgent", openClient);

// Send a message
var messages = new[] { new TextMessage { Role = Role.User, Text = "Hello!" } };
var response = await agent.GenerateReplyAsync(messages);
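
The streaming pattern mentioned in the feature list works the same way; a minimal sketch, assuming the agent implements the IStreamingAgent interface used in the middleware example later in this README (the exact overloads are assumptions):

```csharp
// Streaming variant (sketch): GenerateReplyStreamingAsync is the method name
// used by the middleware example in this README; parameter list is assumed.
var stream = await agent.GenerateReplyStreamingAsync(
    messages, new GenerateReplyOptions(), CancellationToken.None);

await foreach (var message in stream)
{
    // Print text fragments as they arrive; other message types can be ignored here
    if (message is TextMessage text)
        Console.Write(text.Text);
}
```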

OpenRouter Usage Tracking

LmDotnet includes comprehensive usage tracking for OpenRouter, providing automatic token and cost monitoring.

Key Features

  • Automatic Integration: Seamlessly activated when using OpenRouter as provider
  • Inline Usage Preferred: Uses usage data directly from API responses when available
  • Intelligent Fallback: Falls back to generation endpoint lookup when needed
  • Performance Optimized: In-memory caching with configurable TTL
  • Zero Configuration: Works out-of-the-box with sensible defaults
  • Comprehensive Logging: Structured logging for monitoring and debugging

Quick Setup

# Environment variables
export ENABLE_USAGE_MIDDLEWARE=true
export OPENROUTER_API_KEY=sk-or-your-api-key-here
export USAGE_CACHE_TTL_SEC=300

// Usage data is automatically provided in a dedicated UsageMessage
var options = new GenerateReplyOptions { ModelId = "openai/gpt-4" };
var messages = await agent.GenerateReplyAsync(userMessages, options);

// Access usage information from UsageMessage
var usageMessage = messages.OfType<UsageMessage>().LastOrDefault();
if (usageMessage != null)
{
    var usage = usageMessage.Usage;
    Console.WriteLine($"Tokens: {usage.TotalTokens}, Cost: ${usage.TotalCost:F4}");
}
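
Per-request usage can also be rolled up across a batch of calls; a sketch assuming only the UsageMessage shape shown above (`Usage.TotalTokens`, `Usage.TotalCost`; the fields' numeric types are assumptions):

```csharp
// Aggregate usage across several replies (allReplies is a collection of
// message lists returned by GenerateReplyAsync; name is illustrative).
var totalTokens = 0;
var totalCost = 0.0; // numeric types of the Usage fields are assumptions

foreach (var reply in allReplies)
{
    var usage = reply.OfType<UsageMessage>().LastOrDefault()?.Usage;
    if (usage is null) continue; // not every reply carries usage data

    totalTokens += usage.TotalTokens;
    totalCost += (double)usage.TotalCost; // cast in case TotalCost is decimal
}

Console.WriteLine($"Batch total: {totalTokens} tokens, ${totalCost:F4}");
```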

Comprehensive Documentation

For detailed configuration, troubleshooting, and examples:

📖 Complete OpenRouter Usage Tracking Guide

Project Structure

LmDotnet/
├── src/
│   ├── LmCore/              # Core interfaces and models
│   ├── OpenAIProvider/      # OpenAI and OpenRouter provider
│   ├── AnthropicProvider/   # Anthropic Claude provider
│   ├── LmConfig/           # Configuration and agent factories
│   ├── LmEmbeddings/       # Embedding services
│   └── LmTestUtils/        # Testing utilities
├── tests/                  # Comprehensive test suite
└── docs/                   # Additional documentation

Supported Providers

| Provider   | Models                          | Streaming       | Function Calls  | Usage Tracking  |
|------------|---------------------------------|-----------------|-----------------|-----------------|
| OpenAI     | GPT-3.5, GPT-4, GPT-4 Turbo     | ✅              | ✅              | ✅              |
| OpenRouter | 100+ models                     | ✅              | ✅              | ✅ Enhanced     |
| Anthropic  | Claude 3 (Sonnet, Haiku, Opus)  | ✅              | ✅              | ✅              |
| Custom     | Extensible                      | 🔧 Configurable | 🔧 Configurable | 🔧 Configurable |

Advanced Features

Middleware Pipeline

// Custom middleware for logging, caching, etc.
public class CustomMiddleware : IStreamingMiddleware
{
    public async Task<IAsyncEnumerable<IMessage>> InvokeStreamingAsync(
        MiddlewareContext context, IStreamingAgent agent, CancellationToken cancellationToken)
    {
        // Pre-processing: inspect or rewrite context.Messages here
        var stream = await agent.GenerateReplyStreamingAsync(context.Messages, context.Options, cancellationToken);
        // Post-processing: wrap the returned stream to observe or transform messages
        return stream;
    }
}
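
How a middleware is attached to an agent is not shown in this README; a hypothetical wiring sketch (`MiddlewareAgent` is an illustrative name, not a confirmed type in this SDK):

```csharp
// Hypothetical: wrap an inner agent with the custom middleware above.
// The real registration API may differ; treat this purely as a sketch.
var wrapped = new MiddlewareAgent(innerAgent, new CustomMiddleware());

// Calls now flow through CustomMiddleware before reaching the provider.
var reply = await wrapped.GenerateReplyAsync(messages);
```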

Function Calling

var functions = new[]
{
    new FunctionDefinition
    {
        Name = "get_weather",
        Description = "Get current weather",
        Parameters = new { location = new { type = "string" } }
    }
};

var options = new GenerateReplyOptions { Functions = functions };
var response = await agent.GenerateReplyAsync(messages, options);

Performance Monitoring

Built-in performance tracking and telemetry:

// Performance metrics automatically collected
var metrics = performanceTracker.GetMetrics();
Console.WriteLine($"Average latency: {metrics.AverageLatency}ms");
Console.WriteLine($"Token throughput: {metrics.TokensPerSecond}");

Configuration

Environment Variables

| Variable                | Default | Description                                      |
|-------------------------|---------|--------------------------------------------------|
| ENABLE_USAGE_MIDDLEWARE | true    | Enable OpenRouter usage tracking                 |
| OPENROUTER_API_KEY      | —       | OpenRouter API key (required for usage tracking) |
| USAGE_CACHE_TTL_SEC     | 300     | Usage cache TTL in seconds                       |
| ENABLE_INLINE_USAGE     | true    | Prefer inline usage over fallback                |

Dependency Injection

// In Program.cs or Startup.cs
services.AddLmDotnet(configuration);
services.ValidateOpenRouterUsageConfiguration(configuration);
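
Since AddLmDotnet takes an IConfiguration, the same settings could plausibly live in appsettings.json instead of environment variables; a hypothetical fragment (section and key names are assumptions, not confirmed by this README — the environment variable names above are the documented form):

```json
{
  "LmDotnet": {
    "EnableUsageMiddleware": true,
    "OpenRouterApiKey": "sk-or-your-api-key-here",
    "UsageCacheTtlSec": 300,
    "EnableInlineUsage": true
  }
}
```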

Testing

Comprehensive testing utilities included:

// Mock HTTP responses
var responseHandler = FakeHttpMessageHandler.CreateOpenAIResponseHandler("Hello!");
var httpClient = new HttpClient(responseHandler);

// Mock streaming responses
var streamHandler = FakeHttpMessageHandler.CreateSseStreamHandler(events);

// Performance testing
var agent = new TestAgent();
var metrics = await PerformanceTestHelper.MeasureLatency(agent, messages);

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass
  5. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.


Built with ❤️ for the .NET community

Compatible and additional computed target framework versions

.NET: net8.0 and net9.0 are compatible. The platform-specific variants of net8.0 and net9.0 (android, browser, ios, maccatalyst, macos, tvos, windows), plus net10.0 and its platform variants, were computed.

Learn more about Target Frameworks and .NET Standard.

NuGet packages (1)

Showing the top 1 NuGet packages that depend on AchieveAi.LmDotnetTools.LmConfig:

AchieveAi.LmDotnetTools.Misc — Miscellaneous utilities and helper functions for the LmDotnetTools library ecosystem.

GitHub repositories

This package is not used by any popular GitHub repositories.

| Version | Downloads | Last Updated |
|---------|-----------|--------------|
| 1.0.10  | 0         | 8/20/2025    |
| 1.0.8   | 10        | 8/18/2025    |
| 1.0.7   | 10        | 8/18/2025    |
| 1.0.6   | 17        | 8/16/2025    |
| 1.0.5   | 17        | 8/16/2025    |
| 1.0.4   | 67        | 8/15/2025    |
| 1.0.3   | 97        | 8/12/2025    |
| 1.0.2   | 170       | 8/5/2025     |
| 1.0.1   | 168       | 8/5/2025     |
| 1.0.0   | 25        | 8/1/2025     |

See CHANGELOG.md for detailed release notes.