Rapid.AI.Ollama.Framework
1.2.5
dotnet add package Rapid.AI.Ollama.Framework --version 1.2.5
NuGet\Install-Package Rapid.AI.Ollama.Framework -Version 1.2.5
<PackageReference Include="Rapid.AI.Ollama.Framework" Version="1.2.5" />
<PackageVersion Include="Rapid.AI.Ollama.Framework" Version="1.2.5" />
<PackageReference Include="Rapid.AI.Ollama.Framework" />
paket add Rapid.AI.Ollama.Framework --version 1.2.5
#r "nuget: Rapid.AI.Ollama.Framework, 1.2.5"
#:package Rapid.AI.Ollama.Framework@1.2.5
#addin nuget:?package=Rapid.AI.Ollama.Framework&version=1.2.5
#tool nuget:?package=Rapid.AI.Ollama.Framework&version=1.2.5
Rapid.AI.Ollama.Framework
Rapid.AI.Ollama.Framework is a lightweight C# client library that allows developers to interact with locally running Ollama models. It supports both stateless prompt generation and contextual multi-turn chat conversations using the Ollama REST API.
Features
- Stateless prompt generation using /api/generate
- Context-aware chat with conversation history using /api/chat
- Simple and easy-to-integrate C# API
- Supports streaming output for prompt generation
Getting Started
Prerequisites
- .NET 6 or later
- Ollama running locally (default endpoint http://localhost:11434); a connectivity check is sketched just after this list
- A downloaded model (e.g., llama3, llama3.2:1b, etc.)
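If you want to confirm Ollama is reachable before wiring up the library, a minimal check (not part of this package) against Ollama's standard GET /api/tags endpoint looks roughly like this:

using System;
using System.Net.Http;

// Not part of the library: a quick sketch to verify Ollama is running locally.
// GET /api/tags returns the list of models installed on the local Ollama instance.
using var http = new HttpClient();
try
{
    string tags = await http.GetStringAsync("http://localhost:11434/api/tags");
    Console.WriteLine("Ollama is reachable. Installed models: " + tags);
}
catch (HttpRequestException ex)
{
    Console.WriteLine("Ollama does not appear to be running: " + ex.Message);
}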
Usage
1. Generate Prompt Response
using Rapid.AI.Ollama.Framework;
string result = OllamaClient.Generate("http://localhost:11434/api/generate", "What is quantum physics?", "llama3.2:1b");
Console.WriteLine(result);
This uses the /api/generate endpoint with streaming enabled, and returns a stateless response for the given prompt.
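For context, the sketch below shows roughly what such a streaming call looks like against the raw Ollama HTTP API; it illustrates the newline-delimited JSON the endpoint emits and is not the framework's internal implementation:

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Sketch of a raw streaming request to Ollama's /api/generate (illustration only).
using var http = new HttpClient();
string json = JsonSerializer.Serialize(new { model = "llama3.2:1b", prompt = "What is quantum physics?", stream = true });
using var request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:11434/api/generate")
{
    Content = new StringContent(json, Encoding.UTF8, "application/json")
};
using var response = await http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
using var reader = new StreamReader(await response.Content.ReadAsStreamAsync());

// The endpoint streams newline-delimited JSON; each line carries a "response" fragment and a "done" flag.
string? line;
while ((line = await reader.ReadLineAsync()) != null)
{
    if (string.IsNullOrWhiteSpace(line)) continue;
    using var chunk = JsonDocument.Parse(line);
    Console.Write(chunk.RootElement.GetProperty("response").GetString());
    if (chunk.RootElement.GetProperty("done").GetBoolean()) break;
}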
2. Chat with Context (Multi-Turn)
using Rapid.AI.Ollama.Framework;
// First user message
string reply1 = OllamaClient.Chat("http://localhost:11434/api/chat", "Who was Marie Curie?", "llama3.2:1b");
Console.WriteLine("AI: " + reply1);
// Follow-up message
string reply2 = OllamaClient.Chat("http://localhost:11434/api/chat", "What was her contribution to science?", "llama3.2:1b");
Console.WriteLine("AI: " + reply2);
Chat() maintains context automatically: the conversation history is kept internally, so the follow-up question above can refer to Marie Curie without repeating her name.
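Conceptually, Ollama's /api/chat endpoint receives the accumulated message history on every request. The sketch below illustrates that request shape against the raw API (the exact payload this framework builds may differ, and the assistant text shown is only a placeholder):

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Sketch of the request shape Ollama's /api/chat expects: the whole conversation
// is resent on each call, which is what "keeping history" amounts to.
var body = new
{
    model = "llama3.2:1b",
    stream = false,
    messages = new[]
    {
        new { role = "user", content = "Who was Marie Curie?" },
        new { role = "assistant", content = "Marie Curie was a physicist and chemist..." }, // placeholder earlier reply
        new { role = "user", content = "What was her contribution to science?" }
    }
};

using var http = new HttpClient();
var response = await http.PostAsync(
    "http://localhost:11434/api/chat",
    new StringContent(JsonSerializer.Serialize(body), Encoding.UTF8, "application/json"));

using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement.GetProperty("message").GetProperty("content").GetString());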
To Clear Chat History:
OllamaClient.ClearChatHistory();
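For example, clearing the history lets you start an unrelated conversation that is not influenced by earlier turns (using only the methods shown above):

using Rapid.AI.Ollama.Framework;

// Finish one topic, then reset the internal history before starting a new one.
string curieReply = OllamaClient.Chat("http://localhost:11434/api/chat", "Who was Marie Curie?", "llama3.2:1b");
Console.WriteLine("AI: " + curieReply);

OllamaClient.ClearChatHistory();

// This question now starts a fresh conversation with no memory of the previous turns.
string physicsReply = OllamaClient.Chat("http://localhost:11434/api/chat", "Explain beta decay in one sentence.", "llama3.2:1b");
Console.WriteLine("AI: " + physicsReply);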
Notes
- Generate() uses the /api/generate endpoint and streams the output.
- Chat() uses the /api/chat endpoint and maintains internal chat history.
- The HTTP timeout is set to 5 minutes to accommodate long-running responses; a caller-side timeout sketch follows below.
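If you need a shorter limit than the library's 5-minute timeout, one option (not part of the library) is to impose your own bound around the blocking call using standard .NET primitives; a minimal sketch, assuming the synchronous methods shown above:

using System;
using System.Threading.Tasks;
using Rapid.AI.Ollama.Framework;

// Run the blocking call on a worker task and wait at most 60 seconds for it.
// Note: the underlying HTTP request still runs until the library's own timeout.
var work = Task.Run(() =>
    OllamaClient.Generate("http://localhost:11434/api/generate", "Summarize relativity.", "llama3.2:1b"));

if (await Task.WhenAny(work, Task.Delay(TimeSpan.FromSeconds(60))) == work)
    Console.WriteLine(await work);
else
    Console.WriteLine("Timed out waiting for the model.");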
Project Structure
OllamaClient
├── Generate(...)         // Stateless streaming prompt generation
├── Chat(...)             // Stateful chat with message history
└── ClearChatHistory()    // Clears the internal chat history
Example Model Names
- llama3
- llama3.2:1b
- mistral
- Any other model available through Ollama

Ensure the model is already pulled by running:
ollama run llama3.2:1b
License
MIT License: free to use, modify, and distribute.
Contributions
Feature requests and improvements are welcome. Please fork the repository and submit a pull request with your changes.
Product | Compatible and additional computed target framework versions
--- | ---
.NET | net8.0 is compatible. net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows, net9.0, net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, net9.0-windows, net10.0, net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, and net10.0-windows were computed.
Dependencies
- net8.0: No dependencies.
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Release Notes
- Readme.md added
- Chart support added