LLamaSharp.Unofficial 0.4.1

dotnet add package LLamaSharp.Unofficial --version 0.4.1
NuGet\Install-Package LLamaSharp.Unofficial -Version 0.4.1
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="LLamaSharp.Unofficial" Version="0.4.1" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add LLamaSharp.Unofficial --version 0.4.1
#r "nuget: LLamaSharp.Unofficial, 0.4.1"
#r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.
// Install LLamaSharp.Unofficial as a Cake Addin
#addin nuget:?package=LLamaSharp.Unofficial&version=0.4.1

// Install LLamaSharp.Unofficial as a Cake Tool
#tool nuget:?package=LLamaSharp.Unofficial&version=0.4.1

LLamaSharp - .NET Binding for llama.cpp


The C#/.NET binding of llama.cpp. It provides APIs to run inference with LLaMa models and to deploy them in native environments or on the Web. It works on both Windows and Linux and does NOT require compiling llama.cpp yourself. Its performance is close to that of llama.cpp.

  • LLaMa model inference
  • APIs for chat sessions
  • Model quantization
  • Embedding generation, tokenization and detokenization
  • ASP.NET Core integration

Installation

First, search for LLamaSharp in the NuGet package manager and install it.

PM> Install-Package LLamaSharp

Then, search and install one of the following backends:

LLamaSharp.Backend.Cpu
LLamaSharp.Backend.Cuda11
LLamaSharp.Backend.Cuda12
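
If you prefer the dotnet CLI over the Package Manager Console, the same packages can be added like this (shown here with the CPU backend; pick the backend that matches your hardware):

dotnet add package LLamaSharp
dotnet add package LLamaSharp.Backend.Cpu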

Here is the mapping between backends, LLamaSharp versions, the verified model resources provided by LLamaSharp, and the corresponding llama.cpp commits. If you're not sure which models work with a given version, please try our sample model.

LLamaSharp.Backend | LLamaSharp     | Verified Model Resources                   | llama.cpp commit id
-                  | v0.2.0         | This version is not recommended for use.  | -
-                  | v0.2.1         | WizardLM, Vicuna (filenames with "old")   | -
v0.2.2             | v0.2.2, v0.2.3 | WizardLM, Vicuna (filenames without "old")| 63d2046
v0.3.0             | v0.3.0         | LLamaSharpSamples v0.3.0, WizardLM        | 7e4ea5b

We publish backends for CPU, CUDA 11 and CUDA 12 because they are the most popular. If none of them matches your environment, please compile llama.cpp from source and put the compiled libllama library in your project's output path. When building from source, add -DBUILD_SHARED_LIBS=ON so that the shared library is generated.
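
For reference, a typical source build of llama.cpp with shared libraries enabled looks roughly like this (the exact commit to check out depends on the LLamaSharp version you target; see the table above):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake .. -DBUILD_SHARED_LIBS=ON
cmake --build . --config Release

The resulting shared library (libllama.so on Linux, llama.dll on Windows) is the file to copy into your project's output path.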

FAQ

  1. GPU out of memory: Please try setting n_gpu_layers to a smaller number (see the sketch after this list).
  2. Unsupported model: llama.cpp is under rapid development and often has breaking changes. Please check the release date of the model and find a suitable version of LLamaSharp to install, or use the models we provide on Hugging Face.
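
For the first issue, a minimal sketch of lowering the GPU layer count when constructing the model; note that the exact LLamaParams fields available depend on your LLamaSharp version, and n_gpu_layers is assumed to exist in this release:

var parameters = new LLamaParams(
    model: "<Your path>",  // path to the ggml model file
    n_ctx: 512,            // context size
    n_gpu_layers: 20       // lower this number if the GPU runs out of memory
);
var model = new LLamaModel(parameters);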

Simple Benchmark

Currently this is only a simple benchmark, intended to indicate that the performance of LLamaSharp is close to that of llama.cpp. The experiments were run on a machine with an Intel i7-12700 and an RTX 3060 Ti, using a 7B model. Note that the benchmark uses LLamaModel instead of LLamaModelV1.

Windows
  • llama.cpp: 2.98 words / second

  • LLamaSharp: 2.94 words / second

Usages

Model Inference and Chat Session

Currently, LLamaSharp provides two kinds of model, LLamaModelV1 and LLamaModel. Both of them work, but LLamaModel is recommended because it aligns more closely with the master branch of llama.cpp.

In addition, ChatSession makes it easier to build your own chat bot on top of a model. The code below is a simple example; for all examples, please refer to Examples.


using System;
using LLama;

// Load the model, then configure the chat session with a prompt file
// and an antiprompt that marks where the user speaks.
var model = new LLamaModel(new LLamaParams(model: "<Your path>", n_ctx: 512, repeat_penalty: 1.0f));
var session = new ChatSession<LLamaModel>(model).WithPromptFile("<Your prompt file path>")
                .WithAntiprompt(new string[] { "User:" });
Console.Write("\nUser:");
while (true)
{
    Console.ForegroundColor = ConsoleColor.Green;  // user input in green
    var question = Console.ReadLine();
    Console.ForegroundColor = ConsoleColor.White;  // model output in white
    var outputs = session.Chat(question); // It's simple to use the chat API.
    foreach (var output in outputs) // the reply is streamed piece by piece
    {
        Console.Write(output);
    }
}

Quantization

The following example shows how to quantize a model. With LLamaSharp you don't need to compile the C++ project and run scripts to quantize the model; instead, you can just do it from C#.

string srcFilename = "<Your source path>";
string dstFilename = "<Your destination path>";
string ftype = "q4_0"; // target quantization type
if (Quantizer.Quantize(srcFilename, dstFilename, ftype))
{
    Console.WriteLine("Quantization succeeded!");
}
else
{
    Console.WriteLine("Quantization failed!");
}
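
Besides q4_0, llama.cpp supports several other quantization types (for example q4_1, q5_0, q5_1 and q8_0 around this stage of development); which ftype strings the Quantizer accepts depends on the llama.cpp commit the installed backend was built from, so please verify against your backend.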

For more usages, please refer to Examples.
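
Embedding generation (mentioned in the feature list above) follows a similar pattern. A minimal sketch, assuming the LLamaEmbedder type and its GetEmbeddings method are available in this version:

using System;
using LLama;

// Build an embedder from the same parameter type used for inference.
var embedder = new LLamaEmbedder(new LLamaParams(model: "<Your path>"));
// Produce a float vector for a piece of text, e.g. for semantic search.
float[] embedding = embedder.GetEmbeddings("Hello, world!");
Console.WriteLine($"Embedding dimension: {embedding.Length}");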

Web API

We provide an integration with ASP.NET Core here. Since the API is not yet stable, please clone the repo and use it directly; we'll publish it on NuGet in the future.

Since we are short of hands, if you're familiar with ASP.NET Core we'd appreciate any help with upgrading the Web API integration.

Demo

[demo-console: console demo recording]

Roadmap

✅ LLaMa model inference

✅ Embeddings generation, tokenization and detokenization

✅ Chat session

✅ Quantization

✅ State saving and loading

✅ ASP.NET Core integration

🔳 MAUI Integration

🔳 Follow up llama.cpp and improve performance

Assets

Some extra model resources can be found below:

The weights included in the magnet link are exactly the weights from Facebook's LLaMa.

The prompts can be found below:

Contributing

Any contribution is welcome! You can do any of the following to help us make LLamaSharp better:

  • Add a link to a model that works with a specific version. (This is very important!)
  • Star and share LLamaSharp to let others know about it.
  • Add a feature or fix a bug.
  • Help to develop the Web API and UI integrations.
  • Simply open an issue about any problem you've met!

Contact us

Join our chat on Discord.

Join our QQ group.

License

This project is licensed under the terms of the MIT license.

Product | Compatible and additional computed target framework versions
.NET | net6.0 is compatible. net7.0 is compatible. net5.0, net5.0-windows, net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos and net8.0-windows were computed.
.NET Core | netcoreapp2.0, netcoreapp2.1, netcoreapp2.2, netcoreapp3.0 and netcoreapp3.1 were computed.
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed.
.NET Framework | net461, net462, net463, net47, net471, net472, net48 and net481 were computed.
MonoAndroid | monoandroid was computed.
MonoMac | monomac was computed.
MonoTouch | monotouch was computed.
Tizen | tizen40 and tizen60 were computed.
Xamarin.iOS | xamarinios was computed.
Xamarin.Mac | xamarinmac was computed.
Xamarin.TVOS | xamarintvos was computed.
Xamarin.WatchOS | xamarinwatchos was computed.

NuGet packages (1)

Showing the top NuGet package that depends on LLamaSharp.Unofficial:

Package | Downloads
LLama.WebAPI.Unofficial

Package Description

GitHub repositories

This package is not used by any popular GitHub repositories.

Version | Downloads | Last updated
0.4.1   | 323       | 5/29/2023
0.3.1   | 156       | 5/29/2023
0.3.0   | 139       | 5/29/2023

LLamaSharp 0.3.0 supports loading and saving session state, tokenization and detokenization. In addition, `LLamaModelV1` has been dropped since 0.3.0.
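
A minimal sketch of what saving and reloading session state could look like; the SaveState/LoadState method names are assumptions based on the release note above, so please check the Examples in the repo for the exact API:

using LLama;

var model = new LLamaModel(new LLamaParams(model: "<Your path>", n_ctx: 512));
// ... run some inference or chat turns here ...

// Persist the session state to disk (method name assumed).
model.SaveState("<Your state file path>");

// Later, restore the state instead of re-processing the prompt (method name assumed).
model.LoadState("<Your state file path>");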