WebSpark.HttpClientUtility 2.1.1

.NET CLI

dotnet add package WebSpark.HttpClientUtility --version 2.1.1

Package Manager

NuGet\Install-Package WebSpark.HttpClientUtility -Version 2.1.1

This command is intended for the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.

PackageReference

<PackageReference Include="WebSpark.HttpClientUtility" Version="2.1.1" />

For projects that support PackageReference, copy this XML node into the project file to reference the package.

Central Package Management (CPM)

For projects that support Central Package Management, copy the PackageVersion node into the solution's Directory.Packages.props file and reference the package from the project file:

Directory.Packages.props
<PackageVersion Include="WebSpark.HttpClientUtility" Version="2.1.1" />

Project file
<PackageReference Include="WebSpark.HttpClientUtility" />

Paket CLI

paket add WebSpark.HttpClientUtility --version 2.1.1

Script & Interactive

#r "nuget: WebSpark.HttpClientUtility, 2.1.1"

The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or the source of a script to reference the package.

#:package WebSpark.HttpClientUtility@2.1.1

The #:package directive can be used in C# file-based apps starting in .NET 10 preview 4. Copy this into a .cs file before any lines of code to reference the package.

Cake

#addin nuget:?package=WebSpark.HttpClientUtility&version=2.1.1    (install as a Cake Addin)
#tool nuget:?package=WebSpark.HttpClientUtility&version=2.1.1    (install as a Cake Tool)

WebSpark.HttpClientUtility


A production-ready HttpClient wrapper for .NET 8-10 that makes HTTP calls simple, resilient, and observable.

Stop writing boilerplate HTTP code. Get built-in resilience, caching, telemetry, and structured logging out of the box.

📦 v2.0 - Now in Two Focused Packages!

Starting with v2.0, the library is split into two packages:

  • WebSpark.HttpClientUtility - core HTTP features (163 KB). Use when you need HTTP client utilities (authentication, caching, resilience, telemetry).
  • WebSpark.HttpClientUtility.Crawler - web crawling extension (75 KB). Use when you need web crawling, robots.txt parsing, or sitemap generation.

Upgrading from v1.x? Most users need no code changes! See Migration Guide.

📚 Documentation

View Full Documentation →

The complete documentation site includes:

  • Getting started guide
  • Feature documentation
  • API reference
  • Code examples
  • Best practices

⚡ Quick Start

Core HTTP Features (Base Package)

Install

dotnet add package WebSpark.HttpClientUtility

5-Minute Example

// Program.cs - Register services (ONE LINE!)
builder.Services.AddHttpClientUtility();

// YourService.cs - Make requests
public class WeatherService
{
    private readonly IHttpRequestResultService _httpService;
    
    public WeatherService(IHttpRequestResultService httpService) => _httpService = httpService;

    public async Task<WeatherData?> GetWeatherAsync(string city)
    {
        var request = new HttpRequestResult<WeatherData>
        {
            // Escape the user-supplied value so it is safe inside a query string.
            RequestPath = $"https://api.weather.com/forecast?city={Uri.EscapeDataString(city)}",
            RequestMethod = HttpMethod.Get
        };

        var result = await _httpService.HttpSendRequestResultAsync(request);
        return result.IsSuccessStatusCode ? result.ResponseResults : null;
    }
}

That's it! You now have:

  • ✅ Automatic correlation IDs for tracing
  • ✅ Structured logging with request/response details
  • ✅ Request timing telemetry
  • ✅ Proper error handling and exception management
  • ✅ Support for .NET 8 LTS, .NET 9, and .NET 10 (Preview)

Web Crawling Features (Crawler Package)

Install Both Packages

dotnet add package WebSpark.HttpClientUtility
dotnet add package WebSpark.HttpClientUtility.Crawler

Register Services

// Program.cs
builder.Services.AddHttpClientUtility();
builder.Services.AddHttpClientCrawler();  // Adds crawler features

Use Crawler

public class SiteAnalyzer
{
    private readonly ISiteCrawler _crawler;
    
    public SiteAnalyzer(ISiteCrawler crawler) => _crawler = crawler;

    public async Task<CrawlResult> AnalyzeSiteAsync(string url)
    {
        var options = new CrawlerOptions
        {
            MaxDepth = 3,
            MaxPages = 100,
            RespectRobotsTxt = true
        };
        
        return await _crawler.CrawlAsync(url, options);
    }
}

🎯 Why Choose This Library?

  • Boilerplate code - one-line service registration replaces 50+ lines of manual setup
  • Transient failures - built-in Polly integration for retries and circuit breakers
  • Repeated API calls - automatic response caching with customizable duration
  • Observability - correlation IDs, structured logging, and OpenTelemetry support
  • Testing - all services are interface-based for easy mocking
  • Package size - modular design; install only what you need
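Because every service is interface-based, the HTTP layer can be faked in unit tests without touching the network. The sketch below is illustrative rather than the library's documented testing pattern: it assumes the IHttpRequestResultService API shown in the Quick Start, assumes the response members (IsSuccessStatusCode, ResponseResults) are settable from test code, and uses Moq and xUnit, which are test-side assumptions rather than dependencies of this package.

```csharp
using Moq;
using Xunit;

public class WeatherServiceTests
{
    [Fact]
    public async Task GetWeatherAsync_ReturnsData_OnSuccess()
    {
        // Canned response for the mock to return.
        // Assumption: these members are settable from test code; check the API reference.
        var canned = new HttpRequestResult<WeatherData>
        {
            RequestPath = "https://api.weather.com/forecast?city=Paris",
            RequestMethod = HttpMethod.Get,
            IsSuccessStatusCode = true,
            ResponseResults = new WeatherData()
        };

        // Fake the service interface; no real HTTP call is made.
        var mock = new Mock<IHttpRequestResultService>();
        mock.Setup(s => s.HttpSendRequestResultAsync(It.IsAny<HttpRequestResult<WeatherData>>()))
            .ReturnsAsync(canned);

        var service = new WeatherService(mock.Object);

        var weather = await service.GetWeatherAsync("Paris");

        Assert.NotNull(weather);
    }
}
```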

🚀 Features

Base Package Features

  • Simple API - Intuitive request/response model
  • Authentication - Bearer token, Basic auth, API key providers
  • Correlation IDs - Automatic tracking across distributed systems
  • Structured Logging - Rich context in all log messages
  • Telemetry - Request timing and performance metrics
  • Error Handling - Standardized exception processing
  • Type-Safe - Strongly-typed request and response models
  • Caching - In-memory response caching (optional)
  • Resilience - Polly retry and circuit breaker policies (optional)
  • Concurrent Requests - Parallel request processing
  • Fire-and-Forget - Background request execution
  • Streaming - Efficient handling of large responses
  • OpenTelemetry - Full observability integration (optional)
  • CURL Export - Generate CURL commands for debugging
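As one illustration of the concurrent-request bullet above, independent typed requests can be dispatched in parallel with plain Task.WhenAll over the same request model. This is a hedged sketch that uses only the IHttpRequestResultService API from the Quick Start; Product and the URL list are placeholders, and the library may also offer a dedicated batching helper, so check the API reference.

```csharp
using System.Linq;

public class CatalogService
{
    private readonly IHttpRequestResultService _httpService;

    public CatalogService(IHttpRequestResultService httpService) => _httpService = httpService;

    public async Task<List<Product>> GetProductsAsync(IEnumerable<string> urls)
    {
        // Build one typed request per URL and put them all in flight at once.
        var tasks = urls.Select(url => _httpService.HttpSendRequestResultAsync(
            new HttpRequestResult<Product>
            {
                RequestPath = url,
                RequestMethod = HttpMethod.Get
            }));

        var results = await Task.WhenAll(tasks);

        // Keep only the successful responses.
        return results
            .Where(r => r.IsSuccessStatusCode)
            .Select(r => r.ResponseResults!)
            .ToList();
    }
}
```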

Crawler Package Features

  • Site Crawling - Full website crawling with depth control
  • Robots.txt - Automatic compliance with robots.txt rules
  • Sitemap Generation - Create XML sitemaps from crawl results
  • HTML Parsing - Extract links and metadata with HtmlAgilityPack
  • SignalR Progress - Real-time crawl progress updates
  • CSV Export - Export crawl results to CSV files
  • Performance Tracking - Monitor crawl speed and efficiency

📚 Common Scenarios

Enable Caching

builder.Services.AddHttpClientUtility(options =>
{
    options.EnableCaching = true;
});

// In your service
var request = new HttpRequestResult<Product>
{
    RequestPath = "https://api.example.com/products/123",
    RequestMethod = HttpMethod.Get,
    CacheDurationMinutes = 10  // Cache for 10 minutes
};

Add Resilience (Retry + Circuit Breaker)

builder.Services.AddHttpClientUtility(options =>
{
    options.EnableResilience = true;
    options.ResilienceOptions.MaxRetryAttempts = 3;
    options.ResilienceOptions.RetryDelay = TimeSpan.FromSeconds(2);
});

All Features Enabled

builder.Services.AddHttpClientUtilityWithAllFeatures();

🔄 Upgrading from v1.x

If You DON'T Use Web Crawling

No code changes required! Simply upgrade:

dotnet add package WebSpark.HttpClientUtility --version 2.0.0

Your existing code continues to work exactly as before. All core HTTP features (authentication, caching, resilience, telemetry, etc.) are still in the base package with the same API.

If You DO Use Web Crawling

Three simple steps to migrate:

Step 1: Install the crawler package

dotnet add package WebSpark.HttpClientUtility.Crawler --version 2.0.0

Step 2: Add using directive

using WebSpark.HttpClientUtility.Crawler;

Step 3: Update service registration

// v1.x (old)
services.AddHttpClientUtility();

// v2.0 (new)
services.AddHttpClientUtility();
services.AddHttpClientCrawler();  // Add this line

That's it! Your crawler code (ISiteCrawler, SiteCrawler, SimpleSiteCrawler, etc.) works identically after these changes.

Need Help? See the detailed migration guide or open an issue.


🎓 Sample Projects

Explore working examples in the samples directory:

  • BasicUsage - Simple GET/POST requests
  • WithCaching - Response caching implementation
  • WithResilience - Retry and circuit breaker patterns
  • ConcurrentRequests - Parallel request processing
  • WebCrawler - Site crawling example

🆚 Comparison to Alternatives

Feature | WebSpark.HttpClientUtility | Raw HttpClient | RestSharp | Refit
Setup complexity | ⭐ One line | ⭐⭐⭐ Manual | ⭐⭐ Low | ⭐⭐ Low
Built-in caching | ✅ Yes | ❌ Manual | ❌ Manual | ⚠️ Plugin
Built-in resilience | ✅ Yes | ❌ Manual | ❌ Manual | ❌ Manual
Telemetry | ✅ Built-in | ⚠️ Manual | ⚠️ Manual | ⚠️ Manual
Type safety | ✅ Yes | ⚠️ Partial | ✅ Yes | ✅ Yes
Web crawling | ✅ Yes | ❌ No | ❌ No | ❌ No
.NET 8-10 support | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes

🤝 Contributing

Contributions are welcome! See our Contributing Guide for details.

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for your changes
  4. Ensure all tests pass
  5. Submit a pull request

📊 Project Stats

  • 252+ Unit Tests - 100% passing
  • Supports .NET 8 LTS, .NET 9, & .NET 10 (Preview)
  • MIT Licensed - Free for commercial use
  • Active Maintenance - Regular updates

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Questions or Issues? Open an issue or start a discussion!

Compatible and additional computed target framework versions:

.NET
  • Compatible: net8.0, net9.0, net10.0
  • Computed: the android, browser, ios, maccatalyst, macos, tvos, and windows variants of net8.0, net9.0, and net10.0
Learn more about Target Frameworks and .NET Standard.

NuGet packages (2)

Showing the top 2 NuGet packages that depend on WebSpark.HttpClientUtility:

Package Downloads
WebSpark.Bootswatch

WebSpark.Bootswatch provides Bootswatch themes for ASP.NET Core applications. It includes custom themes and styles that can be easily integrated with ASP.NET Core MVC or Razor Pages applications.

WebSpark.HttpClientUtility.Crawler

Web crawling capabilities for WebSpark.HttpClientUtility: SiteCrawler and SimpleSiteCrawler with robots.txt compliance, HTML link extraction (HtmlAgilityPack), sitemap generation (Markdig), CSV export (CsvHelper), and real-time SignalR progress updates. Supports .NET 8 LTS, .NET 9, and .NET 10 (Preview). Requires WebSpark.HttpClientUtility base package [2.1.0]. Install both packages and call AddHttpClientUtility() + AddHttpClientCrawler() in your DI registration.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version Downloads Last Updated
2.1.1 0 11/12/2025
2.0.0 176 11/5/2025
1.5.1 172 11/2/2025
1.5.0 173 11/2/2025
1.4.0 187 11/2/2025
1.3.2 126 11/1/2025
1.3.0 168 10/7/2025
1.2.0 127 9/26/2025
1.1.0 223 7/1/2025
1.0.10 121 5/24/2025
1.0.8 215 5/19/2025
1.0.5 309 5/4/2025
1.0.4 166 5/3/2025
1.0.3 110 5/3/2025
1.0.2 105 5/3/2025
0.1.0 74 5/3/2025

2.1.0 - Added .NET 10 (Preview) multi-targeting support. All projects now target net8.0, net9.0, and net10.0. Updated Microsoft.Extensions packages to 10.0.0. All 291 tests passing on all three frameworks (873 test runs, 0 failures). Zero breaking changes.
2.0.0 - MAJOR: Package split into base + crawler. Base package now 163 KB with 10 dependencies (down from 13). Zero breaking changes for core HTTP users. Web crawling moved to separate WebSpark.HttpClientUtility.Crawler package. CurlCommandSaver now uses JSON Lines format. All 474 tests passing.
1.5.1 - Quality improvements: Zero-warning baseline with TreatWarningsAsErrors enabled. All 520 tests passing.
1.5.0 - Documentation website launched at https://markhazleton.github.io/WebSpark.HttpClientUtility/
1.4.0 - Added .NET 8 LTS support alongside .NET 9. Simplified DI registration.