MemoizR 0.0.4-beta1

.NET CLI
dotnet add package MemoizR --version 0.0.4-beta1

Package Manager (run within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package)
NuGet\Install-Package MemoizR -Version 0.0.4-beta1

PackageReference (for projects that support PackageReference, copy this XML node into the project file to reference the package)
<PackageReference Include="MemoizR" Version="0.0.4-beta1" />

Paket CLI
paket add MemoizR --version 0.0.4-beta1

Script & Interactive (the #r directive can be used in F# Interactive and Polyglot Notebooks; copy this into the interactive tool or the source code of the script to reference the package)
#r "nuget: MemoizR, 0.0.4-beta1"

Cake
// Install MemoizR as a Cake Addin
#addin nuget:?package=MemoizR&version=0.0.4-beta1&prerelease

// Install MemoizR as a Cake Tool
#tool nuget:?package=MemoizR&version=0.0.4-beta1&prerelease

Simple concurrency model implementation in .NET

It brings a performant and safe way to synchronize state across multiple threads.

It aims to be simpler and more intuitive than the current async/await behaviour in C#, where not strictly following a single async path (e.g. async void, .Wait, simply not awaiting everything, or even .ConfigureAwait) leads to problems most of the time. This model also has the potential to be extended to a distributed setup, similar to the actor model.

It aims to make it safe and maintainable to work with state that is hard to synchronize concurrently, even in multi-threaded scenarios.
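
As a rough illustration, here is a minimal sketch using the MemoFactory API from the Execution model example further below (the names counter and doubled, the use of Task.Run, and the assumption of a .NET 7 console project with implicit usings are illustrative, not taken from the package documentation): state can be written and read from several threads without any user-side locking.

var f = new MemoFactory();
var counter = f.CreateSignal(0, "counter");
var doubled = f.CreateMemoizR(() => counter.Get() * 2, "doubled");

var tasks = new List<Task>();
for (var i = 1; i <= 4; i++)
{
    var value = i;
    tasks.Add(Task.Run(() => counter.Set(value))); // concurrent writers
    tasks.Add(Task.Run(() => doubled.Get()));      // concurrent readers
}
await Task.WhenAll(tasks);
// Every reader observes a consistent doubled value (some old or new state, never a mix).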

Even for simple use cases it can improve performance, as the sketch below illustrates, if:

  • there are more reads than writes: the memoization leads to performance gains.
  • there are more writes than reads: the lazy evaluation leads to performance gains.
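
A small sketch of both effects (again using the API from the example further below; Expensive is a hypothetical stand-in for a costly computation):

int Expensive(int x) => x * x; // hypothetical stand-in for a costly computation

var f = new MemoFactory();
var v = f.CreateSignal(1, "v");
var m = f.CreateMemoizR(() => Expensive(v.Get()), "m");

// More writes than reads: Set never evaluates the graph, so only the last value is ever computed.
v.Set(2);
v.Set(3);
v.Set(4);
m.Get(); // Expensive runs once, for v == 4

// More reads than writes: repeated Gets return the memoized value without recomputing.
m.Get(); // noop
m.Get(); // noop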

With this package it is possible to build a dependency graph that performs dynamic lazy memoization: it calculates only the values that are needed, and only when they are not already calculated (memoization).

The initial inspiration came from https://github.com/modderme123/reactively, yet not primarily from its reactivity but from its unique idea of dynamic lazy memoization.

MemoizR Vs. Manual Caching

You could build caching yourself of course, but using MemoizR has several advantages.

  • It's very simple to use.
  • MemoizR dependency tracking extends beyond class/component boundaries, so the benefits of clever caching and smart recalculation extend across modules.
  • MemoizR functions and methods automatically track their sources (see the sketch after this list). Many approaches to caching require the programmer to manually list sources. That is not just more effort to maintain; a static source list is apt to include sources that are not needed every time, which means cached functions are apt to rerun unnecessarily.
  • MemoizR includes some clever global optimization algorithms. A MemoizR function is run only if needed and only runs once. Furthermore, even deep and complicated networks of dependencies are analyzed efficiently in linear time. Without something like MemoizR, it's easy to end up with O(n log n) searches if every use of a MemoizR function needs to check every dependency, or every change needs to notify every dependent.
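
A minimal sketch of the contrast (the manual cache and its names are hypothetical, purely for comparison; only the second half uses the MemoizR API shown in the example below):

// Manual caching: every place that changes the input must also remember to invalidate.
int input = 1;
int? cache = null;
int ReadManual()
{
    cache ??= input * input; // stand-in for an expensive computation
    return cache.Value;
}
ReadManual(); // computes once
input = 2;
cache = null; // forget this anywhere and ReadManual() silently returns a stale value

// MemoizR: the dependency on v is tracked automatically while the memo runs,
// so v.Set(...) is enough to mark squared as potentially stale.
var f = new MemoFactory();
var v = f.CreateSignal(1, "v");
var squared = f.CreateMemoizR(() => v.Get() * v.Get(), "squared");
squared.Get(); // 1
v.Set(2);      // no manual invalidation needed
squared.Get(); // 4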

Execution model

The set of MemoizR elements and their source dependencies forms a directed (and usually acyclic) graph. Conceptually, MemoizR changes enter at the roots of the graph and propagate to the leaves.

But that doesn’t mean the MemoizR system needs to respond to a change at the root by immediately deeply traversing the tree and re-executing all of the related MemoizR elements. What if the user has made changes to two roots? We might unnecessarily execute elements twice, which is inefficient. What if the user doesn’t consume a MemoizR leaf element before a new change comes around? Why bother executing that element if it’s not used? If two changes are in flight through the execution system at the same time, might user code see an unexpected mixed state (this is called a ‘glitch’)?

These are the questions the MemoizR execution system needs to address.

Push systems emphasize pushing changes from the roots down to the leaves. Push algorithms are fast for the framework to manage but can push changes even through unused parts of the graph, which wastes time on unused user computations and may surprise the user. For efficiency, push systems typically expect the user to specify a 'batch' of changes to push at once.

Pull systems emphasize traversing the graph in reverse order, from user consumed elements up towards roots. Pull systems have a simple developer experience and don’t require explicit batching. But pull systems are apt to traverse the tree too often. Each leaf element needs to traverse all the way up the tree to detect changes, potentially resulting in many extra traversals.

MemoizR is a hybrid push-pull system. It pushes dirty notifications down the graph, and then executes MemoizR elements lazily on demand as they are pulled from the leaves. This costs the framework some bookkeeping and an extra traversal of its internal graph. But the developer wins by getting the simplicity of a pull system and most of the execution efficiency of a push system.

  /*
     Initialize Graph without evaluation
        v1
        | \ 
       m1  m2
         \ |
          m3
  */
var f = new MemoFactory();
var v1 = f.CreateSignal(1, "v1");
var m1 = f.CreateMemoizR(() => v1.Get(), "m1");
var m2 = f.CreateMemoizR(() => v1.Get() * 2, "m2");
var m3 = f.CreateMemoizR(() => m1.Get() + m2.Get(), "m3");

// Get Values
m3.Get(); // calc  m1 + 2 * m1 => ( 1 + 2 * 1 ) = 3

// Change
v1.Set(2); // Set does not trigger evaluation of the graph
m3.Get(); // calc  m1 + 2 * m1 => ( 2 + 2 * 2 ) = 6
m3.Get(); // noop => 6

v1.Set(3); // Set does not trigger evaluation of the graph
v1.Set(2); // Set does not trigger evaluation of the graph
m3.Get(); // noop => 6 ( because the last time the graph was evaluated v1 was already 2 )

It also works if the graph is not stable at runtime. MemoizR can handle dependencies that change between evaluations, for example (here an even/odd check turns the int signal from above into a boolean condition that selects which memo is read):

var m3 = f.CreateMemoizR(() => v1.Get() % 2 == 0 ? m1.Get() : m2.Get());
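
Only the branch that is actually read during an evaluation ends up tracked as a dependency, so the dependency set of m3 reshapes itself as the condition flips. A short continuation of the example (a sketch reusing v1, m1 and m2 from above; the concrete values follow from the even/odd condition assumed here):

m3.Get();  // v1 == 2 (even) => reads m1 => 2; m3 currently depends on v1 and m1
v1.Set(3); // now odd
m3.Get();  // the condition flips => reads m2 => 6; m3 now depends on v1 and m2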

Target frameworks

  • Compatible: net7.0
  • Computed: net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows

Dependencies

  • net7.0

    • No dependencies.
NuGet packages (2)

Showing the top 2 NuGet packages that depend on MemoizR:

  • MemoizR.Reactive
  • MemoizR.StructuredConcurrency

GitHub repositories

This package is not used by any popular GitHub repositories.

Version Downloads Last updated
0.1.5 70 11/17/2024
0.1.4 120 5/25/2024
0.1.3 115 5/7/2024
0.1.2 127 5/6/2024
0.1.1 84 5/1/2024
0.1.1-rc.11 58 4/29/2024
0.1.1-rc.10 59 4/23/2024
0.1.1-rc.9 55 4/18/2024
0.1.1-rc.8 64 4/13/2024
0.1.1-rc.7 58 4/11/2024
0.1.1-rc.6 59 4/10/2024
0.1.1-rc.5 58 4/4/2024
0.1.1-rc.4 55 4/1/2024
0.1.1-rc.3 69 3/24/2024
0.1.1-rc.2 63 2/17/2024
0.1.1-rc.1 163 1/4/2024
0.1.0-rc9 203 11/6/2023
0.1.0-rc8 164 10/26/2023
0.1.0-rc7 140 10/24/2023
0.1.0-rc6 181 10/21/2023
0.1.0-rc5 128 10/19/2023
0.1.0-rc4 150 10/14/2023
0.1.0-rc3 141 10/13/2023
0.1.0-rc2 137 10/11/2023
0.1.0-rc10 122 11/12/2023
0.1.0-rc1 136 10/10/2023
0.1.0-rc.11 71 1/4/2024
0.1.0-alpha2 143 10/6/2023
0.1.0-alpha1 125 10/6/2023
0.0.4-rc4 137 9/24/2023
0.0.4-rc3 121 9/23/2023
0.0.4-rc2 121 9/23/2023
0.0.4-rc1 120 9/22/2023
0.0.4-beta1 121 9/21/2023
0.0.4-alpha1 118 9/19/2023
0.0.3-beta-1 109 9/15/2023
0.0.2-rc4 96 8/30/2023
0.0.2-rc3 105 8/30/2023
0.0.2-rc2 108 8/30/2023
0.0.2-rc1 109 8/30/2023
0.0.2-beta2 111 8/30/2023
0.0.2-beta1 120 8/29/2023
0.0.1-beta1 107 8/28/2023
0.0.1-alpha2 114 8/28/2023
0.0.1-alpha1 108 8/27/2023