TAFHub 1.0.4

.NET CLI:
    dotnet add package TAFHub --version 1.0.4

Package Manager (run within the Package Manager Console in Visual Studio, which uses the NuGet module's version of Install-Package):
    NuGet\Install-Package TAFHub -Version 1.0.4

PackageReference (for projects that support PackageReference, copy this XML node into the project file):
    <PackageReference Include="TAFHub" Version="1.0.4" />

Paket CLI:
    paket add TAFHub --version 1.0.4

Script & Interactive (the #r directive can be used in F# Interactive and Polyglot Notebooks):
    #r "nuget: TAFHub, 1.0.4"

Cake:
    // Install TAFHub as a Cake Addin
    #addin nuget:?package=TAFHub&version=1.0.4

    // Install TAFHub as a Cake Tool
    #tool nuget:?package=TAFHub&version=1.0.4

TAFHub

TAFHub is a combinatorial test-data class targeting Microsoft DevOps tests and PowerShell.

Introduction

TAFHub Class

The TAFHub can be instantiated using a JSON formatted configuration file. This is useful when reusing the combinatorial data between tests. The use of an external configuration file also means that the sources and datagroups can be changed without changing the source of the test, or rebuilding it.

The TAFHub can also be instantiated without a configuration file. The configuration is then added via the API, using:

  1. AddSource() adds a source file to the TAFHub configuration
  2. AddDataGroup() adds a TAFDataGroup to the TAFHub configuration

Once the TAFHub configuration has been established, use Open() to validate and process the sources into the operational cache before use. Open() will remove any existing caches that are referenced by the configuration, and create new caches based on the source files in the configuration. Open() can only be called once in the life of a TAFHub.

To reuse a previously populated cache, without removing and recreating it from source files, use the ReOpen() method. You can add TAFDataGroups via JSON or API and use ReOpen() to use existing cached data referenced by the datagroup. This is useful when a suite of tests run one after another on the same very large data set. The cache with the large data can be loaded in the first test of the suite and reused with ReOpen() in subsequent tests.

You may also use:

  1. SetReport() gives the path of the result file to use
  2. Report() generates a PASS/FAIL csv result file
  3. PrintDataTable() useful when debugging
  4. SetOutput() writes returned application testing data such as order numbers to a cache
  5. GetOutput() reads back the saved application data in a datatable
  6. Close() releases resources but does not delete any cached data

NOTE: In a DevOps test, TAFHub resources are released automatically at the end of each test. In a PowerShell session, resources are only released at the end of the PowerShell session. To release during the session, use Close().

TAFDataGroup Class

The TAFDataGroup is based on the datagroup spec in the TAFHub. It has a simple API:

  1. TAFDataGroup() constructor creates the datagroup and activates it for reporting
  2. Next() returns the next row as a DataTable - headings and a single row
  3. IsFirst() is true when the current row is the first row
  4. IsLast() is true when the current row is the last row
  5. Export() writes the datagroup result to a CSV file as it is generated
  6. Close() removes resources used by the datagroup and deactivates its use in reporting

Combinatorial Plan Types

The datagroup is the combinatorial concatenation of the named sources. They are concatenated in the order you list them. The first source will be on the left and will run through its values the fastest.
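The concatenation order can be sketched as follows; the sources and values here are hypothetical and the helper is illustrative, not part of the TAFHub API:

```csharp
using System;
using System.Collections.Generic;

public static class ConcatDemo
{
    // Combinatorial concatenation of two sources: the first source supplies
    // the leftmost column and cycles through its values fastest.
    public static List<string[]> Combine(string[] first, string[] second)
    {
        var rows = new List<string[]>();
        foreach (var s in second)      // slower-varying source
            foreach (var f in first)   // fastest-varying source
                rows.Add(new[] { f, s });
        return rows;
    }

    public static void Main()
    {
        // Hypothetical sources: 2 colours x 3 sizes = 6 datagroup rows.
        foreach (var row in Combine(new[] { "red", "blue" },
                                    new[] { "S", "M", "L" }))
            Console.WriteLine(string.Join(",", row));
    }
}
```

Note that row 0 pairs the first value of each source, and row 1 already moves to the second value of the leftmost source.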

Plan Type      Description
EVERY n        Every nth row, starting at row zero, so the first row is always row zero (useful for debugging). EVERY 3 would result in rows 0, 3, 6, 9 etc.
PSEUDO n       In each block of n rows, one row is selected at random. PSEUDO 8 treats rows 0-7 as a block and selects one of them at random.
RANDOM         Rows are selected at random. There may be duplicates.
RANDOM UNIQUE  Rows are selected at random, but never repeated. There are no duplicates.
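The plan types can be sketched as index-selection strategies over the combinatorial population. This is an illustration of the descriptions above, not TAFHub's implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PlanDemo
{
    // EVERY n: rows 0, n, 2n, ... so the first row is always row zero.
    public static List<int> Every(int rowCount, int n) =>
        Enumerable.Range(0, rowCount).Where(i => i % n == 0).ToList();

    // PSEUDO n: one randomly chosen row from each block of n rows.
    public static List<int> Pseudo(int rowCount, int n, Random rng)
    {
        var picks = new List<int>();
        for (int start = 0; start < rowCount; start += n)
            picks.Add(start + rng.Next(Math.Min(n, rowCount - start)));
        return picks;
    }

    // RANDOM UNIQUE: a random permutation of all rows, so no duplicates.
    public static List<int> RandomUnique(int rowCount, Random rng) =>
        Enumerable.Range(0, rowCount).OrderBy(_ => rng.Next()).ToList();

    public static void Main()
    {
        Console.WriteLine(string.Join(",", Every(12, 3))); // 0,3,6,9
    }
}
```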

Caching Source Data

All data read from CSV and Excel data files is written to a cache so that it can be accessed in a way that supports the efficient construction of the combinatorial datagroup result rows.

If you name a source 'Products' then this will be written to the default cache. If you name it 'Sales.Products' then the products data will be written to the Sales cache.

When you use Open(), any named caches and stored source data are overwritten with the new data. At the end of a test no data is removed, so the next test can use ReOpen() to access data from any cache left over from a previous test.

Filters

A filter is a SELECT statement that is used in Sqlite to modify access to the data source. Look at this example:

	"Name": "Logins.xls1",
	"Type": "xls",
	"Path": "c:\\temp\\TAFHub\\test1.xls;test1",
	"Filter": "SELECT price, country FROM xls1 WHERE NOT product = 'boat'"

When you include the filter you can restrict the columns included in the datagroup and limit the rows to be included when the datagroup is being constructed. When a filter is included in a TAFHub config file, the number of rows counted for that source will be the number AFTER the filter is applied.

IMPORTANT: The name of the table in the FROM clause of the SELECT statement in the filter MUST BE the name of the data source, NOT INCLUDING the cache name. If these do not match then the filter cannot work.
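Conceptually, the filter above projects the price and country columns and drops the 'boat' rows. A standalone sketch of that effect over an in-memory System.Data table (the data and the ApplyFilter helper are hypothetical, not the TAFHub API):

```csharp
using System;
using System.Data;

public static class FilterDemo
{
    // Mimic: SELECT price, country FROM xls1 WHERE NOT product = 'boat'
    public static DataTable ApplyFilter(DataTable source)
    {
        var result = new DataTable();
        result.Columns.Add("price");    // column projection
        result.Columns.Add("country");
        foreach (DataRow row in source.Rows)
            if ((string)row["product"] != "boat")   // row restriction
                result.Rows.Add(row["price"], row["country"]);
        return result;
    }

    public static void Main()
    {
        var t = new DataTable();
        t.Columns.Add("product"); t.Columns.Add("price"); t.Columns.Add("country");
        t.Rows.Add("car", "100", "AU");
        t.Rows.Add("boat", "500", "NZ");
        // The row count that matters is the count AFTER the filter.
        Console.WriteLine(ApplyFilter(t).Rows.Count);
    }
}
```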

You can also add and remove filters in the test by using the filter API:

AddFilter("Logins.Accounts", "SELECT ...");
ClearFilter("Logins.Accounts");

When you add or clear a filter on a source, the source row count is automatically resampled. It can be useful to apply a filter during the test. Consider this use case.

You are using two datagroups to implement an outer Account selection loop and an inner Product selection loop. The idea is to select an account and then order a number of random products.

Remember, the datagroup is NOT constructed as a JOIN. If you read the State of the Account, then apply one or more filters on that State value to the sources used to construct the Products datagroup, the inner Products datagroup will reflect only those products available in the same State as the Account.

You may also include as a filtered source a previously Export()ed datagroup. You can ReOpen() a cache that includes the data from SetOutput(), and read it via a filter as a datagroup.

Sqlite

A cache is a Sqlite database stored in TEMP. A data source is a table in a database. The default cache is 'TAFHub.db'. A named cache "Accounts" will be held in 'TAFHub-Accounts.db'. You can install and use the 'sqlite3' command line tool to examine the cache database contents.
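The naming convention can be sketched like this; the CacheFile helper is illustrative only, not part of the TAFHub API:

```csharp
using System;
using System.IO;

public static class CacheNaming
{
    // "Products"       -> default cache  TAFHub.db
    // "Sales.Products" -> named cache    TAFHub-Sales.db
    public static string CacheFile(string sourceName)
    {
        int dot = sourceName.IndexOf('.');
        string db = dot < 0 ? "TAFHub.db"
                            : $"TAFHub-{sourceName.Substring(0, dot)}.db";
        return Path.Combine(Path.GetTempPath(), db);   // caches live in TEMP
    }

    public static void Main()
    {
        Console.WriteLine(CacheFile("Products"));
        Console.WriteLine(CacheFile("Sales.Products"));
    }
}
```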

Saving Temporary Data Between Tests

If one test places orders, and the next test needs to know the order numbers, then the first test can write the intermediate order numbers to a source in a cache.

SetOutput("Output.Orders", "OrderNo", ordernum);

The SetOutput method writes to the Orders source in the Output cache. SetOutput supports a <Key><Value> paired structure for saving data. In a later test the Output cache can be reopened and the "Output.Orders" source can be used like any other data source in a TAFDataGroup.

Using SetOutput() without ReOpen()'ing the named cache will delete the cache and create a new one. To continue to use the cache across tests and accumulate temporary data, always ReOpen() the named cache you are using for that purpose.

It is more common to access the temporarily stored data via the GetOutput() method.

DataTable dt = GetOutput("Output.Orders", "OrderNo");

The resulting datatable will hold all the Values that match the Key "OrderNo". The same source can be used to store any kind of temporary data with a mix of keys. The Value will always be a string.
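The <Key><Value> semantics described above can be sketched with a plain in-memory store. OutputStore here is illustrative only, not the TAFHub API or its Sqlite-backed storage, and the key/value data is hypothetical:

```csharp
using System;
using System.Collections.Generic;

public class OutputStore
{
    // One source holds many <Key, Value> pairs; Values are always strings.
    private readonly List<KeyValuePair<string, string>> _pairs =
        new List<KeyValuePair<string, string>>();

    public void SetOutput(string key, string value) =>
        _pairs.Add(new KeyValuePair<string, string>(key, value));

    // Return every Value stored under the given Key, in insertion order.
    public List<string> GetOutput(string key)
    {
        var values = new List<string>();
        foreach (var p in _pairs)
            if (p.Key == key) values.Add(p.Value);
        return values;
    }

    public static void Main()
    {
        var store = new OutputStore();
        store.SetOutput("OrderNo", "A-1001");   // hypothetical order numbers
        store.SetOutput("OrderNo", "A-1002");
        store.SetOutput("CustomerId", "C-77");  // a different key, same source
        Console.WriteLine(string.Join(",", store.GetOutput("OrderNo")));
    }
}
```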

DevOps - C# Unit Tests

Microsoft DevOps uses C# tests built in Visual Studio. When the tests are built in DevOps they are available to be deployed to DevOps managed test servers.

When you include the TAFHub NuGet package in your test project, any other dependent packages from NuGet will also be downloaded.

Required Nuget Packages

  1. ExcelDataReader 3.6.0 - Reads the source files
  2. Newtonsoft.Json 13.0.1 - Manages the JSON configuration file
  3. Microsoft.Data.SQLite 5.0.13 - Manages the SQLite source data cache
  4. System.Collections 4.3.0 - Manages lists and dictionaries

C# Example Code

The following code has the TAFHub configuration read from a JSON file.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Data;
using TestPro.TAF;

namespace TAFHubTest
{
    [TestClass]
    public partial class UnitTest
    {
        [TestMethod]
        public void TestMethod1()
        {
            DataTable dt;

            try
            {
                TAFHub t = new TAFHub(@"c:\Temp\TAFHub\TAFhub.json");
                t.Open();

                TAFDataGroup dg1 = new TAFDataGroup(t, "dg1");
                dt = dg1.Next();
                while (dt != null)
                {
                    if (dg1.IsFirst())
                    {
                        Console.WriteLine(">First row");
                        t.PrintDataTable(dt, true); // with column header
                    }
                    else
                        t.PrintDataTable(dt, false);

                    if (dg1.IsLast())
                    {
                        Console.WriteLine(">Last row");
                        break; // the while loop
                    }

                    dt = dg1.Next();
                }

                t.Close();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                Assert.Fail();
            }
        }
    }
}

TAFHub JSON Config File

NOTE: In the Path that specifies an XLS or XLSX spreadsheet file you must include the nominated sheet name, separated by a semicolon ";".

NOTE: Data Source names support an optional cache name. If you use the name "Accounts" then the accounts data is stored in the default cache. If you use the name "Logins.Accounts" then the accounts data will be stored in the "Logins" cache.

If the first test in a suite loads a lot of data that is intended to be reused in other tests in the suite, then place the large data set in a named cache. Later tests then use ReOpen() and do not need to load this data again from its original source.

If the name of the cache used is "Logins", then the later tests should use Open() to load any data they need for themselves and ReOpen("Logins") to add back into the TAFHub configuration the data sources previously stored in the "Logins" cache.

When specifying the data sources for a DataGroup, you can use data sources from any cache.

{
	"Sources": [
	{
		"Name": "csv1",
		"Type": "csv",
		"Path": "c:\\temp\\TAFHub\\test1.csv"
	},
	{
		"Name": "csv2",
		"Type": "csv",
		"Path": "c:\\temp\\TAFHub\\test2.csv"
	},
	{
		"Name": "Logins.xls1",
		"Type": "xls",
		"Path": "c:\\temp\\TAFHub\\test1.xls;test1",
		"Filter": "SELECT price, country FROM xls1 WHERE NOT product = 'boat'"
	},
	{
		"Name": "xlsx1",
		"Type": "xlsx",
		"Path": "c:\\temp\\TAFHub\\test2.xlsx;test2"
	}
	],
	
	"Datagroups": [
	{
		"Name": "dg1",
		"Groupsources": [
			{	
				"Name": "csv1"
			},
			{
				"Name": "csv2"
			}
		],
		"Plan": "EVERY 1",
	},
	{
		"Name": "dg2",
		"Groupsources": [
			{	
				"Name": "csv1"
			},
			{
				"Name": "csv2"
			},
			{
				"Name": "Logins.xls1",
			}
		],
		"Plan": "EVERY 2",
		"Limit": "7"
	},
	{
		"Name": "pseudo",
		"Groupsources": [
			{	
				"Name": "csv1"
			},
			{
				"Name": "csv2"
			},
			{
				"Name": "Logins.xls1",
			}
		],
		"Plan": "PSEUDO 4",
		"Limit": "99"
	},
	{
		"Name": "random",
		"Groupsources": [
			{	
				"Name": "csv1"
			},
			{
				"Name": "csv2"
			},
			{
				"Name": "Logins.xls1"
			},
			{
				"Name": "xlsx1"
			}
		],
		"Plan": "RANDOM",
		"Limit": "10%"
	},
	{
		"Name": "unique",
		"Groupsources": [
			{	
				"Name": "csv1"
			},
			{
				"Name": "csv2"
			},
			{
				"Name": "Logins.xls1"
			},
			{
				"Name": "xlsx1"
			}
		],
		"Plan": "RANDOM UNIQUE",
		"Limit": "50%"
	}
	]
}

TAFHub Without a JSON Config File

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Collections.Generic;
using System.Data;
using TestPro.TAF;

namespace AJAX.Test
{
	[TestClass]
    public class UnitTest
    {
        [TestMethod]
        public void SampleTest()
        {
            DataTable dt;

            try
            {
                TAFHub t = new TAFHub();

                t.AddSource
                ( "csv1", "csv",
                  "c:\\temp\\TAFHub\\test1.csv"
                );

                t.AddSource
                ("Logins.xls1", "xls",
                  "c:\\temp\\TAFHub\\test1.xls;test1"
                );

                t.AddDataGroup
                (
                    "donkey",
                    new List<string> { "csv1", "Logins.xls1" },
                    "EVERY 1",
                    "20"
                );

                // Open() will create two caches, default and "Logins".
                t.Open();

                TAFDataGroup dg = new TAFDataGroup(t, "donkey");

                dt = dg.Next();
                while (dt != null)
                {
                    if (dg.IsFirst())
                    {
                        Console.WriteLine(">First row");
                        t.PrintDataTable(dt, true); // with column header
                    }
                    else
                        t.PrintDataTable(dt, false);

                    if (dg.IsLast())
                    {
                        Console.WriteLine(">Last row");
                        break; // the while loop
                    }

                    dt = dg.Next();
                }

                t.Close();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                Assert.Fail();
            }
        }
    }
}

The contents of a data table can be accessed like this:

for (int i = 0; i < dt.Rows.Count; i++) {
	Console.WriteLine(dt.Rows[i]["ColName"]);
}

Or

foreach (DataRow row in dt.Rows) {
	Console.WriteLine($"{row["Price"]}, {row["Amount"]}");
}

PowerShell

Introduction

Custom .NET code such as the TAFHub is made available to PowerShell scripts via .NET reflection. Use the following to load the dll into the PowerShell session.

[System.Reflection.Assembly]::LoadFrom($path)

Once all the required dlls are loaded, the TAFHub classes and all the supporting classes are available. Instantiate the classes and use the methods as required, as in the following example.

PowerShell 5.1 Example

$t = [TestPro.TAF.TAFHub]::new("c:\Temp\TAFHub\TAFhub.json")
$t.Open()
$dg = [TestPro.TAF.TAFDataGroup]::new($t, "dg1")
$dt = $dg.Next()
while ($null -ne $dt) {
    if ($dg.IsFirst())
    {
        Write-Output ">First row"
        $t.PrintDataTable($dt, $true)
    }
    else
    {
        $t.PrintDataTable($dt, $false)
    }

    if ($dg.IsLast())
    {
        Write-Output ">Last row"
    }

    $dt = $dg.Next()
}
$t.Close()

Installing Libraries into the PowerShell Session

PowerShell requires the necessary external libraries to be loaded into the PowerShell runtime session. Follow the steps below to load them from the TAFHub install zip file.

  1. Unzip the "TAFHub.1.x.x.zip" file into a "TAFHub" folder.
  2. Run the included "scripts\Setup.ps1" script. This will register all the dlls with PowerShell.
  3. Consult the included "scripts\Sample1.ps1" etc.

You can now create and use TAFHub with PowerShell.

TAFHubAnalyser

The TAFHubAnalyser is a tool that reads, validates and reports on a TAFHub JSON configuration file. It reads the JSON file and effectively Open()s all the sources to validate access.

NOTE: Filters WILL BE applied if included in the configuration.

WARNING: Depending on the location of your TEMP folder, this WILL remove and recreate the underlying Sqlite caches, just as would happen when Open() is used in a test.

The tool reports on each source in the configuration.

  1. Source name
  2. Number of rows
  3. Whether a filter is in place

The tool will then construct each specified datagroup in the config, but does not Next() through any data. Each datagroup then generates a line of analytics.

  1. Datagroup name
  2. Datagroup plan
  3. Maximum number of rows able to be generated
  4. Any limit you have imposed
  5. The estimated number of rows to be returned by Next()
  6. The coverage of the raw datagroup population, given the plan and limit
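As a worked sketch of the arithmetic behind these analytics (an assumption about plausible formulas, not the tool's documented behaviour): the raw population is the product of the source row counts, an EVERY n plan returns roughly ceil(max / n) rows before any limit, and coverage is the share of the raw population actually returned.

```csharp
using System;

public static class AnalyserMath
{
    // Hypothetical estimate for an "EVERY n" plan over sources with the
    // given row counts and an optional row limit (0 means no limit).
    public static (long MaxRows, long Returned, double Coverage)
        EstimateEvery(int[] sourceRows, int n, long limit)
    {
        long max = 1;
        foreach (int rows in sourceRows) max *= rows;   // raw population
        long returned = (max + n - 1) / n;              // ceil(max / n)
        if (limit > 0 && limit < returned) returned = limit;
        return (max, returned, (double)returned / max); // share returned
    }

    public static void Main()
    {
        // Hypothetical: sources of 4 and 5 rows, "EVERY 2", "Limit": "7".
        var (max, ret, cov) = EstimateEvery(new[] { 4, 5 }, 2, 7);
        Console.WriteLine($"max={max} returned={ret} coverage={cov}");
    }
}
```

With these numbers, 4 x 5 gives a population of 20 rows, EVERY 2 would return 10, and the limit of 7 caps it at 7 rows, i.e. 35% of the population.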
Compatible Target Frameworks

.NET Framework net472 is compatible; net48 and net481 were computed as compatible.


Version History

Version 1.0.4, 1,080 downloads, last updated 2/11/2022.

Initial release of the package.