ConfIT 2.0.0
Install the package with any of the following:
- .NET CLI: dotnet add package ConfIT --version 2.0.0
- Package Manager: NuGet\Install-Package ConfIT -Version 2.0.0
- PackageReference: <PackageReference Include="ConfIT" Version="2.0.0" />
- Paket CLI: paket add ConfIT --version 2.0.0
- Script & Interactive: #r "nuget: ConfIT, 2.0.0"
- Cake Addin: #addin nuget:?package=ConfIT&version=2.0.0
- Cake Tool: #tool nuget:?package=ConfIT&version=2.0.0
ConfIT
Table Of Contents
- What Is ConfIT
- Why ConfIT
- How To Consume ConfIT
- DSL
- Dynamic Test & Data Linking
- Run
What Is ConfIT
ConfIT is a library that streamlines the API integration test process. It comes with the bells and whistles needed to automate the repetitive tasks in a test suite. Instead of writing test code every time, you manage tests declaratively through an easy, rich DSL in JSON files.
The library processes all the JSON files in which tests are defined, converts them into actual test cases, and invokes them against the configured server.
One important point: ConfIT is not a test framework like xUnit or NUnit; it is a library that integrates with these frameworks. Please refer to the Code Structure section for more details.
Why ConfIT
Let's look at the test layers where ConfIT can be used.
Type Of Tests
Let's discuss the different test stages shown in the diagram below. The flow has three stages: unit, component, and integration tests.
- Unit tests: as we all know, developers write these to test the smallest unit of functionality.
- Component tests: these test one whole component without direct dependency on anything outside the component boundary.
  - Developers usually write these tests.
  - If we have one microservice, e.g. an Order service, the whole Order service is treated as a single component.
  - At this stage we mock all IO so that the component's internal integration can be tested in isolation.
  - We mock HTTP dependencies, and either disable non-required integrations or use in-memory versions, so that component tests do not fail because of a dependency's behaviour.
- Integration tests: these are the tests we run in an actual QA or CI environment once the code has been deployed.
  - QA engineers usually write these tests.
  - Because these tests run in a real environment, they invoke the actual integrations.
Component and integration tests have several similarities in how they are defined and executed, which is why we decided to abstract this commonality into a library.
Below are a few of the initial reasons this library was created:
- Reduce the gap between component and integration tests and increase reuse between them.
- Enhance test readability.
  - Test cases are one of the ways to dive into functionality, so team members frequently refer to them to understand the business flow and requirements.
  - It is a big process overhead if everyone has to read code every time to understand the business flow of the tests, especially when taking on-boarding into account.
- Remove repetitive tasks and increase test process efficiency. Once the base is set up, adding or updating a test should take a few minutes rather than hours or days.
- Reduce tech debt and the need for continuous refactoring of test cases and code.
- Remove the need to pick different frameworks or languages for unit, component, and integration tests.
How To Consume ConfIT
Code Structure
Let's first see how the code is organized at a high level.
- Library code: contains the ConfIT library itself.
  - The library is not tightly coupled to any specific test framework.
- Example code: contains the following parts.
  - Code
    - User API: a demo service with basic user creation and user retrieval operations.
    - JustAnotherService: a demo dependency service with email verification operations.
      - It is included to demonstrate a real-world service dependency scenario and how to handle it in our tests. User Api uses JustAnotherService to fulfil user creation.
  - Test
    - Tests use xUnit as the test framework and the ConfIT library to automate the process.
    - User Component Tests: contains the User Api component tests and set-up code.
    - User Integration Tests: contains the User Api integration tests and set-up code.
DSL
Tests are defined declaratively in JSON files. The DSL supports the attributes below.
Request:
- This is the test request we want to run against the test server. It has the following parameters.
  - Method
    - HTTP method (GET, POST, PUT, PATCH, DELETE)
  - Path
    - URL path
  - Headers
    - Provide these details if you want to add headers to the request.
    - "headers": {"name":"value"}
  - Body
    - The request payload when the HTTP method is POST, PUT, or PATCH.
  - BodyFromFile
    - Quite often we need to read the request payload from a file.
    - Provide the file name and the library will read the file content and use it as the request payload.
  - Override
    - When a test reads content from a file, the same common file may be shared by several tests.
    - Each test may want to update just a few fields of that file content per its requirement, while the rest of the data stays the same as in the file.
    - If this is the case, we can set those details in Override. Override helps reduce the number of almost-identical duplicate JSON files, because we can override specific parts of the file content through this tag.
    - The library first reads the content of the given file and then merges the override details on top of it. Think of it as a merge of two JSON objects (a worked example follows this list).
      - If Override contains existing fields, they are updated with the new values.
      - If Override contains new fields, they are added.
    - "override": { "email": "testv2@test.com", "age": 30 }
    - "override": { "child1": { "child2": { "extra": "extra1" } }, "child3": { "child4": { "extra": "extra2" } } }
Response:
- This is the expected API response for the executed request.
- Body, BodyFromFile, Override, and Headers behave the same as described above in the Request section.
Matchers:
- Matchers are another very powerful feature of the library and add a lot of dynamic behaviour to our tests.
- We define matchers in the response section.
- Because API responses are dynamic, sometimes we are not sure about the exact values returned from the server. In these scenarios we have to match through a regex or exclude a few fields from assertion. The following matchers can be defined.
  - Ignore
    - An array of field names to exclude from response matching.
  - Pattern
    - If we are not sure about the exact value, but we know the type, range, etc., we can set a Regex for that particular field.
- Both Ignore and Pattern support fields at any nesting level. For example:
  - Case 1:
    - "matcher": {"ignore": ["email"]}
    - The email field is searched in all properties of an object, an array, or a multi-level object.
    - If the property name matches, it is ignored.
  - Case 2:
    - Use this when the matcher should apply to a particular hierarchy rather than at the root node.
    - Separate parent and child with __.
    - "matcher": {"ignore": ["child3__child4__pincode"]}
    - The pincode field is looked up only at the JSON path child3.child4 and ignored only at that level.
    - If the property name matches, it is ignored.
- More examples:
  - "matcher": {"ignore": ["email"],"pattern": {"id": "^(0|[1-9][0-9]?|100)$"} }
  - "matcher": {"ignore": ["child3__child4__pincode","child5"]}
- See the Refer section for more implemented examples; a complete test sketch combining these attributes follows below.
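To tie the request, response, and matcher attributes together, here is a hedged sketch of a single test entry. The test name, field values, top-level layout, and key casing are all assumptions; the authoritative shape is in users.json and user.json in the example project referenced in the Refer section.

```json
{
  "ShouldReturnUserById": {
    "tags": ["user"],
    "request": {
      "method": "GET",
      "path": "/users/51",
      "headers": { "Accept": "application/json" }
    },
    "response": {
      "body": { "id": 51, "email": "testv2@test.com", "age": 30 },
      "matcher": {
        "ignore": ["createdAt"],
        "pattern": { "id": "^(0|[1-9][0-9]?|100)$" }
      }
    }
  }
}
```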
Mocks:
- We have to define mocks in component tests to replicate the behaviour of dependent services. User Api depends on JustAnotherService, which means we have to define mocks for these interactions.
  - Refer to CreateUserCommandHandler, where we invoke operations on JustAnotherService.
  - For the demo, to showcase how mocks can be defined for different HTTP calls, we invoke the same operation in three different ways: get by path, get by query param, and post.
  - Refer to the mock key in the ShouldCreateAUser test in user.json.
  - We use WireMock.Net internally to create a runtime mock server for whatever interactions are defined in the mock section (a hedged sketch follows below).
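The actual mock schema is defined by the library and demonstrated in the ShouldCreateAUser test in user.json; the fragment below is only a guess at what one mocked interaction could look like. Every key in it is hypothetical, and it is meant purely to convey the idea of declaring the dependency's request and canned response alongside the test.

```json
{
  "mock": [
    {
      "request": { "method": "GET", "path": "/email/verify" },
      "response": { "status": 200, "body": { "verified": true } }
    }
  ]
}
```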
Refer
- Please refer to the files below. All DSL and set-up example implementations are explained in these files.
  - Component Tests
    - User API
    - TestSuiteFixture does all the basic initial set-up, which is later passed to UserComponentTests.
  - Integration Tests
    - User API
    - Array UseCases
    - Multi Level UseCases
    - TestSuiteFixture does all the basic initial set-up, which is later passed to UserIntegrationTests.
Dynamic Test & Data Linking
The DSL supports several rich features, but there are a few scenarios where we need more customization than the DSL provides. Let's take an example.
Use Case:
- First Test
  - Create a user; the response returns the newly created user id.
  - This id is dynamic and could be any integer, depending on the state of the server, database, etc.
  - Suppose we get back the user {id : 51}.
- Second Test
  - Retrieve the newly created user from the first test.
  - We need to know the exact user id received from the server, 51 in this case.
Solution:
- Auto-save all responses from the server to a local folder.
- During the Second Test run, retrieve the user id from the saved response of the First Test and update the Second Test URL accordingly.
- How
  - To handle such cases, we can write a light implementation of this logic ourselves.
  - The library exposes two interfaces: ITestProcessorFactory and ITestProcessor.
- Refer
  - The TestProcessor folder for code examples: ShouldReturnUserForGivenId_V1 and ShouldReturnUserForGivenId_V2 in users.json (a conceptual sketch of the linking idea follows below).
Run
- Component Tests
  - Go to example/User.ComponentTests and run dotnet test
- Integration Tests
  - Go to example/User.Api and run dotnet run
  - Go to example/JustAnotherService and run dotnet run
  - Go to example/User.IntegrationTests and run dotnet test
- Filter Tests
  - By default, the commands above run all tests. There are scenarios where we need to run only a few tests, by name or by tag.
  - The library supports adding tags to tests and then running only specific tests.
  - Add Tags
    - Add a tags key to the test. It is an array, so a test can belong to multiple tags.
    - "tags": ["errors", "user"]
  - Using Test Name Or Tags
    - To filter tests, we can use the TestFilter class to set the appropriate options.
    - Pass comma-separated strings directly to the TestFilter.CreateForTags or TestFilter.CreateForTests methods (see the sketch after this list).
    - ENV Variables
      - We can also set an environment variable with comma-separated values and pass its key to the TestFilter.CreateForTagsFromEnvVariable or TestFilter.CreateForTestsFromEnvVariable methods.
      - You can set these variables however you want, in code or via export. For example:
        - Tags
          - Environment.SetEnvironmentVariable("RUN_POOLS","errors,anypool"); TestFilter.CreateForTagsFromEnvVariable("RUN_POOLS");
        - Names
          - Environment.SetEnvironmentVariable("RUN_TESTS","ShouldCreateAUser,ShouldCreateAUser_v1,ShouldCreateAUser_v2"); TestFilter.CreateForTestsFromEnvVariable("RUN_TESTS");
    - Refer to TestSuiteFixture and UserIntegrationTests for example code.
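How the resulting filter is handed to the test suite is shown in TestSuiteFixture; the lines below only sketch the two direct calls mentioned above, using tag and test names taken from the examples in this README.

```csharp
// Run only tests tagged "errors" or "user" (comma-separated, as described above).
var tagFilter = TestFilter.CreateForTags("errors,user");

// Run only the named tests.
var nameFilter = TestFilter.CreateForTests("ShouldCreateAUser,ShouldCreateAUser_v1");
```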
| Product | Compatible target frameworks | Additional computed target frameworks |
|---|---|---|
| .NET | net5.0, net6.0, net7.0, net8.0 | net5.0-windows, net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos, net6.0-windows, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos, net7.0-windows, net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos, net8.0-windows, net9.0, net9.0-android, net9.0-browser, net9.0-ios, net9.0-maccatalyst, net9.0-macos, net9.0-tvos, net9.0-windows |
Dependencies
- net5.0
  - FluentAssertions (>= 6.12.1)
  - JsonDiffPatch.Net (>= 2.3.0)
  - Microsoft.AspNetCore.TestHost (>= 5.0.17)
  - Newtonsoft.Json (>= 13.0.3)
  - WireMock.Net (>= 1.6.3)
- net6.0
  - FluentAssertions (>= 6.12.1)
  - JsonDiffPatch.Net (>= 2.3.0)
  - Microsoft.AspNetCore.TestHost (>= 6.0.33)
  - Newtonsoft.Json (>= 13.0.3)
  - WireMock.Net (>= 1.6.3)
- net7.0
  - FluentAssertions (>= 6.12.1)
  - JsonDiffPatch.Net (>= 2.3.0)
  - Microsoft.AspNetCore.TestHost (>= 7.0.20)
  - Newtonsoft.Json (>= 13.0.3)
  - WireMock.Net (>= 1.6.3)
- net8.0
  - FluentAssertions (>= 6.12.1)
  - JsonDiffPatch.Net (>= 2.3.0)
  - Microsoft.AspNetCore.TestHost (>= 8.0.8)
  - Newtonsoft.Json (>= 13.0.3)
  - WireMock.Net (>= 1.6.3)