Git Mailing List Archive mirror
 help / color / mirror / Atom feed
* [PATCH v4] unit tests: Add a project plan document
       [not found] <20230517-unit-tests-v2-v2-0-8c1b50f75811@google.com>
@ 2023-06-30 22:51 ` Josh Steadmon
  2023-07-01  0:42   ` Junio C Hamano
                     ` (7 more replies)
  0 siblings, 8 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-06-30 22:51 UTC (permalink / raw)
  To: git
  Cc: szeder.dev, phillip.wood123, chooglen, avarab, gitster, sandals,
	calvinwan

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
preliminary comparison of several different frameworks.

Coauthored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
Unit tests additionally provide stability to the codebase and can
simplify debugging through isolation. Turning parts of Git into
libraries[1] gives us the ability to run unit tests on the libraries and
to write unit tests in C. Writing unit tests in pure C, rather than with
our current shell/test-tool helper setup, simplifies test setup,
simplifies passing data around (no shell-isms required), and reduces
testing runtime by not spawning a separate process for every test
invocation.

This patch adds a project document describing our goals for adding unit
tests, as well as a discussion of features needed from prospective test
frameworks or harnesses. It also includes a WIP comparison of various
proposed frameworks. Later iterations of this series will probably
include a sample unit test and Makefile integration once we've settled
on a framework. A rendered preview of this doc can be found at [2].

In addition to reviewing the document itself, reviewers can help this
series progress by helping to fill in the framework comparison table.

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc

TODOs remaining:
- List rough priorities across comparison dimensions
- Group dimensions into sensible categories
- Discuss pre-existing harnesses for the current test suite
- Discuss harness vs. framework features, particularly for parallelism
- Figure out how to evaluate frameworks on additional OSes such as *BSD
  and NonStop
- Add more discussion about desired features (particularly mocking)
- Add dimension for test timing
- Evaluate remaining missing comparison table entries

Changes in v4:
- Add link anchors for the framework comparison dimensions
- Explain "Partial" results for each dimension
- Use consistent dimension names in the section headers and comparison
  tables
- Add "Project KLOC", "Adoption", and "Inline tests" dimensions
- Fill in a few of the missing entries in the comparison table

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com

 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 196 +++++++++++++++++++++++++
 2 files changed, 197 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..e302a0e40f
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,196 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+
+== Choosing a framework & harness
+
+=== Desired features
+
+[[tap-support]]
+==== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+==== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[parallel-execution]]
+==== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. "Partial" means that test cases
+are run serially within a single executable, but multiple test executables can
+be run at once (with proper harness support).
+
+[[vendorable-or-ubiquitous]]
+==== Vendorable or ubiquitous
+
+If possible, we want to avoid forcing Git developers to install new tools just
+to run unit tests. So any prospective frameworks and harnesses must either be
+vendorable (meaning, we can copy their source directly into Git's repository),
+or so ubiquitous that it is reasonable to expect that most developers will have
+the tools installed already.
+
+[[maintainable-extensible]]
+==== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+==== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[lazy-test-planning]]
+==== Lazy test planning
+
+TAP supports the notion of _test plans_, which communicate which test cases are
+expected to run, or which tests actually ran. This allows test harnesses to
+detect if the TAP output has been truncated, or if some tests were skipped due
+to errors or bugs.
+
+The test framework should handle creating plans at runtime, rather than
+requiring test developers to manually create plans, which leads to both human-
+and merge-errors.
+
+[[runtime-skippable-tests]]
+==== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[scheduling-re-running]]
+==== Scheduling / re-running
+
+The test harness scheduling should be configurable so that e.g. developers can
+choose to run slow tests first, or to run only tests that failed in a previous
+run.
+
+"True" means that the framework supports both features, "Partial" means it
+supports only one (assuming proper harness support).
+
+[[mock-support]]
+==== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+==== Signal & error handling
+
+The test framework must fail gracefully when test cases are themselves buggy or
+when they are interrupted by signals during runtime.
+
+[[coverage-reports]]
+==== Coverage reports
+
+It may be convenient to generate coverage reports when running unit tests
+(although it may be possible to accomplish this regardless of test framework /
+harness support).
+
+[[project-kloc]]
+==== Project KLOC
+
+WIP: The size of the project, in thousands of lines of code. All else being
+equal, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+==== Adoption
+
+WIP: we prefer a more widely-used project. We'll need to figure out the best way
+to measure this.
+
+[[inline-tests]]
+==== Inline tests
+
+Can the tests live alongside production code in the same source files? This can
+be a useful reminder for developers to add new tests, and keep existing ones
+synced with new changes.
+
+=== Comparison
+
+[format="csv",options="header"]
+|=====
+Framework,"<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<parallel-execution,Parallel execution>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<lazy-test-planning,Lazy test planning>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<scheduling-re-running,Scheduling / re-running>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<coverage-reports,Coverage reports>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>","<<inline-tests,Inline tests>>"
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?,?,?,?
+https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,?,[lime-background]#True#,?,?,?,?,?
+https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,?,?,?,[red-background]#False#,?,?,?,?,?
+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,?,[red-background]#False#,?,?,?,?,?
+https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,?,?,?,[red-background]#False#,?,?,?,?,?
+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?,?,?,?
+https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?,?,?,?
+https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?,?,?,?
+https://nemequ.github.io/munit/[µnit],[red-background]#False#,-,-,-,-,-,-,-,-,-,-,-,-,-,-
+https://github.com/google/cmockery[cmockery],[red-background]#False#,-,-,-,-,-,-,-,-,-,-,-,-,-,-
+https://github.com/lpabon/cmockery2[cmockery2],[red-background]#False#,-,-,-,-,-,-,-,-,-,-,-,-,-,-
+https://github.com/ThrowTheSwitch/Unity[Unity],[red-background]#False#,-,-,-,-,-,-,-,-,-,-,-,-,-,-
+https://github.com/siu/minunit[minunit],[red-background]#False#,-,-,-,-,-,-,-,-,-,-,-,-,-,-
+https://cunit.sourceforge.net/[CUnit],[red-background]#False#,-,-,-,-,-,-,-,-,-,-,-,-,-,-
+|=====
+
+== Milestones
+
+* Settle on final framework
+* Add useful tests of library-like code
+* Integrate with Makefile
+* Integrate with CI
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.41.0.255.g8b1d071c50-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v4] unit tests: Add a project plan document
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
@ 2023-07-01  0:42   ` Junio C Hamano
  2023-07-01  1:03   ` Junio C Hamano
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-07-01  0:42 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: git, szeder.dev, phillip.wood123, chooglen, avarab, sandals,
	calvinwan

Josh Steadmon <steadmon@google.com> writes:

I'll normalize this one to match prevailing use.

> Coauthored-by: Calvin Wan <calvinwan@google.com>

$ git log --since=6.months --pretty=raw --no-merges |
  sed -n -e 's/^    \([^ :]*-by:\).*/\1/p' |
  sort | uniq -c | sort -n | sed -e '/^ *1 /d'
      5 Tested-by:
     15 Suggested-by:
     24 Co-authored-by:
     30 Reported-by:
     34 Reviewed-by:
     38 Helped-by:
     68 Acked-by:
   1786 Signed-off-by:

> Signed-off-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Josh Steadmon <steadmon@google.com>
> ---

Thanks.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v4] unit tests: Add a project plan document
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
  2023-07-01  0:42   ` Junio C Hamano
@ 2023-07-01  1:03   ` Junio C Hamano
  2023-08-07 23:07   ` [PATCH v5] " Josh Steadmon
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-07-01  1:03 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: git, szeder.dev, phillip.wood123, chooglen, avarab, sandals,
	calvinwan

Josh Steadmon <steadmon@google.com> writes:

> In our current testing environment, we spend a significant amount of
> effort crafting end-to-end tests for error conditions that could easily
> be captured by unit tests (or we simply forgo some hard-to-setup and
> rare error conditions). Describe what we hope to accomplish by
> implementing unit tests, and explain some open questions and milestones.
> Discuss desired features for test frameworks/harnesses, and provide a
> preliminary comparison of several different frameworks.
>
> Coauthored-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Josh Steadmon <steadmon@google.com>
> ---

> TODOs remaining:
> - List rough priorities across comparison dimensions
> - Group dimensions into sensible categories
> - Discuss pre-existing harnesses for the current test suite
> - Discuss harness vs. framework features, particularly for parallelism
> - Figure out how to evaluate frameworks on additional OSes such as *BSD
>   and NonStop
> - Add more discussion about desired features (particularly mocking)
> - Add dimension for test timing
> - Evaluate remaining missing comparison table entries

Listing these explicitly here is very much appreciated.  Thanks.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v5] unit tests: Add a project plan document
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
  2023-07-01  0:42   ` Junio C Hamano
  2023-07-01  1:03   ` Junio C Hamano
@ 2023-08-07 23:07   ` Josh Steadmon
  2023-08-14 13:29     ` Phillip Wood
  2023-08-16 23:50   ` [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Josh Steadmon
                     ` (4 subsequent siblings)
  7 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-08-07 23:07 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, chooglen, gitster

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
preliminary comparison of several different frameworks.

Coauthored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This patch adds a project document describing our goals for adding unit
tests, as well as a discussion of features needed from prospective test
frameworks or harnesses. It also includes a WIP comparison of various
proposed frameworks. Later iterations of this series will probably
include a sample unit test and Makefile integration once we've settled
on a framework. A rendered preview of this doc can be found at [2].

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-dev/Documentation/technical/unit-tests.adoc

Reviewers can help this series progress by discussing whether it's
acceptable to rely on `prove` as a test harness for unit tests. We
support this for the current shell tests suite, but it is not strictly
required.

TODOs remaining:
- Discuss pre-existing harnesses for the current test suite
- Figure out how to evaluate frameworks on additional OSes such as *BSD
  and NonStop

Changes in v5:
- Add comparison point "License".
- Discuss feature priorities
- Drop frameworks:
  - Incompatible licenses: libtap, cmocka
  - Missing source: MyTAP
  - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
- Drop comparison point "Coverage reports": this can generally be
  handled by tools such as `gcov` regardless of the framework used.
- Drop comparison point "Inline tests": there didn't seem to be
  strong interest from reviewers for this feature.
- Drop comparison point "Scheduling / re-running": this was not
  supported by any of the main contenders, and is generally better
  handled by the harness rather than framework.
- Drop comparison point "Lazy test planning": this was supported by
  all frameworks that provide TAP output.

Changes in v4:
- Add link anchors for the framework comparison dimensions
- Explain "Partial" results for each dimension
- Use consistent dimension names in the section headers and comparison
  tables
- Add "Project KLOC", "Adoption", and "Inline tests" dimensions
- Fill in a few of the missing entries in the comparison table

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com

 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 217 +++++++++++++++++++++++++
 2 files changed, 218 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..fad9ec9279
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,217 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+For now, we will evaluate projects solely on their framework features. Since we
+are relying on having TAP output (see below), we can assume that any framework
+can be made to work with a harness that we can choose later.
+
+
+== Choosing a framework
+
+=== Desired features & feature priority
+
+There are a variety of features we can use to rank the candidate frameworks, and
+those features have different priorities:
+
+* Critical features: we probably won't consider a framework without these
+** Can we legally / easily use the project?
+*** <<license,License>>
+*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
+*** <<maintainable-extensible,Maintainable / extensible>>
+*** <<major-platform-support,Major platform support>>
+** Does the project support our bare-minimum needs?
+*** <<tap-support,TAP support>>
+*** <<diagnostic-output,Diagnostic output>>
+*** <<runtime-skippable-tests,Runtime-skippable tests>>
+* Nice-to-have features:
+** <<parallel-execution,Parallel execution>>
+** <<mock-support,Mock support>>
+** <<signal-error-handling,Signal & error-handling>>
+* Tie-breaker stats
+** <<project-kloc,Project KLOC>>
+** <<adoption,Adoption>>
+
+[[license]]
+==== License
+
+We must be able to legally use the framework in connection with Git. As Git is
+licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
+projects.
+
+[[vendorable-or-ubiquitous]]
+==== Vendorable or ubiquitous
+
+We want to avoid forcing Git developers to install new tools just to run unit
+tests. Any prospective frameworks and harnesses must either be vendorable
+(meaning, we can copy their source directly into Git's repository), or so
+ubiquitous that it is reasonable to expect that most developers will have the
+tools installed already.
+
+[[maintainable-extensible]]
+==== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+==== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[tap-support]]
+==== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+==== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[runtime-skippable-tests]]
+==== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[parallel-execution]]
+==== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. We assume that executable-level
+parallelism can be handled by the test harness.
+
+[[mock-support]]
+==== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+==== Signal & error handling
+
+The test framework should fail gracefully when test cases are themselves buggy
+or when they are interrupted by signals during runtime.
+
+[[project-kloc]]
+==== Project KLOC
+
+The size of the project, in thousands of lines of code as measured by
+https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
+1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+==== Adoption
+
+As a tie-breaker, we prefer a more widely-used project. We use the number of
+GitHub / GitLab stars to estimate this.
+
+
+=== Comparison
+
+[format="csv",options="header",width="33%"]
+|=====
+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
+https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
+https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
+|=====
+
+==== Alternatives considered
+
+Several suggested frameworks have been eliminated from consideration:
+
+* Incompatible licenses:
+** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
+** https://cmocka.org/[cmocka] (Apache 2.0)
+* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
+* No TAP support:
+** https://nemequ.github.io/munit/[µnit]
+** https://github.com/google/cmockery[cmockery]
+** https://github.com/lpabon/cmockery2[cmockery2]
+** https://github.com/ThrowTheSwitch/Unity[Unity]
+** https://github.com/siu/minunit[minunit]
+** https://cunit.sourceforge.net/[CUnit]
+
+==== Suggested framework
+
+Considering the feature priorities and comparison listed above, a custom
+framework seems to be the best option.
+
+
+== Choosing a test harness
+
+During upstream discussion, it was occasionally noted that `prove` provides many
+convenient features. While we already support the use of `prove` as a test
+harness for the shell tests, it is not strictly required.
+
+IMPORTANT: It is an open question whether or not we wish to rely on `prove` as a
+strict dependency for running unit tests.
+
+
+== Milestones
+
+* Get upstream agreement on implementing a custom test framework
+* Determine if it's OK to rely on `prove` for running unit tests
+* Add useful tests of library-like code
+* Integrate with Makefile
+* Integrate with CI
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.41.0.640.ga95def55d0-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v5] unit tests: Add a project plan document
  2023-08-07 23:07   ` [PATCH v5] " Josh Steadmon
@ 2023-08-14 13:29     ` Phillip Wood
  2023-08-15 22:55       ` Josh Steadmon
  0 siblings, 1 reply; 67+ messages in thread
From: Phillip Wood @ 2023-08-14 13:29 UTC (permalink / raw)
  To: Josh Steadmon, git; +Cc: linusa, calvinwan, chooglen, gitster

Hi Josh

On 08/08/2023 00:07, Josh Steadmon wrote:
> 
> Reviewers can help this series progress by discussing whether it's
> acceptable to rely on `prove` as a test harness for unit tests. We
> support this for the current shell tests suite, but it is not strictly
> required.

If possible it would be good to be able to run individual test programs 
without a harness as we can for our integration tests. For running more 
than one test program in parallel I think it is fine to require a harness.

I don't have a strong preference for which harness we use so long as it 
provides a way to (a) run tests that previously failed tests first and 
(b) run slow tests first. I do have a strong preference for using the 
same harness for both the unit tests and the integration tests so 
developers don't have to learn two different tools. Unless there is a 
problem with prove it would probably make sense just to keep using that 
as the project test harness.

> TODOs remaining:
> - Discuss pre-existing harnesses for the current test suite
> - Figure out how to evaluate frameworks on additional OSes such as *BSD
>    and NonStop

We have .cirrus.yml in tree which I think gitgitgadget uses to run our 
test suite on freebsd so we could leverage that for the unit tests. As 
for NonStop I think we'd just have to rely on Randall running the tests 
as he does now for the integration tests.

> Changes in v5:
> - Add comparison point "License".
> - Discuss feature priorities
> - Drop frameworks:
>    - Incompatible licenses: libtap, cmocka
>    - Missing source: MyTAP
>    - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
> - Drop comparison point "Coverage reports": this can generally be
>    handled by tools such as `gcov` regardless of the framework used.
> - Drop comparison point "Inline tests": there didn't seem to be
>    strong interest from reviewers for this feature.
> - Drop comparison point "Scheduling / re-running": this was not
>    supported by any of the main contenders, and is generally better
>    handled by the harness rather than framework.
> - Drop comparison point "Lazy test planning": this was supported by
>    all frameworks that provide TAP output.

These changes all sound sensible to me

The trimmed down table is most welcome. The custom implementation 
supports partial parallel execution. It shouldn't be too difficult to 
add some signal handling to it depending on what we want it to do.

It sounds like we're getting to the point where we have pinned down our 
requirements and the available alternatives well enough to make a decision.

Best Wishes

Phillip


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v5] unit tests: Add a project plan document
  2023-08-14 13:29     ` Phillip Wood
@ 2023-08-15 22:55       ` Josh Steadmon
  2023-08-17  9:05         ` Phillip Wood
  0 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-08-15 22:55 UTC (permalink / raw)
  To: phillip.wood; +Cc: git, linusa, calvinwan, gitster

On 2023.08.14 14:29, Phillip Wood wrote:
> Hi Josh
> 
> On 08/08/2023 00:07, Josh Steadmon wrote:
> > 
> > Reviewers can help this series progress by discussing whether it's
> > acceptable to rely on `prove` as a test harness for unit tests. We
> > support this for the current shell tests suite, but it is not strictly
> > required.
> 
> If possible it would be good to be able to run individual test programs
> without a harness as we can for our integration tests. For running more than
> one test program in parallel I think it is fine to require a harness.

Sounds good. This is working in v6 which I hope to send to the list
soon.


> I don't have a strong preference for which harness we use so long as it
> provides a way to (a) run tests that previously failed tests first and (b)
> run slow tests first. I do have a strong preference for using the same
> harness for both the unit tests and the integration tests so developers
> don't have to learn two different tools. Unless there is a problem with
> prove it would probably make sense just to keep using that as the project
> test harness.

To be clear, it sounds like both of these can be done with `prove`
(using the various --state settings) without any further support from
our unit tests, right? I see that we do have a "failed" target for
re-running integration tests, but that relies on some test-lib.sh
features that currently have no equivalent in the unit test framework.


> > TODOs remaining:
> > - Discuss pre-existing harnesses for the current test suite
> > - Figure out how to evaluate frameworks on additional OSes such as *BSD
> >    and NonStop
> 
> We have .cirrus.yml in tree which I think gitgitgadget uses to run our test
> suite on freebsd so we could leverage that for the unit tests. As for
> NonStop I think we'd just have to rely on Randall running the tests as he
> does now for the integration tests.

Thanks for the pointer. I've updated .cirrus.yml (as well as the GitHub
CI configs) for v6.


> > Changes in v5:
> > - Add comparison point "License".
> > - Discuss feature priorities
> > - Drop frameworks:
> >    - Incompatible licenses: libtap, cmocka
> >    - Missing source: MyTAP
> >    - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
> > - Drop comparison point "Coverage reports": this can generally be
> >    handled by tools such as `gcov` regardless of the framework used.
> > - Drop comparison point "Inline tests": there didn't seem to be
> >    strong interest from reviewers for this feature.
> > - Drop comparison point "Scheduling / re-running": this was not
> >    supported by any of the main contenders, and is generally better
> >    handled by the harness rather than framework.
> > - Drop comparison point "Lazy test planning": this was supported by
> >    all frameworks that provide TAP output.
> 
> These changes all sound sensible to me
> 
> The trimmed down table is most welcome. The custom implementation supports
> partial parallel execution. It shouldn't be too difficult to add some signal
> handling to it depending on what we want it to do.
> 
> It sounds like we're getting to the point where we have pinned down our
> requirements and the available alternatives well enough to make a decision.

Yes, v6 will include your TAP implementation (I assume you are still OK
if I include your patch in this series?).

> Best Wishes
> 
> Phillip
> 

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
                     ` (2 preceding siblings ...)
  2023-08-07 23:07   ` [PATCH v5] " Josh Steadmon
@ 2023-08-16 23:50   ` Josh Steadmon
  2023-08-16 23:50     ` [PATCH v6 1/3] unit tests: Add a project plan document Josh Steadmon
                       ` (2 more replies)
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
                     ` (3 subsequent siblings)
  7 siblings, 3 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-16 23:50 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This series begins with a project document covering our goals for adding
unit tests and a discussion of alternative frameworks considered, as
well as the features used to evaluate them. A rendered preview of this
doc can be found at [2]. It also adds Phillip Wood's TAP implemenation
(with some slightly re-worked Makefile rules) and a sample strbuf unit
test. Finally, we modify the configs for GitHub and Cirrus CI to run the
unit tests. Sample runs showing successful CI runs can be found at [3],
[4], and [5].

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc
[3] https://github.com/steadmon/git/actions/runs/5884659246/job/15959781385#step:4:1803
[4] https://github.com/steadmon/git/actions/runs/5884659246/job/15959938401#step:5:186
[5] https://cirrus-ci.com/task/6126304366428160 (unrelated tests failed,
    but note that t-strbuf ran successfully)

In addition to reviewing the patches in this series, reviewers can help
this series progress by chiming in on these remaining TODOs:
- Figure out how to ensure tests run on additional OSes such as NonStop
- Figure out if we should collect unit tests statistics similar to the
  "counts" files for shell tests
- Decide if it's OK to wait on sharding unit tests across "sliced" CI
  instances
- Provide guidelines for writing new unit tests

Changes in v6:
- Officially recommend using Phillip Wood's TAP framework
- Add an example strbuf unit test using the TAP framework as well as
  Makefile integration
- Run unit tests in CI

Changes in v5:
- Add comparison point "License".
- Discuss feature priorities
- Drop frameworks:
  - Incompatible licenses: libtap, cmocka
  - Missing source: MyTAP
  - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
- Drop comparison point "Coverage reports": this can generally be
  handled by tools such as `gcov` regardless of the framework used.
- Drop comparison point "Inline tests": there didn't seem to be
  strong interest from reviewers for this feature.
- Drop comparison point "Scheduling / re-running": this was not
  supported by any of the main contenders, and is generally better
  handled by the harness rather than framework.
- Drop comparison point "Lazy test planning": this was supported by
  all frameworks that provide TAP output.

Changes in v4:
- Add link anchors for the framework comparison dimensions
- Explain "Partial" results for each dimension
- Use consistent dimension names in the section headers and comparison
  tables
- Add "Project KLOC", "Adoption", and "Inline tests" dimensions
- Fill in a few of the missing entries in the comparison table

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com


Josh Steadmon (2):
  unit tests: Add a project plan document
  ci: run unit tests in CI

Phillip Wood (1):
  unit tests: add TAP unit test framework

 .cirrus.yml                            |   2 +-
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 220 +++++++++++++++++
 Makefile                               |  24 +-
 ci/run-build-and-tests.sh              |   2 +
 ci/run-test-slice.sh                   |   5 +
 t/Makefile                             |  15 +-
 t/t0080-unit-test-output.sh            |  58 +++++
 t/unit-tests/.gitignore                |   2 +
 t/unit-tests/t-basic.c                 |  95 +++++++
 t/unit-tests/t-strbuf.c                |  75 ++++++
 t/unit-tests/test-lib.c                | 329 +++++++++++++++++++++++++
 t/unit-tests/test-lib.h                | 143 +++++++++++
 13 files changed, 966 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/technical/unit-tests.txt
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

Range-diff against v5:
1:  c7dca1a805 ! 1:  81c5148a12 unit tests: Add a project plan document
    @@ Commit message
         preliminary comparison of several different frameworks.
     
    -    Coauthored-by: Calvin Wan <calvinwan@google.com>
    +    Co-authored-by: Calvin Wan <calvinwan@google.com>
         Signed-off-by: Calvin Wan <calvinwan@google.com>
         Signed-off-by: Josh Steadmon <steadmon@google.com>
     
    @@ Documentation/technical/unit-tests.txt (new)
     +
     +== Choosing a framework
     +
    -+=== Desired features & feature priority
    ++We believe the best option is to implement a custom TAP framework for the Git
    ++project. We use a version of the framework originally proposed in
    ++https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
    ++
    ++
    ++== Choosing a test harness
    ++
    ++During upstream discussion, it was occasionally noted that `prove` provides many
    ++convenient features, such as scheduling slower tests first, or re-running
    ++previously failed tests.
    ++
    ++While we already support the use of `prove` as a test harness for the shell
    ++tests, it is not strictly required. The t/Makefile allows running shell tests
    ++directly (though with interleaved output if parallelism is enabled). Git
    ++developers who wish to use `prove` as a more advanced harness can do so by
    ++setting DEFAULT_TEST_TARGET=prove in their config.mak.
    ++
    ++We will follow a similar approach for unit tests: by default the test
    ++executables will be run directly from the t/Makefile, but `prove` can be
    ++configured with DEFAULT_UNIT_TEST_TARGET=prove.
    ++
    ++
    ++== Framework selection
     +
     +There are a variety of features we can use to rank the candidate frameworks, and
     +those features have different priorities:
    @@ Documentation/technical/unit-tests.txt (new)
     +** <<adoption,Adoption>>
     +
     +[[license]]
    -+==== License
    ++=== License
     +
     +We must be able to legally use the framework in connection with Git. As Git is
     +licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
     +projects.
     +
     +[[vendorable-or-ubiquitous]]
    -+==== Vendorable or ubiquitous
    ++=== Vendorable or ubiquitous
     +
     +We want to avoid forcing Git developers to install new tools just to run unit
     +tests. Any prospective frameworks and harnesses must either be vendorable
    @@ Documentation/technical/unit-tests.txt (new)
     +tools installed already.
     +
     +[[maintainable-extensible]]
    -+==== Maintainable / extensible
    ++=== Maintainable / extensible
     +
     +It is unlikely that any pre-existing project perfectly fits our needs, so any
     +project we select will need to be actively maintained and open to accepting
    @@ Documentation/technical/unit-tests.txt (new)
     +conditions holds.
     +
     +[[major-platform-support]]
    -+==== Major platform support
    ++=== Major platform support
     +
     +At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
     +
    @@ Documentation/technical/unit-tests.txt (new)
     +more platforms, but it is still usable in principle.
     +
     +[[tap-support]]
    -+==== TAP support
    ++=== TAP support
     +
     +The https://testanything.org/[Test Anything Protocol] is a text-based interface
     +that allows tests to communicate with a test harness. It is already used by
    @@ Documentation/technical/unit-tests.txt (new)
     +further.
     +
     +[[diagnostic-output]]
    -+==== Diagnostic output
    ++=== Diagnostic output
     +
     +When a test case fails, the framework must generate enough diagnostic output to
     +help developers find the appropriate test case in source code in order to debug
     +the failure.
     +
     +[[runtime-skippable-tests]]
    -+==== Runtime-skippable tests
    ++=== Runtime-skippable tests
     +
     +Test authors may wish to skip certain test cases based on runtime circumstances,
     +so the framework should support this.
     +
     +[[parallel-execution]]
    -+==== Parallel execution
    ++=== Parallel execution
     +
     +Ideally, we will build up a significant collection of unit test cases, most
     +likely split across multiple executables. It will be necessary to run these
    @@ Documentation/technical/unit-tests.txt (new)
     +parallelism can be handled by the test harness.
     +
     +[[mock-support]]
    -+==== Mock support
    ++=== Mock support
     +
     +Unit test authors may wish to test code that interacts with objects that may be
     +inconvenient to handle in a test (e.g. interacting with a network service).
    @@ Documentation/technical/unit-tests.txt (new)
     +for more convenient tests.
     +
     +[[signal-error-handling]]
    -+==== Signal & error handling
    ++=== Signal & error handling
     +
     +The test framework should fail gracefully when test cases are themselves buggy
     +or when they are interrupted by signals during runtime.
     +
     +[[project-kloc]]
    -+==== Project KLOC
    ++=== Project KLOC
     +
     +The size of the project, in thousands of lines of code as measured by
     +https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
     +1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
     +
     +[[adoption]]
    -+==== Adoption
    ++=== Adoption
     +
     +As a tie-breaker, we prefer a more widely-used project. We use the number of
     +GitHub / GitLab stars to estimate this.
    @@ Documentation/technical/unit-tests.txt (new)
     +https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
     +|=====
     +
    -+==== Alternatives considered
    ++=== Additional framework candidates
     +
     +Several suggested frameworks have been eliminated from consideration:
     +
    @@ Documentation/technical/unit-tests.txt (new)
     +** https://github.com/siu/minunit[minunit]
     +** https://cunit.sourceforge.net/[CUnit]
     +
    -+==== Suggested framework
    -+
    -+Considering the feature priorities and comparison listed above, a custom
    -+framework seems to be the best option.
    -+
    -+
    -+== Choosing a test harness
    -+
    -+During upstream discussion, it was occasionally noted that `prove` provides many
    -+convenient features. While we already support the use of `prove` as a test
    -+harness for the shell tests, it is not strictly required.
    -+
    -+IMPORTANT: It is an open question whether or not we wish to rely on `prove` as a
    -+strict dependency for running unit tests.
    -+
     +
     +== Milestones
     +
    -+* Get upstream agreement on implementing a custom test framework
    -+* Determine if it's OK to rely on `prove` for running unit tests
     +* Add useful tests of library-like code
    -+* Integrate with Makefile
    -+* Integrate with CI
     +* Integrate with
     +  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
     +  work]
-:  ---------- > 2:  ca284c575e unit tests: add TAP unit test framework
-:  ---------- > 3:  ea33518d00 ci: run unit tests in CI

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.41.0.694.ge786442a9b-goog


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v6 1/3] unit tests: Add a project plan document
  2023-08-16 23:50   ` [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Josh Steadmon
@ 2023-08-16 23:50     ` Josh Steadmon
  2023-08-16 23:50     ` [PATCH v6 2/3] unit tests: add TAP unit test framework Josh Steadmon
  2023-08-16 23:50     ` [PATCH v6 3/3] ci: run unit tests in CI Josh Steadmon
  2 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-16 23:50 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
preliminary comparison of several different frameworks.

Co-authored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 220 +++++++++++++++++++++++++
 2 files changed, 221 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..b7a89cc838
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,220 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+For now, we will evaluate projects solely on their framework features. Since we
+are relying on having TAP output (see below), we can assume that any framework
+can be made to work with a harness that we can choose later.
+
+
+== Choosing a framework
+
+We believe the best option is to implement a custom TAP framework for the Git
+project. We use a version of the framework originally proposed in
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
+
+
+== Choosing a test harness
+
+During upstream discussion, it was occasionally noted that `prove` provides many
+convenient features, such as scheduling slower tests first, or re-running
+previously failed tests.
+
+While we already support the use of `prove` as a test harness for the shell
+tests, it is not strictly required. The t/Makefile allows running shell tests
+directly (though with interleaved output if parallelism is enabled). Git
+developers who wish to use `prove` as a more advanced harness can do so by
+setting DEFAULT_TEST_TARGET=prove in their config.mak.
+
+We will follow a similar approach for unit tests: by default the test
+executables will be run directly from the t/Makefile, but `prove` can be
+configured with DEFAULT_UNIT_TEST_TARGET=prove.
+
+
+== Framework selection
+
+There are a variety of features we can use to rank the candidate frameworks, and
+those features have different priorities:
+
+* Critical features: we probably won't consider a framework without these
+** Can we legally / easily use the project?
+*** <<license,License>>
+*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
+*** <<maintainable-extensible,Maintainable / extensible>>
+*** <<major-platform-support,Major platform support>>
+** Does the project support our bare-minimum needs?
+*** <<tap-support,TAP support>>
+*** <<diagnostic-output,Diagnostic output>>
+*** <<runtime-skippable-tests,Runtime-skippable tests>>
+* Nice-to-have features:
+** <<parallel-execution,Parallel execution>>
+** <<mock-support,Mock support>>
+** <<signal-error-handling,Signal & error-handling>>
+* Tie-breaker stats
+** <<project-kloc,Project KLOC>>
+** <<adoption,Adoption>>
+
+[[license]]
+=== License
+
+We must be able to legally use the framework in connection with Git. As Git is
+licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
+projects.
+
+[[vendorable-or-ubiquitous]]
+=== Vendorable or ubiquitous
+
+We want to avoid forcing Git developers to install new tools just to run unit
+tests. Any prospective frameworks and harnesses must either be vendorable
+(meaning, we can copy their source directly into Git's repository), or so
+ubiquitous that it is reasonable to expect that most developers will have the
+tools installed already.
+
+[[maintainable-extensible]]
+=== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+=== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[tap-support]]
+=== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+=== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[runtime-skippable-tests]]
+=== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[parallel-execution]]
+=== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. We assume that executable-level
+parallelism can be handled by the test harness.
+
+[[mock-support]]
+=== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+=== Signal & error handling
+
+The test framework should fail gracefully when test cases are themselves buggy
+or when they are interrupted by signals during runtime.
+
+[[project-kloc]]
+=== Project KLOC
+
+The size of the project, in thousands of lines of code as measured by
+https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
+1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+=== Adoption
+
+As a tie-breaker, we prefer a more widely-used project. We use the number of
+GitHub / GitLab stars to estimate this.
+
+
+=== Comparison
+
+[format="csv",options="header",width="33%"]
+|=====
+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
+https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
+https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
+|=====
+
+=== Additional framework candidates
+
+Several suggested frameworks have been eliminated from consideration:
+
+* Incompatible licenses:
+** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
+** https://cmocka.org/[cmocka] (Apache 2.0)
+* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
+* No TAP support:
+** https://nemequ.github.io/munit/[µnit]
+** https://github.com/google/cmockery[cmockery]
+** https://github.com/lpabon/cmockery2[cmockery2]
+** https://github.com/ThrowTheSwitch/Unity[Unity]
+** https://github.com/siu/minunit[minunit]
+** https://cunit.sourceforge.net/[CUnit]
+
+
+== Milestones
+
+* Add useful tests of library-like code
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target
-- 
2.41.0.694.ge786442a9b-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 2/3] unit tests: add TAP unit test framework
  2023-08-16 23:50   ` [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Josh Steadmon
  2023-08-16 23:50     ` [PATCH v6 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-08-16 23:50     ` Josh Steadmon
  2023-08-17  0:12       ` Junio C Hamano
  2023-08-16 23:50     ` [PATCH v6 3/3] ci: run unit tests in CI Josh Steadmon
  2 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-08-16 23:50 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

From: Phillip Wood <phillip.wood@dunelm.org.uk>

This patch contains an implementation for writing unit tests with TAP
output. Each test is a function that contains one or more checks. The
test is run with the TEST() macro and if any of the checks fail then the
test will fail. A complete program that tests STRBUF_INIT would look
like

     #include "test-lib.h"
     #include "strbuf.h"

     static void t_static_init(void)
     {
             struct strbuf buf = STRBUF_INIT;

             check_uint(buf.len, ==, 0);
             check_uint(buf.alloc, ==, 0);
             if (check(buf.buf == strbuf_slopbuf))
		    return; /* avoid SIGSEV */
             check_char(buf.buf[0], ==, '\0');
     }

     int main(void)
     {
             TEST(t_static_init(), "static initialization works);

             return test_done();
     }

The output of this program would be

     ok 1 - static initialization works
     1..1

If any of the checks in a test fail then they print a diagnostic message
to aid debugging and the test will be reported as failing. For example a
failing integer check would look like

     # check "x >= 3" failed at my-test.c:102
     #    left: 2
     #   right: 3
     not ok 1 - x is greater than or equal to three

There are a number of check functions implemented so far. check() checks
a boolean condition, check_int(), check_uint() and check_char() take two
values to compare and a comparison operator. check_str() will check if
two strings are equal. Custom checks are simple to implement as shown in
the comments above test_assert() in test-lib.h.

Tests can be skipped with test_skip() which can be supplied with a
reason for skipping which it will print. Tests can print diagnostic
messages with test_msg().  Checks that are known to fail can be wrapped
in TEST_TODO().

There are a couple of example test programs included in this
patch. t-basic.c implements some self-tests and demonstrates the
diagnostic output for failing test. The output of this program is
checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
unit tests for strbuf.c

The unit tests can be built with "make unit-tests" (this works but the
Makefile changes need some further work). Once they have been built they
can be run manually (e.g t/unit-tests/t-strbuf) or with prove.

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Josh Steadmon <steadmon@google.com>

diff --git a/Makefile b/Makefile
index e440728c24..4016da6e39 100644

--- a/Makefile
+++ b/Makefile
@@ -682,6 +682,8 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests

 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1331,6 +1333,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
 THIRD_PARTY_SOURCES += sha1collisiondetection/%
 THIRD_PARTY_SOURCES += sha1dc/%

+UNIT_TEST_PROGRAMS += t-basic
+UNIT_TEST_PROGRAMS += t-strbuf
+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_DIR)/%$X,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
+
 # xdiff and reftable libs may in turn depend on what is in libgit.a
 GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
 EXTLIBS =
@@ -2672,6 +2680,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(UNIT_TEST_OBJS)

 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3167,7 +3176,7 @@ endif

 test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))

-all:: $(TEST_PROGRAMS) $(test_bindir_programs)
+all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)

 bin-wrappers/%: wrap-for-bin.sh
 	$(call mkdir_p_parent_template)
@@ -3592,7 +3601,7 @@ endif

 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
-		$(MOFILES)
+		$(UNIT_TEST_PROGS) $(MOFILES)
 	$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
 		SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
 	test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3653,7 +3662,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3831,3 +3840,12 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@

 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_PROGS): $(UNIT_TEST_DIR)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS
+	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGS)
+unit-tests: $(UNIT_TEST_PROGS)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..92864cdf28 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -41,6 +41,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))

 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +66,13 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)

+$(UNIT_TESTS):
+	@echo "*** $@ ***"; $@
+
+.PHONY: unit-tests
+unit-tests:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'

@@ -149,4 +157,4 @@ perf:
 	$(MAKE) -C perf/ all

 .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
-	check-chainlint clean-chainlint test-chainlint
+	check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
new file mode 100755
index 0000000000..c60e402260
--- /dev/null
+++ b/t/t0080-unit-test-output.sh
@@ -0,0 +1,58 @@
+#!/bin/sh
+
+test_description='Test the output of the unit test framework'
+
+. ./test-lib.sh
+
+test_expect_success 'TAP output from unit tests' '
+	cat >expect <<-EOF &&
+	ok 1 - passing test
+	ok 2 - passing test and assertion return 0
+	# check "1 == 2" failed at t/unit-tests/t-basic.c:68
+	#    left: 1
+	#   right: 2
+	not ok 3 - failing test
+	ok 4 - failing test and assertion return -1
+	not ok 5 - passing TEST_TODO() # TODO
+	ok 6 - passing TEST_TODO() returns 0
+	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:17
+	not ok 7 - failing TEST_TODO()
+	ok 8 - failing TEST_TODO() returns -1
+	# check "0" failed at t/unit-tests/t-basic.c:22
+	# skipping test - missing prerequisite
+	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:24
+	ok 9 - test_skip() # SKIP
+	ok 10 - skipped test returns 0
+	# skipping test - missing prerequisite
+	ok 11 - test_skip() inside TEST_TODO() # SKIP
+	ok 12 - test_skip() inside TEST_TODO() returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:40
+	not ok 13 - TEST_TODO() after failing check
+	ok 14 - TEST_TODO() after failing check returns -1
+	# check "0" failed at t/unit-tests/t-basic.c:48
+	not ok 15 - failing check after TEST_TODO()
+	ok 16 - failing check after TEST_TODO() returns -1
+	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:53
+	#    left: "\011hello\\\\"
+	#   right: "there\"\012"
+	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:54
+	#    left: "NULL"
+	#   right: NULL
+	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:55
+	#    left: ${SQ}a${SQ}
+	#   right: ${SQ}\012${SQ}
+	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:56
+	#    left: ${SQ}\\\\${SQ}
+	#   right: ${SQ}\\${SQ}${SQ}
+	not ok 17 - messages from failing string and char comparison
+	# BUG: test has no checks at t/unit-tests/t-basic.c:83
+	not ok 18 - test with no checks
+	ok 19 - test with no checks returns -1
+	1..19
+	EOF
+
+	! "$GIT_BUILD_DIR"/t/unit-tests/t-basic >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
new file mode 100644
index 0000000000..e292d58348
--- /dev/null
+++ b/t/unit-tests/.gitignore
@@ -0,0 +1,2 @@
+/t-basic
+/t-strbuf
diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
new file mode 100644
index 0000000000..ab0b7682c4
--- /dev/null
+++ b/t/unit-tests/t-basic.c
@@ -0,0 +1,87 @@
+#include "test-lib.h"
+
+/* Used to store the return value of check_int(). */
+static int check_res;
+
+/* Used to store the return value of TEST(). */
+static int test_res;
+
+static void t_res(int expect)
+{
+	check_int(check_res, ==, expect);
+	check_int(test_res, ==, expect);
+}
+
+static void t_todo(int x)
+{
+	check_res = TEST_TODO(check(x));
+}
+
+static void t_skip(void)
+{
+	check(0);
+	test_skip("missing prerequisite");
+	check(1);
+}
+
+static int do_skip(void)
+{
+	test_skip("missing prerequisite");
+	return 0;
+}
+
+static void t_skip_todo(void)
+{
+	check_res = TEST_TODO(do_skip());
+}
+
+static void t_todo_after_fail(void)
+{
+	check(0);
+	TEST_TODO(check(0));
+}
+
+static void t_fail_after_todo(void)
+{
+	check(1);
+	TEST_TODO(check(0));
+	check(0);
+}
+
+static void t_messages(void)
+{
+	check_str("\thello\\", "there\"\n");
+	check_str("NULL", NULL);
+	check_char('a', ==, '\n');
+	check_char('\\', ==, '\'');
+}
+
+static void t_empty(void)
+{
+	; /* empty */
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
+	TEST(t_res(0), "passing test and assertion return 0");
+	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
+	TEST(t_res(-1), "failing test and assertion return -1");
+	test_res = TEST(t_todo(0), "passing TEST_TODO()");
+	TEST(t_res(0), "passing TEST_TODO() returns 0");
+	test_res = TEST(t_todo(1), "failing TEST_TODO()");
+	TEST(t_res(-1), "failing TEST_TODO() returns -1");
+	test_res = TEST(t_skip(), "test_skip()");
+	TEST(check_int(test_res, ==, 0), "skipped test returns 0");
+	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
+	TEST(t_res(0), "test_skip() inside TEST_TODO() returns 0");
+	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
+	TEST(check_int(test_res, ==, -1), "TEST_TODO() after failing check returns -1");
+	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
+	TEST(check_int(test_res, ==, -1), "failing check after TEST_TODO() returns -1");
+	TEST(t_messages(), "messages from failing string and char comparison");
+	test_res = TEST(t_empty(), "test with no checks");
+	TEST(check_int(test_res, ==, -1), "test with no checks returns -1");
+
+	return test_done();
+}
diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
new file mode 100644
index 0000000000..561611e242
--- /dev/null
+++ b/t/unit-tests/t-strbuf.c
@@ -0,0 +1,75 @@
+#include "test-lib.h"
+#include "strbuf.h"
+
+/* wrapper that supplies tests with an initialized strbuf */
+static void setup(void (*f)(struct strbuf*, void*), void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	check(buf.buf == strbuf_slopbuf);
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_static_init(void)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	if (check(buf.buf == strbuf_slopbuf))
+		return; /* avoid de-referencing buf.buf */
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_dynamic_init(void)
+{
+	struct strbuf buf;
+
+	strbuf_init(&buf, 1024);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, >=, 1024);
+	check_char(buf.buf[0], ==, '\0');
+	strbuf_release(&buf);
+}
+
+static void t_addch(struct strbuf *buf, void *data)
+{
+	const char *p_ch = data;
+	const char ch = *p_ch;
+
+	strbuf_addch(buf, ch);
+	if (check_uint(buf->len, ==, 1) ||
+	    check_uint(buf->alloc, >, 1))
+		return; /* avoid de-referencing buf->buf */
+	check_char(buf->buf[0], ==, ch);
+	check_char(buf->buf[1], ==, '\0');
+}
+
+static void t_addstr(struct strbuf *buf, void *data)
+{
+	const char *text = data;
+	size_t len = strlen(text);
+
+	strbuf_addstr(buf, text);
+	if (check_uint(buf->len, ==, len) ||
+	    check_uint(buf->alloc, >, len) ||
+	    check_char(buf->buf[len], ==, '\0'))
+	    return;
+	check_str(buf->buf, text);
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	if (TEST(t_static_init(), "static initialization works"))
+		test_skip_all("STRBUF_INIT is broken");
+	TEST(t_dynamic_init(), "dynamic initialization works");
+	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
+	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
+	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
+
+	return test_done();
+}
diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
new file mode 100644
index 0000000000..70030d587f
--- /dev/null
+++ b/t/unit-tests/test-lib.c
@@ -0,0 +1,329 @@
+#include "test-lib.h"
+
+enum result {
+	RESULT_NONE,
+	RESULT_FAILURE,
+	RESULT_SKIP,
+	RESULT_SUCCESS,
+	RESULT_TODO
+};
+
+static struct {
+	enum result result;
+	int count;
+	unsigned failed :1;
+	unsigned lazy_plan :1;
+	unsigned running :1;
+	unsigned skip_all :1;
+	unsigned todo :1;
+} ctx = {
+	.lazy_plan = 1,
+	.result = RESULT_NONE,
+};
+
+static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
+{
+	fflush(stderr);
+	if (prefix)
+		fprintf(stdout, "%s", prefix);
+	vprintf(format, ap); /* TODO: handle newlines */
+	putc('\n', stdout);
+	fflush(stdout);
+}
+
+void test_msg(const char *format, ...)
+{
+	va_list ap;
+
+	va_start(ap, format);
+	msg_with_prefix("# ", format, ap);
+	va_end(ap);
+}
+
+void test_plan(int count)
+{
+	assert(!ctx.running);
+
+	fflush(stderr);
+	printf("1..%d\n", count);
+	fflush(stdout);
+	ctx.lazy_plan = 0;
+}
+
+int test_done(void)
+{
+	assert(!ctx.running);
+
+	if (ctx.lazy_plan)
+		test_plan(ctx.count);
+
+	return ctx.failed;
+}
+
+void test_skip(const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	if (format)
+		msg_with_prefix("# skipping test - ", format, ap);
+	va_end(ap);
+}
+
+void test_skip_all(const char *format, ...)
+{
+	va_list ap;
+	const char *prefix;
+
+	if (!ctx.count && ctx.lazy_plan) {
+		/* We have not printed a test plan yet */
+		prefix = "1..0 # SKIP ";
+		ctx.lazy_plan = 0;
+	} else {
+		/* We have already printed a test plan */
+		prefix = "Bail out! # ";
+		ctx.failed = 1;
+	}
+	ctx.skip_all = 1;
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	msg_with_prefix(prefix, format, ap);
+	va_end(ap);
+}
+
+int test__run_begin(void)
+{
+	assert(!ctx.running);
+
+	ctx.count++;
+	ctx.result = RESULT_NONE;
+	ctx.running = 1;
+
+	return ctx.skip_all;
+}
+
+static void print_description(const char *format, va_list ap)
+{
+	if (format) {
+		fputs(" - ", stdout);
+		vprintf(format, ap);
+	}
+}
+
+int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	fflush(stderr);
+	va_start(ap, format);
+	if (!ctx.skip_all) {
+		switch (ctx.result) {
+		case RESULT_SUCCESS:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_FAILURE:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_TODO:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # TODO");
+			break;
+
+		case RESULT_SKIP:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # SKIP");
+			break;
+
+		case RESULT_NONE:
+			test_msg("BUG: test has no checks at %s", location);
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			ctx.result = RESULT_FAILURE;
+			break;
+		}
+	}
+	va_end(ap);
+	ctx.running = 0;
+	if (ctx.skip_all)
+		return 0;
+	putc('\n', stdout);
+	fflush(stdout);
+	ctx.failed |= ctx.result == RESULT_FAILURE;
+
+	return -(ctx.result == RESULT_FAILURE);
+}
+
+static void test_fail(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	ctx.result = RESULT_FAILURE;
+}
+
+static void test_pass(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result == RESULT_NONE)
+		ctx.result = RESULT_SUCCESS;
+}
+
+static void test_todo(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result != RESULT_FAILURE)
+		ctx.result = RESULT_TODO;
+}
+
+int test_assert(const char *location, const char *check, int ok)
+{
+	assert(ctx.running);
+
+	if (ctx.result == RESULT_SKIP) {
+		test_msg("skipping check '%s' at %s", check, location);
+		return 0;
+	} else if (!ctx.todo) {
+		if (ok) {
+			test_pass();
+		} else {
+			test_msg("check \"%s\" failed at %s", check, location);
+			test_fail();
+		}
+	}
+
+	return -!ok;
+}
+
+void test__todo_begin(void)
+{
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	ctx.todo = 1;
+}
+
+int test__todo_end(const char *location, const char *check, int res)
+{
+	assert(ctx.running);
+	assert(ctx.todo);
+
+	ctx.todo = 0;
+	if (ctx.result == RESULT_SKIP)
+		return 0;
+	if (!res) {
+		test_msg("todo check '%s' succeeded at %s", check, location);
+		test_fail();
+	} else {
+		test_todo();
+	}
+
+	return -!res;
+}
+
+int check_bool_loc(const char *loc, const char *check, int ok)
+{
+	return test_assert(loc, check, ok);
+}
+
+union test__tmp test__tmp[2];
+
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		test_msg("   left: %"PRIdMAX, a);
+		test_msg("  right: %"PRIdMAX, b);
+	}
+
+	return ret;
+}
+
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		test_msg("   left: %"PRIuMAX, a);
+		test_msg("  right: %"PRIuMAX, b);
+	}
+
+	return ret;
+}
+
+static void print_one_char(char ch, char quote)
+{
+	if ((unsigned char)ch < 0x20u || ch == 0x7f) {
+		/* TODO: improve handling of \a, \b, \f ... */
+		printf("\\%03o", (unsigned char)ch);
+	} else {
+		if (ch == '\\' || ch == quote)
+			putc('\\', stdout);
+		putc(ch, stdout);
+	}
+}
+
+static void print_char(const char *prefix, char ch)
+{
+	printf("# %s: '", prefix);
+	print_one_char(ch, '\'');
+	fputs("'\n", stdout);
+}
+
+int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		fflush(stderr);
+		print_char("   left", a);
+		print_char("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
+
+static void print_str(const char *prefix, const char *str)
+{
+	printf("# %s: ", prefix);
+	if (!str) {
+		fputs("NULL\n", stdout);
+	} else {
+		putc('"', stdout);
+		while (*str)
+			print_one_char(*str++, '"');
+		fputs("\"\n", stdout);
+	}
+}
+
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b)
+{
+	int ok = (!a && !b) || (a && b && !strcmp(a, b));
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		fflush(stderr);
+		print_str("   left", a);
+		print_str("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
new file mode 100644
index 0000000000..720c97c6f8
--- /dev/null
+++ b/t/unit-tests/test-lib.h
@@ -0,0 +1,143 @@
+#ifndef TEST_LIB_H
+#define TEST_LIB_H
+
+#include "git-compat-util.h"
+
+/*
+ * Run a test function, returns 0 if the test succeeds, -1 if it
+ * fails. If test_skip_all() has been called then the test will not be
+ * run. The description for each test should be unique. For example:
+ *
+ *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
+ */
+#define TEST(t, ...)					\
+	test__run_end(test__run_begin() ? 0 : (t, 1),	\
+		      TEST_LOCATION(),  __VA_ARGS__)
+
+/*
+ * Print a test plan, should be called before any tests. If the number
+ * of tests is not known in advance test_done() will automatically
+ * print a plan at the end of the test program.
+ */
+void test_plan(int count);
+
+/*
+ * test_done() must be called at the end of main(). It will print the
+ * plan if plan() was not called at the beginning of the test program
+ * and returns the exit code for the test program.
+ */
+int test_done(void);
+
+/* Skip the current test. */
+__attribute__((format (printf, 1, 2)))
+void test_skip(const char *format, ...);
+
+/* Skip all remaining tests. */
+__attribute__((format (printf, 1, 2)))
+void test_skip_all(const char *format, ...);
+
+/* Print a diagnostic message to stdout. */
+__attribute__((format (printf, 1, 2)))
+void test_msg(const char *format, ...);
+
+/*
+ * Test checks are built around test_assert(). checks return 0 on
+ * success, -1 on failure. If any check fails then the test will
+ * fail. To create a custom check define a function that wraps
+ * test_assert() and a macro to wrap that function. For example:
+ *
+ *  static int check_oid_loc(const char *loc, const char *check,
+ *			     struct object_id *a, struct object_id *b)
+ *  {
+ *	    int res = test_assert(loc, check, oideq(a, b));
+ *
+ *	    if (res) {
+ *		    test_msg("   left: %s", oid_to_hex(a);
+ *		    test_msg("  right: %s", oid_to_hex(a);
+ *
+ *	    }
+ *	    return res;
+ *  }
+ *
+ *  #define check_oid(a, b) \
+ *	    check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
+ */
+int test_assert(const char *location, const char *check, int ok);
+
+/* Helper macro to pass the location to checks */
+#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
+
+/* Check a boolean condition. */
+#define check(x)				\
+	check_bool_loc(TEST_LOCATION(), #x, x)
+int check_bool_loc(const char *loc, const char *check, int ok);
+
+/*
+ * Compare two integers. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_int(a, op, b)						\
+	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
+	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+		       test__tmp[0].i op test__tmp[1].i, a, b))
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b);
+
+/*
+ * Compare two unsigned integers. Prints a message with the two values
+ * if the comparison fails. NB this is not thread safe.
+ */
+#define check_uint(a, op, b)						\
+	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
+	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].u op test__tmp[1].u, a, b))
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b);
+
+/*
+ * Compare two chars. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_char(a, op, b)						\
+	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
+	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].c op test__tmp[1].c, a, b))
+int check_char_loc(const char *loc, const char *check, int ok,
+		   char a, char b);
+
+/* Check whether two strings are equal. */
+#define check_str(a, b)							\
+	check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b);
+
+/*
+ * Wrap a check that is known to fail. If the check succeeds then the
+ * test will fail. Returns 0 if the check fails, -1 if it
+ * succeeds. For example:
+ *
+ *  TEST_TODO(check(0));
+ */
+#define TEST_TODO(check) \
+	(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
+
+/* Private helpers */
+
+#define TEST__STR(x) #x
+#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
+
+union test__tmp {
+	intmax_t i;
+	uintmax_t u;
+	char c;
+};
+
+extern union test__tmp test__tmp[2];
+
+int test__run_begin(void);
+__attribute__((format (printf, 3, 4)))
+int test__run_end(int, const char *, const char *, ...);
+void test__todo_begin(void);
+int test__todo_end(const char *, const char *, int);
+
+#endif /* TEST_LIB_H */

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Makefile                    |  24 ++-
 t/Makefile                  |  15 +-
 t/t0080-unit-test-output.sh |  58 +++++++
 t/unit-tests/.gitignore     |   2 +
 t/unit-tests/t-basic.c      |  95 +++++++++++
 t/unit-tests/t-strbuf.c     |  75 ++++++++
 t/unit-tests/test-lib.c     | 329 ++++++++++++++++++++++++++++++++++++
 t/unit-tests/test-lib.h     | 143 ++++++++++++++++
 8 files changed, 737 insertions(+), 4 deletions(-)
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

diff --git a/Makefile b/Makefile
index e440728c24..4016da6e39 100644
--- a/Makefile
+++ b/Makefile
@@ -682,6 +682,8 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests
 
 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1331,6 +1333,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
 THIRD_PARTY_SOURCES += sha1collisiondetection/%
 THIRD_PARTY_SOURCES += sha1dc/%
 
+UNIT_TEST_PROGRAMS += t-basic
+UNIT_TEST_PROGRAMS += t-strbuf
+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_DIR)/%$X,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
+
 # xdiff and reftable libs may in turn depend on what is in libgit.a
 GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
 EXTLIBS =
@@ -2672,6 +2680,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(UNIT_TEST_OBJS)
 
 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3167,7 +3176,7 @@ endif
 
 test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
 
-all:: $(TEST_PROGRAMS) $(test_bindir_programs)
+all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
 
 bin-wrappers/%: wrap-for-bin.sh
 	$(call mkdir_p_parent_template)
@@ -3592,7 +3601,7 @@ endif
 
 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
-		$(MOFILES)
+		$(UNIT_TEST_PROGS) $(MOFILES)
 	$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
 		SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
 	test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3653,7 +3662,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3831,3 +3840,12 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
 
 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_PROGS): $(UNIT_TEST_DIR)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS
+	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGS)
+unit-tests: $(UNIT_TEST_PROGS)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..2db8b3adb1 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -17,6 +17,7 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+DEFAULT_UNIT_TEST_TARGET ?= unit-tests-raw
 TEST_LINT ?= test-lint
 
 ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,17 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
+$(UNIT_TESTS):
+	@echo "*** $@ ***"; $@
+
+.PHONY: unit-tests unit-tests-raw unit-tests-prove
+unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
+
+unit-tests-raw: $(UNIT_TESTS)
+
+unit-tests-prove:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
@@ -149,4 +162,4 @@ perf:
 	$(MAKE) -C perf/ all
 
 .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
-	check-chainlint clean-chainlint test-chainlint
+	check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
new file mode 100755
index 0000000000..e0fc07d1e5
--- /dev/null
+++ b/t/t0080-unit-test-output.sh
@@ -0,0 +1,58 @@
+#!/bin/sh
+
+test_description='Test the output of the unit test framework'
+
+. ./test-lib.sh
+
+test_expect_success 'TAP output from unit tests' '
+	cat >expect <<-EOF &&
+	ok 1 - passing test
+	ok 2 - passing test and assertion return 0
+	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
+	#    left: 1
+	#   right: 2
+	not ok 3 - failing test
+	ok 4 - failing test and assertion return -1
+	not ok 5 - passing TEST_TODO() # TODO
+	ok 6 - passing TEST_TODO() returns 0
+	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
+	not ok 7 - failing TEST_TODO()
+	ok 8 - failing TEST_TODO() returns -1
+	# check "0" failed at t/unit-tests/t-basic.c:30
+	# skipping test - missing prerequisite
+	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
+	ok 9 - test_skip() # SKIP
+	ok 10 - skipped test returns 0
+	# skipping test - missing prerequisite
+	ok 11 - test_skip() inside TEST_TODO() # SKIP
+	ok 12 - test_skip() inside TEST_TODO() returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:48
+	not ok 13 - TEST_TODO() after failing check
+	ok 14 - TEST_TODO() after failing check returns -1
+	# check "0" failed at t/unit-tests/t-basic.c:56
+	not ok 15 - failing check after TEST_TODO()
+	ok 16 - failing check after TEST_TODO() returns -1
+	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
+	#    left: "\011hello\\\\"
+	#   right: "there\"\012"
+	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
+	#    left: "NULL"
+	#   right: NULL
+	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
+	#    left: ${SQ}a${SQ}
+	#   right: ${SQ}\012${SQ}
+	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
+	#    left: ${SQ}\\\\${SQ}
+	#   right: ${SQ}\\${SQ}${SQ}
+	not ok 17 - messages from failing string and char comparison
+	# BUG: test has no checks at t/unit-tests/t-basic.c:91
+	not ok 18 - test with no checks
+	ok 19 - test with no checks returns -1
+	1..19
+	EOF
+
+	! "$GIT_BUILD_DIR"/t/unit-tests/t-basic >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
new file mode 100644
index 0000000000..e292d58348
--- /dev/null
+++ b/t/unit-tests/.gitignore
@@ -0,0 +1,2 @@
+/t-basic
+/t-strbuf
diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
new file mode 100644
index 0000000000..d20f444fab
--- /dev/null
+++ b/t/unit-tests/t-basic.c
@@ -0,0 +1,95 @@
+#include "test-lib.h"
+
+/*
+ * The purpose of this "unit test" is to verify a few invariants of the unit
+ * test framework itself, as well as to provide examples of output from actually
+ * failing tests. As such, it is intended that this test fails, and thus it
+ * should not be run as part of `make unit-tests`. Instead, we verify it behaves
+ * as expected in the integration test t0080-unit-test-output.sh
+ */
+
+/* Used to store the return value of check_int(). */
+static int check_res;
+
+/* Used to store the return value of TEST(). */
+static int test_res;
+
+static void t_res(int expect)
+{
+	check_int(check_res, ==, expect);
+	check_int(test_res, ==, expect);
+}
+
+static void t_todo(int x)
+{
+	check_res = TEST_TODO(check(x));
+}
+
+static void t_skip(void)
+{
+	check(0);
+	test_skip("missing prerequisite");
+	check(1);
+}
+
+static int do_skip(void)
+{
+	test_skip("missing prerequisite");
+	return 0;
+}
+
+static void t_skip_todo(void)
+{
+	check_res = TEST_TODO(do_skip());
+}
+
+static void t_todo_after_fail(void)
+{
+	check(0);
+	TEST_TODO(check(0));
+}
+
+static void t_fail_after_todo(void)
+{
+	check(1);
+	TEST_TODO(check(0));
+	check(0);
+}
+
+static void t_messages(void)
+{
+	check_str("\thello\\", "there\"\n");
+	check_str("NULL", NULL);
+	check_char('a', ==, '\n');
+	check_char('\\', ==, '\'');
+}
+
+static void t_empty(void)
+{
+	; /* empty */
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
+	TEST(t_res(0), "passing test and assertion return 0");
+	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
+	TEST(t_res(-1), "failing test and assertion return -1");
+	test_res = TEST(t_todo(0), "passing TEST_TODO()");
+	TEST(t_res(0), "passing TEST_TODO() returns 0");
+	test_res = TEST(t_todo(1), "failing TEST_TODO()");
+	TEST(t_res(-1), "failing TEST_TODO() returns -1");
+	test_res = TEST(t_skip(), "test_skip()");
+	TEST(check_int(test_res, ==, 0), "skipped test returns 0");
+	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
+	TEST(t_res(0), "test_skip() inside TEST_TODO() returns 0");
+	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
+	TEST(check_int(test_res, ==, -1), "TEST_TODO() after failing check returns -1");
+	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
+	TEST(check_int(test_res, ==, -1), "failing check after TEST_TODO() returns -1");
+	TEST(t_messages(), "messages from failing string and char comparison");
+	test_res = TEST(t_empty(), "test with no checks");
+	TEST(check_int(test_res, ==, -1), "test with no checks returns -1");
+
+	return test_done();
+}
diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
new file mode 100644
index 0000000000..561611e242
--- /dev/null
+++ b/t/unit-tests/t-strbuf.c
@@ -0,0 +1,75 @@
+#include "test-lib.h"
+#include "strbuf.h"
+
+/* wrapper that supplies tests with an initialized strbuf */
+static void setup(void (*f)(struct strbuf*, void*), void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	check(buf.buf == strbuf_slopbuf);
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_static_init(void)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	if (check(buf.buf == strbuf_slopbuf))
+		return; /* avoid de-referencing buf.buf */
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_dynamic_init(void)
+{
+	struct strbuf buf;
+
+	strbuf_init(&buf, 1024);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, >=, 1024);
+	check_char(buf.buf[0], ==, '\0');
+	strbuf_release(&buf);
+}
+
+static void t_addch(struct strbuf *buf, void *data)
+{
+	const char *p_ch = data;
+	const char ch = *p_ch;
+
+	strbuf_addch(buf, ch);
+	if (check_uint(buf->len, ==, 1) ||
+	    check_uint(buf->alloc, >, 1))
+		return; /* avoid de-referencing buf->buf */
+	check_char(buf->buf[0], ==, ch);
+	check_char(buf->buf[1], ==, '\0');
+}
+
+static void t_addstr(struct strbuf *buf, void *data)
+{
+	const char *text = data;
+	size_t len = strlen(text);
+
+	strbuf_addstr(buf, text);
+	if (check_uint(buf->len, ==, len) ||
+	    check_uint(buf->alloc, >, len) ||
+	    check_char(buf->buf[len], ==, '\0'))
+	    return;
+	check_str(buf->buf, text);
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	if (TEST(t_static_init(), "static initialization works"))
+		test_skip_all("STRBUF_INIT is broken");
+	TEST(t_dynamic_init(), "dynamic initialization works");
+	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
+	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
+	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
+
+	return test_done();
+}
diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
new file mode 100644
index 0000000000..70030d587f
--- /dev/null
+++ b/t/unit-tests/test-lib.c
@@ -0,0 +1,329 @@
+#include "test-lib.h"
+
+enum result {
+	RESULT_NONE,
+	RESULT_FAILURE,
+	RESULT_SKIP,
+	RESULT_SUCCESS,
+	RESULT_TODO
+};
+
+static struct {
+	enum result result;
+	int count;
+	unsigned failed :1;
+	unsigned lazy_plan :1;
+	unsigned running :1;
+	unsigned skip_all :1;
+	unsigned todo :1;
+} ctx = {
+	.lazy_plan = 1,
+	.result = RESULT_NONE,
+};
+
+static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
+{
+	fflush(stderr);
+	if (prefix)
+		fprintf(stdout, "%s", prefix);
+	vprintf(format, ap); /* TODO: handle newlines */
+	putc('\n', stdout);
+	fflush(stdout);
+}
+
+void test_msg(const char *format, ...)
+{
+	va_list ap;
+
+	va_start(ap, format);
+	msg_with_prefix("# ", format, ap);
+	va_end(ap);
+}
+
+void test_plan(int count)
+{
+	assert(!ctx.running);
+
+	fflush(stderr);
+	printf("1..%d\n", count);
+	fflush(stdout);
+	ctx.lazy_plan = 0;
+}
+
+int test_done(void)
+{
+	assert(!ctx.running);
+
+	if (ctx.lazy_plan)
+		test_plan(ctx.count);
+
+	return ctx.failed;
+}
+
+void test_skip(const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	if (format)
+		msg_with_prefix("# skipping test - ", format, ap);
+	va_end(ap);
+}
+
+void test_skip_all(const char *format, ...)
+{
+	va_list ap;
+	const char *prefix;
+
+	if (!ctx.count && ctx.lazy_plan) {
+		/* We have not printed a test plan yet */
+		prefix = "1..0 # SKIP ";
+		ctx.lazy_plan = 0;
+	} else {
+		/* We have already printed a test plan */
+		prefix = "Bail out! # ";
+		ctx.failed = 1;
+	}
+	ctx.skip_all = 1;
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	msg_with_prefix(prefix, format, ap);
+	va_end(ap);
+}
+
+int test__run_begin(void)
+{
+	assert(!ctx.running);
+
+	ctx.count++;
+	ctx.result = RESULT_NONE;
+	ctx.running = 1;
+
+	return ctx.skip_all;
+}
+
+static void print_description(const char *format, va_list ap)
+{
+	if (format) {
+		fputs(" - ", stdout);
+		vprintf(format, ap);
+	}
+}
+
+int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	fflush(stderr);
+	va_start(ap, format);
+	if (!ctx.skip_all) {
+		switch (ctx.result) {
+		case RESULT_SUCCESS:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_FAILURE:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_TODO:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # TODO");
+			break;
+
+		case RESULT_SKIP:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # SKIP");
+			break;
+
+		case RESULT_NONE:
+			test_msg("BUG: test has no checks at %s", location);
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			ctx.result = RESULT_FAILURE;
+			break;
+		}
+	}
+	va_end(ap);
+	ctx.running = 0;
+	if (ctx.skip_all)
+		return 0;
+	putc('\n', stdout);
+	fflush(stdout);
+	ctx.failed |= ctx.result == RESULT_FAILURE;
+
+	return -(ctx.result == RESULT_FAILURE);
+}
+
+static void test_fail(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	ctx.result = RESULT_FAILURE;
+}
+
+static void test_pass(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result == RESULT_NONE)
+		ctx.result = RESULT_SUCCESS;
+}
+
+static void test_todo(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result != RESULT_FAILURE)
+		ctx.result = RESULT_TODO;
+}
+
+int test_assert(const char *location, const char *check, int ok)
+{
+	assert(ctx.running);
+
+	if (ctx.result == RESULT_SKIP) {
+		test_msg("skipping check '%s' at %s", check, location);
+		return 0;
+	} else if (!ctx.todo) {
+		if (ok) {
+			test_pass();
+		} else {
+			test_msg("check \"%s\" failed at %s", check, location);
+			test_fail();
+		}
+	}
+
+	return -!ok;
+}
+
+void test__todo_begin(void)
+{
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	ctx.todo = 1;
+}
+
+int test__todo_end(const char *location, const char *check, int res)
+{
+	assert(ctx.running);
+	assert(ctx.todo);
+
+	ctx.todo = 0;
+	if (ctx.result == RESULT_SKIP)
+		return 0;
+	if (!res) {
+		test_msg("todo check '%s' succeeded at %s", check, location);
+		test_fail();
+	} else {
+		test_todo();
+	}
+
+	return -!res;
+}
+
+int check_bool_loc(const char *loc, const char *check, int ok)
+{
+	return test_assert(loc, check, ok);
+}
+
+union test__tmp test__tmp[2];
+
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		test_msg("   left: %"PRIdMAX, a);
+		test_msg("  right: %"PRIdMAX, b);
+	}
+
+	return ret;
+}
+
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		test_msg("   left: %"PRIuMAX, a);
+		test_msg("  right: %"PRIuMAX, b);
+	}
+
+	return ret;
+}
+
+static void print_one_char(char ch, char quote)
+{
+	if ((unsigned char)ch < 0x20u || ch == 0x7f) {
+		/* TODO: improve handling of \a, \b, \f ... */
+		printf("\\%03o", (unsigned char)ch);
+	} else {
+		if (ch == '\\' || ch == quote)
+			putc('\\', stdout);
+		putc(ch, stdout);
+	}
+}
+
+static void print_char(const char *prefix, char ch)
+{
+	printf("# %s: '", prefix);
+	print_one_char(ch, '\'');
+	fputs("'\n", stdout);
+}
+
+int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		fflush(stderr);
+		print_char("   left", a);
+		print_char("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
+
+static void print_str(const char *prefix, const char *str)
+{
+	printf("# %s: ", prefix);
+	if (!str) {
+		fputs("NULL\n", stdout);
+	} else {
+		putc('"', stdout);
+		while (*str)
+			print_one_char(*str++, '"');
+		fputs("\"\n", stdout);
+	}
+}
+
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b)
+{
+	int ok = (!a && !b) || (a && b && !strcmp(a, b));
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		fflush(stderr);
+		print_str("   left", a);
+		print_str("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
new file mode 100644
index 0000000000..720c97c6f8
--- /dev/null
+++ b/t/unit-tests/test-lib.h
@@ -0,0 +1,143 @@
+#ifndef TEST_LIB_H
+#define TEST_LIB_H
+
+#include "git-compat-util.h"
+
+/*
+ * Run a test function, returns 0 if the test succeeds, -1 if it
+ * fails. If test_skip_all() has been called then the test will not be
+ * run. The description for each test should be unique. For example:
+ *
+ *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
+ */
+#define TEST(t, ...)					\
+	test__run_end(test__run_begin() ? 0 : (t, 1),	\
+		      TEST_LOCATION(),  __VA_ARGS__)
+
+/*
+ * Print a test plan, should be called before any tests. If the number
+ * of tests is not known in advance test_done() will automatically
+ * print a plan at the end of the test program.
+ */
+void test_plan(int count);
+
+/*
+ * test_done() must be called at the end of main(). It will print the
+ * plan if plan() was not called at the beginning of the test program
+ * and returns the exit code for the test program.
+ */
+int test_done(void);
+
+/* Skip the current test. */
+__attribute__((format (printf, 1, 2)))
+void test_skip(const char *format, ...);
+
+/* Skip all remaining tests. */
+__attribute__((format (printf, 1, 2)))
+void test_skip_all(const char *format, ...);
+
+/* Print a diagnostic message to stdout. */
+__attribute__((format (printf, 1, 2)))
+void test_msg(const char *format, ...);
+
+/*
+ * Test checks are built around test_assert(). checks return 0 on
+ * success, -1 on failure. If any check fails then the test will
+ * fail. To create a custom check define a function that wraps
+ * test_assert() and a macro to wrap that function. For example:
+ *
+ *  static int check_oid_loc(const char *loc, const char *check,
+ *			     struct object_id *a, struct object_id *b)
+ *  {
+ *	    int res = test_assert(loc, check, oideq(a, b));
+ *
+ *	    if (res) {
+ *		    test_msg("   left: %s", oid_to_hex(a);
+ *		    test_msg("  right: %s", oid_to_hex(a);
+ *
+ *	    }
+ *	    return res;
+ *  }
+ *
+ *  #define check_oid(a, b) \
+ *	    check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
+ */
+int test_assert(const char *location, const char *check, int ok);
+
+/* Helper macro to pass the location to checks */
+#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
+
+/* Check a boolean condition. */
+#define check(x)				\
+	check_bool_loc(TEST_LOCATION(), #x, x)
+int check_bool_loc(const char *loc, const char *check, int ok);
+
+/*
+ * Compare two integers. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_int(a, op, b)						\
+	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
+	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+		       test__tmp[0].i op test__tmp[1].i, a, b))
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b);
+
+/*
+ * Compare two unsigned integers. Prints a message with the two values
+ * if the comparison fails. NB this is not thread safe.
+ */
+#define check_uint(a, op, b)						\
+	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
+	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].u op test__tmp[1].u, a, b))
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b);
+
+/*
+ * Compare two chars. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_char(a, op, b)						\
+	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
+	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].c op test__tmp[1].c, a, b))
+int check_char_loc(const char *loc, const char *check, int ok,
+		   char a, char b);
+
+/* Check whether two strings are equal. */
+#define check_str(a, b)							\
+	check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b);
+
+/*
+ * Wrap a check that is known to fail. If the check succeeds then the
+ * test will fail. Returns 0 if the check fails, -1 if it
+ * succeeds. For example:
+ *
+ *  TEST_TODO(check(0));
+ */
+#define TEST_TODO(check) \
+	(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
+
+/* Private helpers */
+
+#define TEST__STR(x) #x
+#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
+
+union test__tmp {
+	intmax_t i;
+	uintmax_t u;
+	char c;
+};
+
+extern union test__tmp test__tmp[2];
+
+int test__run_begin(void);
+__attribute__((format (printf, 3, 4)))
+int test__run_end(int, const char *, const char *, ...);
+void test__todo_begin(void);
+int test__todo_end(const char *, const char *, int);
+
+#endif /* TEST_LIB_H */
-- 
2.41.0.694.ge786442a9b-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v6 3/3] ci: run unit tests in CI
  2023-08-16 23:50   ` [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Josh Steadmon
  2023-08-16 23:50     ` [PATCH v6 1/3] unit tests: Add a project plan document Josh Steadmon
  2023-08-16 23:50     ` [PATCH v6 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-08-16 23:50     ` Josh Steadmon
  2 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-16 23:50 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

Run unit tests in both Cirrus and GitHub CI. For sharded CI instances
(currently just Windows on GitHub), run only on the first shard. This is
OK while we have only a single unit test executable, but we may wish to
distribute tests more evenly when we add new unit tests in the future.

We may also want to add more status output in our unit test framework,
so that we can do similar post-processing as in
ci/lib.sh:handle_failed_tests().

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 .cirrus.yml               | 2 +-
 ci/run-build-and-tests.sh | 2 ++
 ci/run-test-slice.sh      | 5 +++++
 t/Makefile                | 2 +-
 4 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 4860bebd32..b6280692d2 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -19,4 +19,4 @@ freebsd_12_task:
   build_script:
     - su git -c gmake
   test_script:
-    - su git -c 'gmake test'
+    - su git -c 'gmake DEFAULT_UNIT_TEST_TARGET=unit-tests-prove test unit-tests'
diff --git a/ci/run-build-and-tests.sh b/ci/run-build-and-tests.sh
index 2528f25e31..7a1466b868 100755
--- a/ci/run-build-and-tests.sh
+++ b/ci/run-build-and-tests.sh
@@ -50,6 +50,8 @@ if test -n "$run_tests"
 then
 	group "Run tests" make test ||
 	handle_failed_tests
+	group "Run unit tests" \
+		make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests
 fi
 check_unignored_build_artifacts
 
diff --git a/ci/run-test-slice.sh b/ci/run-test-slice.sh
index a3c67956a8..ae8094382f 100755
--- a/ci/run-test-slice.sh
+++ b/ci/run-test-slice.sh
@@ -15,4 +15,9 @@ group "Run tests" make --quiet -C t T="$(cd t &&
 	tr '\n' ' ')" ||
 handle_failed_tests
 
+# We only have one unit test at the moment, so run it in the first slice
+if [ "$1" == "0" ] ; then
+	group "Run unit tests" make --quiet -C t unit-tests-prove
+fi
+
 check_unignored_build_artifacts
diff --git a/t/Makefile b/t/Makefile
index 2db8b3adb1..095334bfde 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -42,7 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
-UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/t-*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
-- 
2.41.0.694.ge786442a9b-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 2/3] unit tests: add TAP unit test framework
  2023-08-16 23:50     ` [PATCH v6 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-08-17  0:12       ` Junio C Hamano
  2023-08-17  0:41         ` Junio C Hamano
  0 siblings, 1 reply; 67+ messages in thread
From: Junio C Hamano @ 2023-08-17  0:12 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, linusa, calvinwan, phillip.wood123, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> diff --git a/Makefile b/Makefile
> index e440728c24..4016da6e39 100644
>
> --- a/Makefile
> +++ b/Makefile

With that blank line, I seem to be getting

    Applying: unit tests: add TAP unit test framework
    error: patch with only garbage at line 3
    Patch failed at 0002 unit tests: add TAP unit test framework	

And with that blank line removed, I seem to then get

    Applying: unit tests: add TAP unit test framework
    error: patch failed: Makefile:682
    error: Makefile: patch does not apply
    error: patch failed: t/Makefile:41
    error: t/Makefile: patch does not apply

This is on top of "The fifth batch", the commit your cover letter
refers to as the base of the series, so I am puzzled...


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 2/3] unit tests: add TAP unit test framework
  2023-08-17  0:12       ` Junio C Hamano
@ 2023-08-17  0:41         ` Junio C Hamano
  2023-08-17 18:34           ` Josh Steadmon
  0 siblings, 1 reply; 67+ messages in thread
From: Junio C Hamano @ 2023-08-17  0:41 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, linusa, calvinwan, phillip.wood123, rsbecker

Junio C Hamano <gitster@pobox.com> writes:

> Josh Steadmon <steadmon@google.com> writes:
>
>> diff --git a/Makefile b/Makefile
>> index e440728c24..4016da6e39 100644
>>
>> --- a/Makefile
>> +++ b/Makefile
>
> With that blank line, I seem to be getting
>
>     Applying: unit tests: add TAP unit test framework
>     error: patch with only garbage at line 3
>     Patch failed at 0002 unit tests: add TAP unit test framework	
>
> And with that blank line removed, I seem to then get
>
>     Applying: unit tests: add TAP unit test framework
>     error: patch failed: Makefile:682
>     error: Makefile: patch does not apply
>     error: patch failed: t/Makefile:41
>     error: t/Makefile: patch does not apply
>
> This is on top of "The fifth batch", the commit your cover letter
> refers to as the base of the series, so I am puzzled...

Well, I suspected that 2/3 comes from
https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/
which itself is whitespace damaged but has a reference to the
unit-tests branch of https://github.com/phillipwood/git repository.

But it seems to be different in subtle ways.

Please send a set of patches that can be applied cleanly (especially
when it is not an RFC series).

Thanks.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v5] unit tests: Add a project plan document
  2023-08-15 22:55       ` Josh Steadmon
@ 2023-08-17  9:05         ` Phillip Wood
  0 siblings, 0 replies; 67+ messages in thread
From: Phillip Wood @ 2023-08-17  9:05 UTC (permalink / raw)
  To: Josh Steadmon, phillip.wood, git, linusa, calvinwan, gitster

Hi Josh

On 15/08/2023 23:55, Josh Steadmon wrote:
> On 2023.08.14 14:29, Phillip Wood wrote:
>> [...]

>> I don't have a strong preference for which harness we use so long as it
>> provides a way to (a) run tests that previously failed tests first and (b)
>> run slow tests first. I do have a strong preference for using the same
>> harness for both the unit tests and the integration tests so developers
>> don't have to learn two different tools. Unless there is a problem with
>> prove it would probably make sense just to keep using that as the project
>> test harness.
> 
> To be clear, it sounds like both of these can be done with `prove`
> (using the various --state settings) without any further support from
> our unit tests, right? 

Yes

> I see that we do have a "failed" target for
> re-running integration tests, but that relies on some test-lib.sh
> features that currently have no equivalent in the unit test framework.

Ooh, I didn't know about that, I think we could add something similar to 
the framework if we wanted.

> [...]
>> It sounds like we're getting to the point where we have pinned down our
>> requirements and the available alternatives well enough to make a decision.
> 
> Yes, v6 will include your TAP implementation (I assume you are still OK
> if I include your patch in this series?).

Yes that's fine. I'm about to go off the list for a couple of weeks, 
I'll take a proper look at v6 once I'm back.

Best Wishes

Phillip

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v6 2/3] unit tests: add TAP unit test framework
  2023-08-17  0:41         ` Junio C Hamano
@ 2023-08-17 18:34           ` Josh Steadmon
  0 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-17 18:34 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git, linusa, calvinwan, phillip.wood123, rsbecker

On 2023.08.16 17:41, Junio C Hamano wrote:
> Junio C Hamano <gitster@pobox.com> writes:
> 
> > Josh Steadmon <steadmon@google.com> writes:
> >
> >> diff --git a/Makefile b/Makefile
> >> index e440728c24..4016da6e39 100644
> >>
> >> --- a/Makefile
> >> +++ b/Makefile
> >
> > With that blank line, I seem to be getting
> >
> >     Applying: unit tests: add TAP unit test framework
> >     error: patch with only garbage at line 3
> >     Patch failed at 0002 unit tests: add TAP unit test framework	
> >
> > And with that blank line removed, I seem to then get
> >
> >     Applying: unit tests: add TAP unit test framework
> >     error: patch failed: Makefile:682
> >     error: Makefile: patch does not apply
> >     error: patch failed: t/Makefile:41
> >     error: t/Makefile: patch does not apply
> >
> > This is on top of "The fifth batch", the commit your cover letter
> > refers to as the base of the series, so I am puzzled...
> 
> Well, I suspected that 2/3 comes from
> https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/
> which itself is whitespace damaged but has a reference to the
> unit-tests branch of https://github.com/phillipwood/git repository.
> 
> But it seems to be different in subtle ways.
> 
> Please send a set of patches that can be applied cleanly (especially
> when it is not an RFC series).
> 
> Thanks.

Sorry about the noise; I'm not sure how but the patch 2 commit message
got a partial diff pasted in. I've fixed this in v7 which I'll send
shortly.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v7 0/3] Add unit test framework and project plan
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
                     ` (3 preceding siblings ...)
  2023-08-16 23:50   ` [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Josh Steadmon
@ 2023-08-17 18:37   ` Josh Steadmon
  2023-08-17 18:37     ` [PATCH v7 1/3] unit tests: Add a project plan document Josh Steadmon
                       ` (4 more replies)
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
                     ` (2 subsequent siblings)
  7 siblings, 5 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-17 18:37 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This series begins with a project document covering our goals for adding
unit tests and a discussion of alternative frameworks considered, as
well as the features used to evaluate them. A rendered preview of this
doc can be found at [2]. It also adds Phillip Wood's TAP implemenation
(with some slightly re-worked Makefile rules) and a sample strbuf unit
test. Finally, we modify the configs for GitHub and Cirrus CI to run the
unit tests. Sample runs showing successful CI runs can be found at [3],
[4], and [5].

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc
[3] https://github.com/steadmon/git/actions/runs/5884659246/job/15959781385#step:4:1803
[4] https://github.com/steadmon/git/actions/runs/5884659246/job/15959938401#step:5:186
[5] https://cirrus-ci.com/task/6126304366428160 (unrelated tests failed,
    but note that t-strbuf ran successfully)

In addition to reviewing the patches in this series, reviewers can help
this series progress by chiming in on these remaining TODOs:
- Figure out how to ensure tests run on additional OSes such as NonStop
- Figure out if we should collect unit tests statistics similar to the
  "counts" files for shell tests
- Decide if it's OK to wait on sharding unit tests across "sliced" CI
  instances
- Provide guidelines for writing new unit tests

Changes in v7:
- Fix corrupt diff in patch #2, sorry for the noise.

Changes in v6:
- Officially recommend using Phillip Wood's TAP framework
- Add an example strbuf unit test using the TAP framework as well as
  Makefile integration
- Run unit tests in CI

Changes in v5:
- Add comparison point "License".
- Discuss feature priorities
- Drop frameworks:
  - Incompatible licenses: libtap, cmocka
  - Missing source: MyTAP
  - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
- Drop comparison point "Coverage reports": this can generally be
  handled by tools such as `gcov` regardless of the framework used.
- Drop comparison point "Inline tests": there didn't seem to be
  strong interest from reviewers for this feature.
- Drop comparison point "Scheduling / re-running": this was not
  supported by any of the main contenders, and is generally better
  handled by the harness rather than framework.
- Drop comparison point "Lazy test planning": this was supported by
  all frameworks that provide TAP output.

Changes in v4:
- Add link anchors for the framework comparison dimensions
- Explain "Partial" results for each dimension
- Use consistent dimension names in the section headers and comparison
  tables
- Add "Project KLOC", "Adoption", and "Inline tests" dimensions
- Fill in a few of the missing entries in the comparison table

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com


Josh Steadmon (2):
  unit tests: Add a project plan document
  ci: run unit tests in CI

Phillip Wood (1):
  unit tests: add TAP unit test framework

 .cirrus.yml                            |   2 +-
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 220 +++++++++++++++++
 Makefile                               |  24 +-
 ci/run-build-and-tests.sh              |   2 +
 ci/run-test-slice.sh                   |   5 +
 t/Makefile                             |  15 +-
 t/t0080-unit-test-output.sh            |  58 +++++
 t/unit-tests/.gitignore                |   2 +
 t/unit-tests/t-basic.c                 |  95 +++++++
 t/unit-tests/t-strbuf.c                |  75 ++++++
 t/unit-tests/test-lib.c                | 329 +++++++++++++++++++++++++
 t/unit-tests/test-lib.h                | 143 +++++++++++
 13 files changed, 966 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/technical/unit-tests.txt
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

Range-diff against v6:
-:  ---------- > 1:  81c5148a12 unit tests: Add a project plan document
1:  ca284c575e ! 2:  3cc98d4045 unit tests: add TAP unit test framework
    @@ Commit message
         Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
         Signed-off-by: Josh Steadmon <steadmon@google.com>
     
    -    diff --git a/Makefile b/Makefile
    -    index e440728c24..4016da6e39 100644
    -
    -    --- a/Makefile
    -    +++ b/Makefile
    -    @@ -682,6 +682,8 @@ TEST_BUILTINS_OBJS =
    -     TEST_OBJS =
    -     TEST_PROGRAMS_NEED_X =
    -     THIRD_PARTY_SOURCES =
    -    +UNIT_TEST_PROGRAMS =
    -    +UNIT_TEST_DIR = t/unit-tests
    -
    -     # Having this variable in your environment would break pipelines because
    -     # you cause "cd" to echo its destination to stdout.  It can also take
    -    @@ -1331,6 +1333,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
    -     THIRD_PARTY_SOURCES += sha1collisiondetection/%
    -     THIRD_PARTY_SOURCES += sha1dc/%
    -
    -    +UNIT_TEST_PROGRAMS += t-basic
    -    +UNIT_TEST_PROGRAMS += t-strbuf
    -    +UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_DIR)/%$X,$(UNIT_TEST_PROGRAMS))
    -    +UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
    -    +UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
    -    +
    -     # xdiff and reftable libs may in turn depend on what is in libgit.a
    -     GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
    -     EXTLIBS =
    -    @@ -2672,6 +2680,7 @@ OBJECTS += $(TEST_OBJS)
    -     OBJECTS += $(XDIFF_OBJS)
    -     OBJECTS += $(FUZZ_OBJS)
    -     OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
    -    +OBJECTS += $(UNIT_TEST_OBJS)
    -
    -     ifndef NO_CURL
    -            OBJECTS += http.o http-walker.o remote-curl.o
    -    @@ -3167,7 +3176,7 @@ endif
    -
    -     test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
    -
    -    -all:: $(TEST_PROGRAMS) $(test_bindir_programs)
    -    +all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
    -
    -     bin-wrappers/%: wrap-for-bin.sh
    -            $(call mkdir_p_parent_template)
    -    @@ -3592,7 +3601,7 @@ endif
    -
    -     artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
    -                    GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
    -    -               $(MOFILES)
    -    +               $(UNIT_TEST_PROGS) $(MOFILES)
    -            $(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
    -                    SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
    -            test -n "$(ARTIFACTS_DIRECTORY)"
    -    @@ -3653,7 +3662,7 @@ clean: profile-clean coverage-clean cocciclean
    -            $(RM) $(OBJECTS)
    -            $(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
    -            $(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
    -    -       $(RM) $(TEST_PROGRAMS)
    -    +       $(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
    -            $(RM) $(FUZZ_PROGRAMS)
    -            $(RM) $(SP_OBJ)
    -            $(RM) $(HCC)
    -    @@ -3831,3 +3840,12 @@ $(FUZZ_PROGRAMS): all
    -                    $(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
    -
    -     fuzz-all: $(FUZZ_PROGRAMS)
    -    +
    -    +$(UNIT_TEST_PROGS): $(UNIT_TEST_DIR)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS
    -    +       $(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
    -    +               $(filter %.o,$^) $(filter %.a,$^) $(LIBS)
    -    +
    -    +.PHONY: build-unit-tests unit-tests
    -    +build-unit-tests: $(UNIT_TEST_PROGS)
    -    +unit-tests: $(UNIT_TEST_PROGS)
    -    +       $(MAKE) -C t/ unit-tests
    -    diff --git a/t/Makefile b/t/Makefile
    -    index 3e00cdd801..92864cdf28 100644
    -    --- a/t/Makefile
    -    +++ b/t/Makefile
    -    @@ -41,6 +41,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
    -     TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
    -     CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
    -     CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
    -    +UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
    -
    -     # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
    -     # checks all tests in all scripts via a single invocation, so tell individual
    -    @@ -65,6 +66,13 @@ prove: pre-clean check-chainlint $(TEST_LINT)
    -     $(T):
    -            @echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
    -
    -    +$(UNIT_TESTS):
    -    +       @echo "*** $@ ***"; $@
    -    +
    -    +.PHONY: unit-tests
    -    +unit-tests:
    -    +       @echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
    -    +
    -     pre-clean:
    -            $(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
    -
    -    @@ -149,4 +157,4 @@ perf:
    -            $(MAKE) -C perf/ all
    -
    -     .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
    -    -       check-chainlint clean-chainlint test-chainlint
    -    +       check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
    -    diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
    -    new file mode 100755
    -    index 0000000000..c60e402260
    -    --- /dev/null
    -    +++ b/t/t0080-unit-test-output.sh
    -    @@ -0,0 +1,58 @@
    -    +#!/bin/sh
    -    +
    -    +test_description='Test the output of the unit test framework'
    -    +
    -    +. ./test-lib.sh
    -    +
    -    +test_expect_success 'TAP output from unit tests' '
    -    +       cat >expect <<-EOF &&
    -    +       ok 1 - passing test
    -    +       ok 2 - passing test and assertion return 0
    -    +       # check "1 == 2" failed at t/unit-tests/t-basic.c:68
    -    +       #    left: 1
    -    +       #   right: 2
    -    +       not ok 3 - failing test
    -    +       ok 4 - failing test and assertion return -1
    -    +       not ok 5 - passing TEST_TODO() # TODO
    -    +       ok 6 - passing TEST_TODO() returns 0
    -    +       # todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:17
    -    +       not ok 7 - failing TEST_TODO()
    -    +       ok 8 - failing TEST_TODO() returns -1
    -    +       # check "0" failed at t/unit-tests/t-basic.c:22
    -    +       # skipping test - missing prerequisite
    -    +       # skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:24
    -    +       ok 9 - test_skip() # SKIP
    -    +       ok 10 - skipped test returns 0
    -    +       # skipping test - missing prerequisite
    -    +       ok 11 - test_skip() inside TEST_TODO() # SKIP
    -    +       ok 12 - test_skip() inside TEST_TODO() returns 0
    -    +       # check "0" failed at t/unit-tests/t-basic.c:40
    -    +       not ok 13 - TEST_TODO() after failing check
    -    +       ok 14 - TEST_TODO() after failing check returns -1
    -    +       # check "0" failed at t/unit-tests/t-basic.c:48
    -    +       not ok 15 - failing check after TEST_TODO()
    -    +       ok 16 - failing check after TEST_TODO() returns -1
    -    +       # check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:53
    -    +       #    left: "\011hello\\\\"
    -    +       #   right: "there\"\012"
    -    +       # check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:54
    -    +       #    left: "NULL"
    -    +       #   right: NULL
    -    +       # check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:55
    -    +       #    left: ${SQ}a${SQ}
    -    +       #   right: ${SQ}\012${SQ}
    -    +       # check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:56
    -    +       #    left: ${SQ}\\\\${SQ}
    -    +       #   right: ${SQ}\\${SQ}${SQ}
    -    +       not ok 17 - messages from failing string and char comparison
    -    +       # BUG: test has no checks at t/unit-tests/t-basic.c:83
    -    +       not ok 18 - test with no checks
    -    +       ok 19 - test with no checks returns -1
    -    +       1..19
    -    +       EOF
    -    +
    -    +       ! "$GIT_BUILD_DIR"/t/unit-tests/t-basic >actual &&
    -    +       test_cmp expect actual
    -    +'
    -    +
    -    +test_done
    -    diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
    -    new file mode 100644
    -    index 0000000000..e292d58348
    -    --- /dev/null
    -    +++ b/t/unit-tests/.gitignore
    -    @@ -0,0 +1,2 @@
    -    +/t-basic
    -    +/t-strbuf
    -    diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
    -    new file mode 100644
    -    index 0000000000..ab0b7682c4
    -    --- /dev/null
    -    +++ b/t/unit-tests/t-basic.c
    -    @@ -0,0 +1,87 @@
    -    +#include "test-lib.h"
    -    +
    -    +/* Used to store the return value of check_int(). */
    -    +static int check_res;
    -    +
    -    +/* Used to store the return value of TEST(). */
    -    +static int test_res;
    -    +
    -    +static void t_res(int expect)
    -    +{
    -    +       check_int(check_res, ==, expect);
    -    +       check_int(test_res, ==, expect);
    -    +}
    -    +
    -    +static void t_todo(int x)
    -    +{
    -    +       check_res = TEST_TODO(check(x));
    -    +}
    -    +
    -    +static void t_skip(void)
    -    +{
    -    +       check(0);
    -    +       test_skip("missing prerequisite");
    -    +       check(1);
    -    +}
    -    +
    -    +static int do_skip(void)
    -    +{
    -    +       test_skip("missing prerequisite");
    -    +       return 0;
    -    +}
    -    +
    -    +static void t_skip_todo(void)
    -    +{
    -    +       check_res = TEST_TODO(do_skip());
    -    +}
    -    +
    -    +static void t_todo_after_fail(void)
    -    +{
    -    +       check(0);
    -    +       TEST_TODO(check(0));
    -    +}
    -    +
    -    +static void t_fail_after_todo(void)
    -    +{
    -    +       check(1);
    -    +       TEST_TODO(check(0));
    -    +       check(0);
    -    +}
    -    +
    -    +static void t_messages(void)
    -    +{
    -    +       check_str("\thello\\", "there\"\n");
    -    +       check_str("NULL", NULL);
    -    +       check_char('a', ==, '\n');
    -    +       check_char('\\', ==, '\'');
    -    +}
    -    +
    -    +static void t_empty(void)
    -    +{
    -    +       ; /* empty */
    -    +}
    -    +
    -    +int cmd_main(int argc, const char **argv)
    -    +{
    -    +       test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
    -    +       TEST(t_res(0), "passing test and assertion return 0");
    -    +       test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
    -    +       TEST(t_res(-1), "failing test and assertion return -1");
    -    +       test_res = TEST(t_todo(0), "passing TEST_TODO()");
    -    +       TEST(t_res(0), "passing TEST_TODO() returns 0");
    -    +       test_res = TEST(t_todo(1), "failing TEST_TODO()");
    -    +       TEST(t_res(-1), "failing TEST_TODO() returns -1");
    -    +       test_res = TEST(t_skip(), "test_skip()");
    -    +       TEST(check_int(test_res, ==, 0), "skipped test returns 0");
    -    +       test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
    -    +       TEST(t_res(0), "test_skip() inside TEST_TODO() returns 0");
    -    +       test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
    -    +       TEST(check_int(test_res, ==, -1), "TEST_TODO() after failing check returns -1");
    -    +       test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
    -    +       TEST(check_int(test_res, ==, -1), "failing check after TEST_TODO() returns -1");
    -    +       TEST(t_messages(), "messages from failing string and char comparison");
    -    +       test_res = TEST(t_empty(), "test with no checks");
    -    +       TEST(check_int(test_res, ==, -1), "test with no checks returns -1");
    -    +
    -    +       return test_done();
    -    +}
    -    diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
    -    new file mode 100644
    -    index 0000000000..561611e242
    -    --- /dev/null
    -    +++ b/t/unit-tests/t-strbuf.c
    -    @@ -0,0 +1,75 @@
    -    +#include "test-lib.h"
    -    +#include "strbuf.h"
    -    +
    -    +/* wrapper that supplies tests with an initialized strbuf */
    -    +static void setup(void (*f)(struct strbuf*, void*), void *data)
    -    +{
    -    +       struct strbuf buf = STRBUF_INIT;
    -    +
    -    +       f(&buf, data);
    -    +       strbuf_release(&buf);
    -    +       check_uint(buf.len, ==, 0);
    -    +       check_uint(buf.alloc, ==, 0);
    -    +       check(buf.buf == strbuf_slopbuf);
    -    +       check_char(buf.buf[0], ==, '\0');
    -    +}
    -    +
    -    +static void t_static_init(void)
    -    +{
    -    +       struct strbuf buf = STRBUF_INIT;
    -    +
    -    +       check_uint(buf.len, ==, 0);
    -    +       check_uint(buf.alloc, ==, 0);
    -    +       if (check(buf.buf == strbuf_slopbuf))
    -    +               return; /* avoid de-referencing buf.buf */
    -    +       check_char(buf.buf[0], ==, '\0');
    -    +}
    -    +
    -    +static void t_dynamic_init(void)
    -    +{
    -    +       struct strbuf buf;
    -    +
    -    +       strbuf_init(&buf, 1024);
    -    +       check_uint(buf.len, ==, 0);
    -    +       check_uint(buf.alloc, >=, 1024);
    -    +       check_char(buf.buf[0], ==, '\0');
    -    +       strbuf_release(&buf);
    -    +}
    -    +
    -    +static void t_addch(struct strbuf *buf, void *data)
    -    +{
    -    +       const char *p_ch = data;
    -    +       const char ch = *p_ch;
    -    +
    -    +       strbuf_addch(buf, ch);
    -    +       if (check_uint(buf->len, ==, 1) ||
    -    +           check_uint(buf->alloc, >, 1))
    -    +               return; /* avoid de-referencing buf->buf */
    -    +       check_char(buf->buf[0], ==, ch);
    -    +       check_char(buf->buf[1], ==, '\0');
    -    +}
    -    +
    -    +static void t_addstr(struct strbuf *buf, void *data)
    -    +{
    -    +       const char *text = data;
    -    +       size_t len = strlen(text);
    -    +
    -    +       strbuf_addstr(buf, text);
    -    +       if (check_uint(buf->len, ==, len) ||
    -    +           check_uint(buf->alloc, >, len) ||
    -    +           check_char(buf->buf[len], ==, '\0'))
    -    +           return;
    -    +       check_str(buf->buf, text);
    -    +}
    -    +
    -    +int cmd_main(int argc, const char **argv)
    -    +{
    -    +       if (TEST(t_static_init(), "static initialization works"))
    -    +               test_skip_all("STRBUF_INIT is broken");
    -    +       TEST(t_dynamic_init(), "dynamic initialization works");
    -    +       TEST(setup(t_addch, "a"), "strbuf_addch adds char");
    -    +       TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
    -    +       TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
    -    +
    -    +       return test_done();
    -    +}
    -    diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
    -    new file mode 100644
    -    index 0000000000..70030d587f
    -    --- /dev/null
    -    +++ b/t/unit-tests/test-lib.c
    -    @@ -0,0 +1,329 @@
    -    +#include "test-lib.h"
    -    +
    -    +enum result {
    -    +       RESULT_NONE,
    -    +       RESULT_FAILURE,
    -    +       RESULT_SKIP,
    -    +       RESULT_SUCCESS,
    -    +       RESULT_TODO
    -    +};
    -    +
    -    +static struct {
    -    +       enum result result;
    -    +       int count;
    -    +       unsigned failed :1;
    -    +       unsigned lazy_plan :1;
    -    +       unsigned running :1;
    -    +       unsigned skip_all :1;
    -    +       unsigned todo :1;
    -    +} ctx = {
    -    +       .lazy_plan = 1,
    -    +       .result = RESULT_NONE,
    -    +};
    -    +
    -    +static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
    -    +{
    -    +       fflush(stderr);
    -    +       if (prefix)
    -    +               fprintf(stdout, "%s", prefix);
    -    +       vprintf(format, ap); /* TODO: handle newlines */
    -    +       putc('\n', stdout);
    -    +       fflush(stdout);
    -    +}
    -    +
    -    +void test_msg(const char *format, ...)
    -    +{
    -    +       va_list ap;
    -    +
    -    +       va_start(ap, format);
    -    +       msg_with_prefix("# ", format, ap);
    -    +       va_end(ap);
    -    +}
    -    +
    -    +void test_plan(int count)
    -    +{
    -    +       assert(!ctx.running);
    -    +
    -    +       fflush(stderr);
    -    +       printf("1..%d\n", count);
    -    +       fflush(stdout);
    -    +       ctx.lazy_plan = 0;
    -    +}
    -    +
    -    +int test_done(void)
    -    +{
    -    +       assert(!ctx.running);
    -    +
    -    +       if (ctx.lazy_plan)
    -    +               test_plan(ctx.count);
    -    +
    -    +       return ctx.failed;
    -    +}
    -    +
    -    +void test_skip(const char *format, ...)
    -    +{
    -    +       va_list ap;
    -    +
    -    +       assert(ctx.running);
    -    +
    -    +       ctx.result = RESULT_SKIP;
    -    +       va_start(ap, format);
    -    +       if (format)
    -    +               msg_with_prefix("# skipping test - ", format, ap);
    -    +       va_end(ap);
    -    +}
    -    +
    -    +void test_skip_all(const char *format, ...)
    -    +{
    -    +       va_list ap;
    -    +       const char *prefix;
    -    +
    -    +       if (!ctx.count && ctx.lazy_plan) {
    -    +               /* We have not printed a test plan yet */
    -    +               prefix = "1..0 # SKIP ";
    -    +               ctx.lazy_plan = 0;
    -    +       } else {
    -    +               /* We have already printed a test plan */
    -    +               prefix = "Bail out! # ";
    -    +               ctx.failed = 1;
    -    +       }
    -    +       ctx.skip_all = 1;
    -    +       ctx.result = RESULT_SKIP;
    -    +       va_start(ap, format);
    -    +       msg_with_prefix(prefix, format, ap);
    -    +       va_end(ap);
    -    +}
    -    +
    -    +int test__run_begin(void)
    -    +{
    -    +       assert(!ctx.running);
    -    +
    -    +       ctx.count++;
    -    +       ctx.result = RESULT_NONE;
    -    +       ctx.running = 1;
    -    +
    -    +       return ctx.skip_all;
    -    +}
    -    +
    -    +static void print_description(const char *format, va_list ap)
    -    +{
    -    +       if (format) {
    -    +               fputs(" - ", stdout);
    -    +               vprintf(format, ap);
    -    +       }
    -    +}
    -    +
    -    +int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
    -    +{
    -    +       va_list ap;
    -    +
    -    +       assert(ctx.running);
    -    +       assert(!ctx.todo);
    -    +
    -    +       fflush(stderr);
    -    +       va_start(ap, format);
    -    +       if (!ctx.skip_all) {
    -    +               switch (ctx.result) {
    -    +               case RESULT_SUCCESS:
    -    +                       printf("ok %d", ctx.count);
    -    +                       print_description(format, ap);
    -    +                       break;
    -    +
    -    +               case RESULT_FAILURE:
    -    +                       printf("not ok %d", ctx.count);
    -    +                       print_description(format, ap);
    -    +                       break;
    -    +
    -    +               case RESULT_TODO:
    -    +                       printf("not ok %d", ctx.count);
    -    +                       print_description(format, ap);
    -    +                       printf(" # TODO");
    -    +                       break;
    -    +
    -    +               case RESULT_SKIP:
    -    +                       printf("ok %d", ctx.count);
    -    +                       print_description(format, ap);
    -    +                       printf(" # SKIP");
    -    +                       break;
    -    +
    -    +               case RESULT_NONE:
    -    +                       test_msg("BUG: test has no checks at %s", location);
    -    +                       printf("not ok %d", ctx.count);
    -    +                       print_description(format, ap);
    -    +                       ctx.result = RESULT_FAILURE;
    -    +                       break;
    -    +               }
    -    +       }
    -    +       va_end(ap);
    -    +       ctx.running = 0;
    -    +       if (ctx.skip_all)
    -    +               return 0;
    -    +       putc('\n', stdout);
    -    +       fflush(stdout);
    -    +       ctx.failed |= ctx.result == RESULT_FAILURE;
    -    +
    -    +       return -(ctx.result == RESULT_FAILURE);
    -    +}
    -    +
    -    +static void test_fail(void)
    -    +{
    -    +       assert(ctx.result != RESULT_SKIP);
    -    +
    -    +       ctx.result = RESULT_FAILURE;
    -    +}
    -    +
    -    +static void test_pass(void)
    -    +{
    -    +       assert(ctx.result != RESULT_SKIP);
    -    +
    -    +       if (ctx.result == RESULT_NONE)
    -    +               ctx.result = RESULT_SUCCESS;
    -    +}
    -    +
    -    +static void test_todo(void)
    -    +{
    -    +       assert(ctx.result != RESULT_SKIP);
    -    +
    -    +       if (ctx.result != RESULT_FAILURE)
    -    +               ctx.result = RESULT_TODO;
    -    +}
    -    +
    -    +int test_assert(const char *location, const char *check, int ok)
    -    +{
    -    +       assert(ctx.running);
    -    +
    -    +       if (ctx.result == RESULT_SKIP) {
    -    +               test_msg("skipping check '%s' at %s", check, location);
    -    +               return 0;
    -    +       } else if (!ctx.todo) {
    -    +               if (ok) {
    -    +                       test_pass();
    -    +               } else {
    -    +                       test_msg("check \"%s\" failed at %s", check, location);
    -    +                       test_fail();
    -    +               }
    -    +       }
    -    +
    -    +       return -!ok;
    -    +}
    -    +
    -    +void test__todo_begin(void)
    -    +{
    -    +       assert(ctx.running);
    -    +       assert(!ctx.todo);
    -    +
    -    +       ctx.todo = 1;
    -    +}
    -    +
    -    +int test__todo_end(const char *location, const char *check, int res)
    -    +{
    -    +       assert(ctx.running);
    -    +       assert(ctx.todo);
    -    +
    -    +       ctx.todo = 0;
    -    +       if (ctx.result == RESULT_SKIP)
    -    +               return 0;
    -    +       if (!res) {
    -    +               test_msg("todo check '%s' succeeded at %s", check, location);
    -    +               test_fail();
    -    +       } else {
    -    +               test_todo();
    -    +       }
    -    +
    -    +       return -!res;
    -    +}
    -    +
    -    +int check_bool_loc(const char *loc, const char *check, int ok)
    -    +{
    -    +       return test_assert(loc, check, ok);
    -    +}
    -    +
    -    +union test__tmp test__tmp[2];
    -    +
    -    +int check_int_loc(const char *loc, const char *check, int ok,
    -    +                 intmax_t a, intmax_t b)
    -    +{
    -    +       int ret = test_assert(loc, check, ok);
    -    +
    -    +       if (ret) {
    -    +               test_msg("   left: %"PRIdMAX, a);
    -    +               test_msg("  right: %"PRIdMAX, b);
    -    +       }
    -    +
    -    +       return ret;
    -    +}
    -    +
    -    +int check_uint_loc(const char *loc, const char *check, int ok,
    -    +                  uintmax_t a, uintmax_t b)
    -    +{
    -    +       int ret = test_assert(loc, check, ok);
    -    +
    -    +       if (ret) {
    -    +               test_msg("   left: %"PRIuMAX, a);
    -    +               test_msg("  right: %"PRIuMAX, b);
    -    +       }
    -    +
    -    +       return ret;
    -    +}
    -    +
    -    +static void print_one_char(char ch, char quote)
    -    +{
    -    +       if ((unsigned char)ch < 0x20u || ch == 0x7f) {
    -    +               /* TODO: improve handling of \a, \b, \f ... */
    -    +               printf("\\%03o", (unsigned char)ch);
    -    +       } else {
    -    +               if (ch == '\\' || ch == quote)
    -    +                       putc('\\', stdout);
    -    +               putc(ch, stdout);
    -    +       }
    -    +}
    -    +
    -    +static void print_char(const char *prefix, char ch)
    -    +{
    -    +       printf("# %s: '", prefix);
    -    +       print_one_char(ch, '\'');
    -    +       fputs("'\n", stdout);
    -    +}
    -    +
    -    +int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
    -    +{
    -    +       int ret = test_assert(loc, check, ok);
    -    +
    -    +       if (ret) {
    -    +               fflush(stderr);
    -    +               print_char("   left", a);
    -    +               print_char("  right", b);
    -    +               fflush(stdout);
    -    +       }
    -    +
    -    +       return ret;
    -    +}
    -    +
    -    +static void print_str(const char *prefix, const char *str)
    -    +{
    -    +       printf("# %s: ", prefix);
    -    +       if (!str) {
    -    +               fputs("NULL\n", stdout);
    -    +       } else {
    -    +               putc('"', stdout);
    -    +               while (*str)
    -    +                       print_one_char(*str++, '"');
    -    +               fputs("\"\n", stdout);
    -    +       }
    -    +}
    -    +
    -    +int check_str_loc(const char *loc, const char *check,
    -    +                 const char *a, const char *b)
    -    +{
    -    +       int ok = (!a && !b) || (a && b && !strcmp(a, b));
    -    +       int ret = test_assert(loc, check, ok);
    -    +
    -    +       if (ret) {
    -    +               fflush(stderr);
    -    +               print_str("   left", a);
    -    +               print_str("  right", b);
    -    +               fflush(stdout);
    -    +       }
    -    +
    -    +       return ret;
    -    +}
    -    diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
    -    new file mode 100644
    -    index 0000000000..720c97c6f8
    -    --- /dev/null
    -    +++ b/t/unit-tests/test-lib.h
    -    @@ -0,0 +1,143 @@
    -    +#ifndef TEST_LIB_H
    -    +#define TEST_LIB_H
    -    +
    -    +#include "git-compat-util.h"
    -    +
    -    +/*
    -    + * Run a test function, returns 0 if the test succeeds, -1 if it
    -    + * fails. If test_skip_all() has been called then the test will not be
    -    + * run. The description for each test should be unique. For example:
    -    + *
    -    + *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
    -    + */
    -    +#define TEST(t, ...)                                   \
    -    +       test__run_end(test__run_begin() ? 0 : (t, 1),   \
    -    +                     TEST_LOCATION(),  __VA_ARGS__)
    -    +
    -    +/*
    -    + * Print a test plan, should be called before any tests. If the number
    -    + * of tests is not known in advance test_done() will automatically
    -    + * print a plan at the end of the test program.
    -    + */
    -    +void test_plan(int count);
    -    +
    -    +/*
    -    + * test_done() must be called at the end of main(). It will print the
    -    + * plan if plan() was not called at the beginning of the test program
    -    + * and returns the exit code for the test program.
    -    + */
    -    +int test_done(void);
    -    +
    -    +/* Skip the current test. */
    -    +__attribute__((format (printf, 1, 2)))
    -    +void test_skip(const char *format, ...);
    -    +
    -    +/* Skip all remaining tests. */
    -    +__attribute__((format (printf, 1, 2)))
    -    +void test_skip_all(const char *format, ...);
    -    +
    -    +/* Print a diagnostic message to stdout. */
    -    +__attribute__((format (printf, 1, 2)))
    -    +void test_msg(const char *format, ...);
    -    +
    -    +/*
    -    + * Test checks are built around test_assert(). checks return 0 on
    -    + * success, -1 on failure. If any check fails then the test will
    -    + * fail. To create a custom check define a function that wraps
    -    + * test_assert() and a macro to wrap that function. For example:
    -    + *
    -    + *  static int check_oid_loc(const char *loc, const char *check,
    -    + *                          struct object_id *a, struct object_id *b)
    -    + *  {
    -    + *         int res = test_assert(loc, check, oideq(a, b));
    -    + *
    -    + *         if (res) {
    -    + *                 test_msg("   left: %s", oid_to_hex(a);
    -    + *                 test_msg("  right: %s", oid_to_hex(a);
    -    + *
    -    + *         }
    -    + *         return res;
    -    + *  }
    -    + *
    -    + *  #define check_oid(a, b) \
    -    + *         check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
    -    + */
    -    +int test_assert(const char *location, const char *check, int ok);
    -    +
    -    +/* Helper macro to pass the location to checks */
    -    +#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
    -    +
    -    +/* Check a boolean condition. */
    -    +#define check(x)                               \
    -    +       check_bool_loc(TEST_LOCATION(), #x, x)
    -    +int check_bool_loc(const char *loc, const char *check, int ok);
    -    +
    -    +/*
    -    + * Compare two integers. Prints a message with the two values if the
    -    + * comparison fails. NB this is not thread safe.
    -    + */
    -    +#define check_int(a, op, b)                                            \
    -    +       (test__tmp[0].i = (a), test__tmp[1].i = (b),                    \
    -    +        check_int_loc(TEST_LOCATION(), #a" "#op" "#b,                  \
    -    +                      test__tmp[0].i op test__tmp[1].i, a, b))
    -    +int check_int_loc(const char *loc, const char *check, int ok,
    -    +                 intmax_t a, intmax_t b);
    -    +
    -    +/*
    -    + * Compare two unsigned integers. Prints a message with the two values
    -    + * if the comparison fails. NB this is not thread safe.
    -    + */
    -    +#define check_uint(a, op, b)                                           \
    -    +       (test__tmp[0].u = (a), test__tmp[1].u = (b),                    \
    -    +        check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,                 \
    -    +                       test__tmp[0].u op test__tmp[1].u, a, b))
    -    +int check_uint_loc(const char *loc, const char *check, int ok,
    -    +                  uintmax_t a, uintmax_t b);
    -    +
    -    +/*
    -    + * Compare two chars. Prints a message with the two values if the
    -    + * comparison fails. NB this is not thread safe.
    -    + */
    -    +#define check_char(a, op, b)                                           \
    -    +       (test__tmp[0].c = (a), test__tmp[1].c = (b),                    \
    -    +        check_char_loc(TEST_LOCATION(), #a" "#op" "#b,                 \
    -    +                       test__tmp[0].c op test__tmp[1].c, a, b))
    -    +int check_char_loc(const char *loc, const char *check, int ok,
    -    +                  char a, char b);
    -    +
    -    +/* Check whether two strings are equal. */
    -    +#define check_str(a, b)                                                        \
    -    +       check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
    -    +int check_str_loc(const char *loc, const char *check,
    -    +                 const char *a, const char *b);
    -    +
    -    +/*
    -    + * Wrap a check that is known to fail. If the check succeeds then the
    -    + * test will fail. Returns 0 if the check fails, -1 if it
    -    + * succeeds. For example:
    -    + *
    -    + *  TEST_TODO(check(0));
    -    + */
    -    +#define TEST_TODO(check) \
    -    +       (test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
    -    +
    -    +/* Private helpers */
    -    +
    -    +#define TEST__STR(x) #x
    -    +#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
    -    +
    -    +union test__tmp {
    -    +       intmax_t i;
    -    +       uintmax_t u;
    -    +       char c;
    -    +};
    -    +
    -    +extern union test__tmp test__tmp[2];
    -    +
    -    +int test__run_begin(void);
    -    +__attribute__((format (printf, 3, 4)))
    -    +int test__run_end(int, const char *, const char *, ...);
    -    +void test__todo_begin(void);
    -    +int test__todo_end(const char *, const char *, int);
    -    +
    -    +#endif /* TEST_LIB_H */
    -
      ## Makefile ##
     @@ Makefile: TEST_BUILTINS_OBJS =
      TEST_OBJS =
2:  ea33518d00 = 3:  abf4dc41ac ci: run unit tests in CI

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.42.0.rc1.204.g551eb34607-goog


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v7 1/3] unit tests: Add a project plan document
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
@ 2023-08-17 18:37     ` Josh Steadmon
  2023-08-17 18:37     ` [PATCH v7 2/3] unit tests: add TAP unit test framework Josh Steadmon
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-17 18:37 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
preliminary comparison of several different frameworks.

Co-authored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 220 +++++++++++++++++++++++++
 2 files changed, 221 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..b7a89cc838
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,220 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+For now, we will evaluate projects solely on their framework features. Since we
+are relying on having TAP output (see below), we can assume that any framework
+can be made to work with a harness that we can choose later.
+
+
+== Choosing a framework
+
+We believe the best option is to implement a custom TAP framework for the Git
+project. We use a version of the framework originally proposed in
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
+
+
+== Choosing a test harness
+
+During upstream discussion, it was occasionally noted that `prove` provides many
+convenient features, such as scheduling slower tests first, or re-running
+previously failed tests.
+
+While we already support the use of `prove` as a test harness for the shell
+tests, it is not strictly required. The t/Makefile allows running shell tests
+directly (though with interleaved output if parallelism is enabled). Git
+developers who wish to use `prove` as a more advanced harness can do so by
+setting DEFAULT_TEST_TARGET=prove in their config.mak.
+
+We will follow a similar approach for unit tests: by default the test
+executables will be run directly from the t/Makefile, but `prove` can be
+configured with DEFAULT_UNIT_TEST_TARGET=prove.
+
+
+== Framework selection
+
+There are a variety of features we can use to rank the candidate frameworks, and
+those features have different priorities:
+
+* Critical features: we probably won't consider a framework without these
+** Can we legally / easily use the project?
+*** <<license,License>>
+*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
+*** <<maintainable-extensible,Maintainable / extensible>>
+*** <<major-platform-support,Major platform support>>
+** Does the project support our bare-minimum needs?
+*** <<tap-support,TAP support>>
+*** <<diagnostic-output,Diagnostic output>>
+*** <<runtime-skippable-tests,Runtime-skippable tests>>
+* Nice-to-have features:
+** <<parallel-execution,Parallel execution>>
+** <<mock-support,Mock support>>
+** <<signal-error-handling,Signal & error-handling>>
+* Tie-breaker stats
+** <<project-kloc,Project KLOC>>
+** <<adoption,Adoption>>
+
+[[license]]
+=== License
+
+We must be able to legally use the framework in connection with Git. As Git is
+licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
+projects.
+
+[[vendorable-or-ubiquitous]]
+=== Vendorable or ubiquitous
+
+We want to avoid forcing Git developers to install new tools just to run unit
+tests. Any prospective frameworks and harnesses must either be vendorable
+(meaning, we can copy their source directly into Git's repository), or so
+ubiquitous that it is reasonable to expect that most developers will have the
+tools installed already.
+
+[[maintainable-extensible]]
+=== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+=== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[tap-support]]
+=== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+=== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[runtime-skippable-tests]]
+=== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[parallel-execution]]
+=== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. We assume that executable-level
+parallelism can be handled by the test harness.
+
+[[mock-support]]
+=== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+=== Signal & error handling
+
+The test framework should fail gracefully when test cases are themselves buggy
+or when they are interrupted by signals during runtime.
+
+[[project-kloc]]
+=== Project KLOC
+
+The size of the project, in thousands of lines of code as measured by
+https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
+1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+=== Adoption
+
+As a tie-breaker, we prefer a more widely-used project. We use the number of
+GitHub / GitLab stars to estimate this.
+
+
+=== Comparison
+
+[format="csv",options="header",width="33%"]
+|=====
+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
+https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
+https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
+|=====
+
+=== Additional framework candidates
+
+Several suggested frameworks have been eliminated from consideration:
+
+* Incompatible licenses:
+** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
+** https://cmocka.org/[cmocka] (Apache 2.0)
+* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
+* No TAP support:
+** https://nemequ.github.io/munit/[µnit]
+** https://github.com/google/cmockery[cmockery]
+** https://github.com/lpabon/cmockery2[cmockery2]
+** https://github.com/ThrowTheSwitch/Unity[Unity]
+** https://github.com/siu/minunit[minunit]
+** https://cunit.sourceforge.net/[CUnit]
+
+
+== Milestones
+
+* Add useful tests of library-like code
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target
-- 
2.42.0.rc1.204.g551eb34607-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
  2023-08-17 18:37     ` [PATCH v7 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-08-17 18:37     ` Josh Steadmon
  2023-08-18  0:12       ` Junio C Hamano
  2023-08-17 18:37     ` [PATCH v7 3/3] ci: run unit tests in CI Josh Steadmon
                       ` (2 subsequent siblings)
  4 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-08-17 18:37 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

From: Phillip Wood <phillip.wood@dunelm.org.uk>

This patch contains an implementation for writing unit tests with TAP
output. Each test is a function that contains one or more checks. The
test is run with the TEST() macro and if any of the checks fail then the
test will fail. A complete program that tests STRBUF_INIT would look
like

     #include "test-lib.h"
     #include "strbuf.h"

     static void t_static_init(void)
     {
             struct strbuf buf = STRBUF_INIT;

             check_uint(buf.len, ==, 0);
             check_uint(buf.alloc, ==, 0);
             if (check(buf.buf == strbuf_slopbuf))
		    return; /* avoid SIGSEV */
             check_char(buf.buf[0], ==, '\0');
     }

     int main(void)
     {
             TEST(t_static_init(), "static initialization works);

             return test_done();
     }

The output of this program would be

     ok 1 - static initialization works
     1..1

If any of the checks in a test fail then they print a diagnostic message
to aid debugging and the test will be reported as failing. For example a
failing integer check would look like

     # check "x >= 3" failed at my-test.c:102
     #    left: 2
     #   right: 3
     not ok 1 - x is greater than or equal to three

There are a number of check functions implemented so far. check() checks
a boolean condition, check_int(), check_uint() and check_char() take two
values to compare and a comparison operator. check_str() will check if
two strings are equal. Custom checks are simple to implement as shown in
the comments above test_assert() in test-lib.h.

Tests can be skipped with test_skip() which can be supplied with a
reason for skipping which it will print. Tests can print diagnostic
messages with test_msg().  Checks that are known to fail can be wrapped
in TEST_TODO().

There are a couple of example test programs included in this
patch. t-basic.c implements some self-tests and demonstrates the
diagnostic output for failing test. The output of this program is
checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
unit tests for strbuf.c

The unit tests can be built with "make unit-tests" (this works but the
Makefile changes need some further work). Once they have been built they
can be run manually (e.g t/unit-tests/t-strbuf) or with prove.

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Makefile                    |  24 ++-
 t/Makefile                  |  15 +-
 t/t0080-unit-test-output.sh |  58 +++++++
 t/unit-tests/.gitignore     |   2 +
 t/unit-tests/t-basic.c      |  95 +++++++++++
 t/unit-tests/t-strbuf.c     |  75 ++++++++
 t/unit-tests/test-lib.c     | 329 ++++++++++++++++++++++++++++++++++++
 t/unit-tests/test-lib.h     | 143 ++++++++++++++++
 8 files changed, 737 insertions(+), 4 deletions(-)
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

diff --git a/Makefile b/Makefile
index e440728c24..4016da6e39 100644
--- a/Makefile
+++ b/Makefile
@@ -682,6 +682,8 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests
 
 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1331,6 +1333,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
 THIRD_PARTY_SOURCES += sha1collisiondetection/%
 THIRD_PARTY_SOURCES += sha1dc/%
 
+UNIT_TEST_PROGRAMS += t-basic
+UNIT_TEST_PROGRAMS += t-strbuf
+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_DIR)/%$X,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
+
 # xdiff and reftable libs may in turn depend on what is in libgit.a
 GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
 EXTLIBS =
@@ -2672,6 +2680,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(UNIT_TEST_OBJS)
 
 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3167,7 +3176,7 @@ endif
 
 test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
 
-all:: $(TEST_PROGRAMS) $(test_bindir_programs)
+all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
 
 bin-wrappers/%: wrap-for-bin.sh
 	$(call mkdir_p_parent_template)
@@ -3592,7 +3601,7 @@ endif
 
 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
-		$(MOFILES)
+		$(UNIT_TEST_PROGS) $(MOFILES)
 	$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
 		SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
 	test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3653,7 +3662,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3831,3 +3840,12 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
 
 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_PROGS): $(UNIT_TEST_DIR)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS
+	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGS)
+unit-tests: $(UNIT_TEST_PROGS)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..2db8b3adb1 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -17,6 +17,7 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+DEFAULT_UNIT_TEST_TARGET ?= unit-tests-raw
 TEST_LINT ?= test-lint
 
 ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,17 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
+$(UNIT_TESTS):
+	@echo "*** $@ ***"; $@
+
+.PHONY: unit-tests unit-tests-raw unit-tests-prove
+unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
+
+unit-tests-raw: $(UNIT_TESTS)
+
+unit-tests-prove:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
@@ -149,4 +162,4 @@ perf:
 	$(MAKE) -C perf/ all
 
 .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
-	check-chainlint clean-chainlint test-chainlint
+	check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
new file mode 100755
index 0000000000..e0fc07d1e5
--- /dev/null
+++ b/t/t0080-unit-test-output.sh
@@ -0,0 +1,58 @@
+#!/bin/sh
+
+test_description='Test the output of the unit test framework'
+
+. ./test-lib.sh
+
+test_expect_success 'TAP output from unit tests' '
+	cat >expect <<-EOF &&
+	ok 1 - passing test
+	ok 2 - passing test and assertion return 0
+	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
+	#    left: 1
+	#   right: 2
+	not ok 3 - failing test
+	ok 4 - failing test and assertion return -1
+	not ok 5 - passing TEST_TODO() # TODO
+	ok 6 - passing TEST_TODO() returns 0
+	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
+	not ok 7 - failing TEST_TODO()
+	ok 8 - failing TEST_TODO() returns -1
+	# check "0" failed at t/unit-tests/t-basic.c:30
+	# skipping test - missing prerequisite
+	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
+	ok 9 - test_skip() # SKIP
+	ok 10 - skipped test returns 0
+	# skipping test - missing prerequisite
+	ok 11 - test_skip() inside TEST_TODO() # SKIP
+	ok 12 - test_skip() inside TEST_TODO() returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:48
+	not ok 13 - TEST_TODO() after failing check
+	ok 14 - TEST_TODO() after failing check returns -1
+	# check "0" failed at t/unit-tests/t-basic.c:56
+	not ok 15 - failing check after TEST_TODO()
+	ok 16 - failing check after TEST_TODO() returns -1
+	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
+	#    left: "\011hello\\\\"
+	#   right: "there\"\012"
+	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
+	#    left: "NULL"
+	#   right: NULL
+	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
+	#    left: ${SQ}a${SQ}
+	#   right: ${SQ}\012${SQ}
+	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
+	#    left: ${SQ}\\\\${SQ}
+	#   right: ${SQ}\\${SQ}${SQ}
+	not ok 17 - messages from failing string and char comparison
+	# BUG: test has no checks at t/unit-tests/t-basic.c:91
+	not ok 18 - test with no checks
+	ok 19 - test with no checks returns -1
+	1..19
+	EOF
+
+	! "$GIT_BUILD_DIR"/t/unit-tests/t-basic >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
new file mode 100644
index 0000000000..e292d58348
--- /dev/null
+++ b/t/unit-tests/.gitignore
@@ -0,0 +1,2 @@
+/t-basic
+/t-strbuf
diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
new file mode 100644
index 0000000000..d20f444fab
--- /dev/null
+++ b/t/unit-tests/t-basic.c
@@ -0,0 +1,95 @@
+#include "test-lib.h"
+
+/*
+ * The purpose of this "unit test" is to verify a few invariants of the unit
+ * test framework itself, as well as to provide examples of output from actually
+ * failing tests. As such, it is intended that this test fails, and thus it
+ * should not be run as part of `make unit-tests`. Instead, we verify it behaves
+ * as expected in the integration test t0080-unit-test-output.sh
+ */
+
+/* Used to store the return value of check_int(). */
+static int check_res;
+
+/* Used to store the return value of TEST(). */
+static int test_res;
+
+static void t_res(int expect)
+{
+	check_int(check_res, ==, expect);
+	check_int(test_res, ==, expect);
+}
+
+static void t_todo(int x)
+{
+	check_res = TEST_TODO(check(x));
+}
+
+static void t_skip(void)
+{
+	check(0);
+	test_skip("missing prerequisite");
+	check(1);
+}
+
+static int do_skip(void)
+{
+	test_skip("missing prerequisite");
+	return 0;
+}
+
+static void t_skip_todo(void)
+{
+	check_res = TEST_TODO(do_skip());
+}
+
+static void t_todo_after_fail(void)
+{
+	check(0);
+	TEST_TODO(check(0));
+}
+
+static void t_fail_after_todo(void)
+{
+	check(1);
+	TEST_TODO(check(0));
+	check(0);
+}
+
+static void t_messages(void)
+{
+	check_str("\thello\\", "there\"\n");
+	check_str("NULL", NULL);
+	check_char('a', ==, '\n');
+	check_char('\\', ==, '\'');
+}
+
+static void t_empty(void)
+{
+	; /* empty */
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
+	TEST(t_res(0), "passing test and assertion return 0");
+	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
+	TEST(t_res(-1), "failing test and assertion return -1");
+	test_res = TEST(t_todo(0), "passing TEST_TODO()");
+	TEST(t_res(0), "passing TEST_TODO() returns 0");
+	test_res = TEST(t_todo(1), "failing TEST_TODO()");
+	TEST(t_res(-1), "failing TEST_TODO() returns -1");
+	test_res = TEST(t_skip(), "test_skip()");
+	TEST(check_int(test_res, ==, 0), "skipped test returns 0");
+	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
+	TEST(t_res(0), "test_skip() inside TEST_TODO() returns 0");
+	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
+	TEST(check_int(test_res, ==, -1), "TEST_TODO() after failing check returns -1");
+	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
+	TEST(check_int(test_res, ==, -1), "failing check after TEST_TODO() returns -1");
+	TEST(t_messages(), "messages from failing string and char comparison");
+	test_res = TEST(t_empty(), "test with no checks");
+	TEST(check_int(test_res, ==, -1), "test with no checks returns -1");
+
+	return test_done();
+}
diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
new file mode 100644
index 0000000000..561611e242
--- /dev/null
+++ b/t/unit-tests/t-strbuf.c
@@ -0,0 +1,75 @@
+#include "test-lib.h"
+#include "strbuf.h"
+
+/* wrapper that supplies tests with an initialized strbuf */
+static void setup(void (*f)(struct strbuf*, void*), void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	check(buf.buf == strbuf_slopbuf);
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_static_init(void)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	if (check(buf.buf == strbuf_slopbuf))
+		return; /* avoid de-referencing buf.buf */
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_dynamic_init(void)
+{
+	struct strbuf buf;
+
+	strbuf_init(&buf, 1024);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, >=, 1024);
+	check_char(buf.buf[0], ==, '\0');
+	strbuf_release(&buf);
+}
+
+static void t_addch(struct strbuf *buf, void *data)
+{
+	const char *p_ch = data;
+	const char ch = *p_ch;
+
+	strbuf_addch(buf, ch);
+	if (check_uint(buf->len, ==, 1) ||
+	    check_uint(buf->alloc, >, 1))
+		return; /* avoid de-referencing buf->buf */
+	check_char(buf->buf[0], ==, ch);
+	check_char(buf->buf[1], ==, '\0');
+}
+
+static void t_addstr(struct strbuf *buf, void *data)
+{
+	const char *text = data;
+	size_t len = strlen(text);
+
+	strbuf_addstr(buf, text);
+	if (check_uint(buf->len, ==, len) ||
+	    check_uint(buf->alloc, >, len) ||
+	    check_char(buf->buf[len], ==, '\0'))
+	    return;
+	check_str(buf->buf, text);
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	if (TEST(t_static_init(), "static initialization works"))
+		test_skip_all("STRBUF_INIT is broken");
+	TEST(t_dynamic_init(), "dynamic initialization works");
+	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
+	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
+	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
+
+	return test_done();
+}
diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
new file mode 100644
index 0000000000..70030d587f
--- /dev/null
+++ b/t/unit-tests/test-lib.c
@@ -0,0 +1,329 @@
+#include "test-lib.h"
+
+enum result {
+	RESULT_NONE,
+	RESULT_FAILURE,
+	RESULT_SKIP,
+	RESULT_SUCCESS,
+	RESULT_TODO
+};
+
+static struct {
+	enum result result;
+	int count;
+	unsigned failed :1;
+	unsigned lazy_plan :1;
+	unsigned running :1;
+	unsigned skip_all :1;
+	unsigned todo :1;
+} ctx = {
+	.lazy_plan = 1,
+	.result = RESULT_NONE,
+};
+
+static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
+{
+	fflush(stderr);
+	if (prefix)
+		fprintf(stdout, "%s", prefix);
+	vprintf(format, ap); /* TODO: handle newlines */
+	putc('\n', stdout);
+	fflush(stdout);
+}
+
+void test_msg(const char *format, ...)
+{
+	va_list ap;
+
+	va_start(ap, format);
+	msg_with_prefix("# ", format, ap);
+	va_end(ap);
+}
+
+void test_plan(int count)
+{
+	assert(!ctx.running);
+
+	fflush(stderr);
+	printf("1..%d\n", count);
+	fflush(stdout);
+	ctx.lazy_plan = 0;
+}
+
+int test_done(void)
+{
+	assert(!ctx.running);
+
+	if (ctx.lazy_plan)
+		test_plan(ctx.count);
+
+	return ctx.failed;
+}
+
+void test_skip(const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	if (format)
+		msg_with_prefix("# skipping test - ", format, ap);
+	va_end(ap);
+}
+
+void test_skip_all(const char *format, ...)
+{
+	va_list ap;
+	const char *prefix;
+
+	if (!ctx.count && ctx.lazy_plan) {
+		/* We have not printed a test plan yet */
+		prefix = "1..0 # SKIP ";
+		ctx.lazy_plan = 0;
+	} else {
+		/* We have already printed a test plan */
+		prefix = "Bail out! # ";
+		ctx.failed = 1;
+	}
+	ctx.skip_all = 1;
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	msg_with_prefix(prefix, format, ap);
+	va_end(ap);
+}
+
+int test__run_begin(void)
+{
+	assert(!ctx.running);
+
+	ctx.count++;
+	ctx.result = RESULT_NONE;
+	ctx.running = 1;
+
+	return ctx.skip_all;
+}
+
+static void print_description(const char *format, va_list ap)
+{
+	if (format) {
+		fputs(" - ", stdout);
+		vprintf(format, ap);
+	}
+}
+
+int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	fflush(stderr);
+	va_start(ap, format);
+	if (!ctx.skip_all) {
+		switch (ctx.result) {
+		case RESULT_SUCCESS:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_FAILURE:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_TODO:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # TODO");
+			break;
+
+		case RESULT_SKIP:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # SKIP");
+			break;
+
+		case RESULT_NONE:
+			test_msg("BUG: test has no checks at %s", location);
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			ctx.result = RESULT_FAILURE;
+			break;
+		}
+	}
+	va_end(ap);
+	ctx.running = 0;
+	if (ctx.skip_all)
+		return 0;
+	putc('\n', stdout);
+	fflush(stdout);
+	ctx.failed |= ctx.result == RESULT_FAILURE;
+
+	return -(ctx.result == RESULT_FAILURE);
+}
+
+static void test_fail(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	ctx.result = RESULT_FAILURE;
+}
+
+static void test_pass(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result == RESULT_NONE)
+		ctx.result = RESULT_SUCCESS;
+}
+
+static void test_todo(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result != RESULT_FAILURE)
+		ctx.result = RESULT_TODO;
+}
+
+int test_assert(const char *location, const char *check, int ok)
+{
+	assert(ctx.running);
+
+	if (ctx.result == RESULT_SKIP) {
+		test_msg("skipping check '%s' at %s", check, location);
+		return 0;
+	} else if (!ctx.todo) {
+		if (ok) {
+			test_pass();
+		} else {
+			test_msg("check \"%s\" failed at %s", check, location);
+			test_fail();
+		}
+	}
+
+	return -!ok;
+}
+
+void test__todo_begin(void)
+{
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	ctx.todo = 1;
+}
+
+int test__todo_end(const char *location, const char *check, int res)
+{
+	assert(ctx.running);
+	assert(ctx.todo);
+
+	ctx.todo = 0;
+	if (ctx.result == RESULT_SKIP)
+		return 0;
+	if (!res) {
+		test_msg("todo check '%s' succeeded at %s", check, location);
+		test_fail();
+	} else {
+		test_todo();
+	}
+
+	return -!res;
+}
+
+int check_bool_loc(const char *loc, const char *check, int ok)
+{
+	return test_assert(loc, check, ok);
+}
+
+union test__tmp test__tmp[2];
+
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		test_msg("   left: %"PRIdMAX, a);
+		test_msg("  right: %"PRIdMAX, b);
+	}
+
+	return ret;
+}
+
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		test_msg("   left: %"PRIuMAX, a);
+		test_msg("  right: %"PRIuMAX, b);
+	}
+
+	return ret;
+}
+
+static void print_one_char(char ch, char quote)
+{
+	if ((unsigned char)ch < 0x20u || ch == 0x7f) {
+		/* TODO: improve handling of \a, \b, \f ... */
+		printf("\\%03o", (unsigned char)ch);
+	} else {
+		if (ch == '\\' || ch == quote)
+			putc('\\', stdout);
+		putc(ch, stdout);
+	}
+}
+
+static void print_char(const char *prefix, char ch)
+{
+	printf("# %s: '", prefix);
+	print_one_char(ch, '\'');
+	fputs("'\n", stdout);
+}
+
+int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		fflush(stderr);
+		print_char("   left", a);
+		print_char("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
+
+static void print_str(const char *prefix, const char *str)
+{
+	printf("# %s: ", prefix);
+	if (!str) {
+		fputs("NULL\n", stdout);
+	} else {
+		putc('"', stdout);
+		while (*str)
+			print_one_char(*str++, '"');
+		fputs("\"\n", stdout);
+	}
+}
+
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b)
+{
+	int ok = (!a && !b) || (a && b && !strcmp(a, b));
+	int ret = test_assert(loc, check, ok);
+
+	if (ret) {
+		fflush(stderr);
+		print_str("   left", a);
+		print_str("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
new file mode 100644
index 0000000000..720c97c6f8
--- /dev/null
+++ b/t/unit-tests/test-lib.h
@@ -0,0 +1,143 @@
+#ifndef TEST_LIB_H
+#define TEST_LIB_H
+
+#include "git-compat-util.h"
+
+/*
+ * Run a test function, returns 0 if the test succeeds, -1 if it
+ * fails. If test_skip_all() has been called then the test will not be
+ * run. The description for each test should be unique. For example:
+ *
+ *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
+ */
+#define TEST(t, ...)					\
+	test__run_end(test__run_begin() ? 0 : (t, 1),	\
+		      TEST_LOCATION(),  __VA_ARGS__)
+
+/*
+ * Print a test plan, should be called before any tests. If the number
+ * of tests is not known in advance test_done() will automatically
+ * print a plan at the end of the test program.
+ */
+void test_plan(int count);
+
+/*
+ * test_done() must be called at the end of main(). It will print the
+ * plan if plan() was not called at the beginning of the test program
+ * and returns the exit code for the test program.
+ */
+int test_done(void);
+
+/* Skip the current test. */
+__attribute__((format (printf, 1, 2)))
+void test_skip(const char *format, ...);
+
+/* Skip all remaining tests. */
+__attribute__((format (printf, 1, 2)))
+void test_skip_all(const char *format, ...);
+
+/* Print a diagnostic message to stdout. */
+__attribute__((format (printf, 1, 2)))
+void test_msg(const char *format, ...);
+
+/*
+ * Test checks are built around test_assert(). checks return 0 on
+ * success, -1 on failure. If any check fails then the test will
+ * fail. To create a custom check define a function that wraps
+ * test_assert() and a macro to wrap that function. For example:
+ *
+ *  static int check_oid_loc(const char *loc, const char *check,
+ *			     struct object_id *a, struct object_id *b)
+ *  {
+ *	    int res = test_assert(loc, check, oideq(a, b));
+ *
+ *	    if (res) {
+ *		    test_msg("   left: %s", oid_to_hex(a);
+ *		    test_msg("  right: %s", oid_to_hex(a);
+ *
+ *	    }
+ *	    return res;
+ *  }
+ *
+ *  #define check_oid(a, b) \
+ *	    check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
+ */
+int test_assert(const char *location, const char *check, int ok);
+
+/* Helper macro to pass the location to checks */
+#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
+
+/* Check a boolean condition. */
+#define check(x)				\
+	check_bool_loc(TEST_LOCATION(), #x, x)
+int check_bool_loc(const char *loc, const char *check, int ok);
+
+/*
+ * Compare two integers. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_int(a, op, b)						\
+	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
+	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+		       test__tmp[0].i op test__tmp[1].i, a, b))
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b);
+
+/*
+ * Compare two unsigned integers. Prints a message with the two values
+ * if the comparison fails. NB this is not thread safe.
+ */
+#define check_uint(a, op, b)						\
+	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
+	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].u op test__tmp[1].u, a, b))
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b);
+
+/*
+ * Compare two chars. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_char(a, op, b)						\
+	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
+	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].c op test__tmp[1].c, a, b))
+int check_char_loc(const char *loc, const char *check, int ok,
+		   char a, char b);
+
+/* Check whether two strings are equal. */
+#define check_str(a, b)							\
+	check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b);
+
+/*
+ * Wrap a check that is known to fail. If the check succeeds then the
+ * test will fail. Returns 0 if the check fails, -1 if it
+ * succeeds. For example:
+ *
+ *  TEST_TODO(check(0));
+ */
+#define TEST_TODO(check) \
+	(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
+
+/* Private helpers */
+
+#define TEST__STR(x) #x
+#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
+
+union test__tmp {
+	intmax_t i;
+	uintmax_t u;
+	char c;
+};
+
+extern union test__tmp test__tmp[2];
+
+int test__run_begin(void);
+__attribute__((format (printf, 3, 4)))
+int test__run_end(int, const char *, const char *, ...);
+void test__todo_begin(void);
+int test__todo_end(const char *, const char *, int);
+
+#endif /* TEST_LIB_H */
-- 
2.42.0.rc1.204.g551eb34607-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v7 3/3] ci: run unit tests in CI
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
  2023-08-17 18:37     ` [PATCH v7 1/3] unit tests: Add a project plan document Josh Steadmon
  2023-08-17 18:37     ` [PATCH v7 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-08-17 18:37     ` Josh Steadmon
  2023-08-17 20:38     ` [PATCH v7 0/3] Add unit test framework and project plan Junio C Hamano
  2023-08-24 20:11     ` Josh Steadmon
  4 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-08-17 18:37 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

Run unit tests in both Cirrus and GitHub CI. For sharded CI instances
(currently just Windows on GitHub), run only on the first shard. This is
OK while we have only a single unit test executable, but we may wish to
distribute tests more evenly when we add new unit tests in the future.

We may also want to add more status output in our unit test framework,
so that we can do similar post-processing as in
ci/lib.sh:handle_failed_tests().

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 .cirrus.yml               | 2 +-
 ci/run-build-and-tests.sh | 2 ++
 ci/run-test-slice.sh      | 5 +++++
 t/Makefile                | 2 +-
 4 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 4860bebd32..b6280692d2 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -19,4 +19,4 @@ freebsd_12_task:
   build_script:
     - su git -c gmake
   test_script:
-    - su git -c 'gmake test'
+    - su git -c 'gmake DEFAULT_UNIT_TEST_TARGET=unit-tests-prove test unit-tests'
diff --git a/ci/run-build-and-tests.sh b/ci/run-build-and-tests.sh
index 2528f25e31..7a1466b868 100755
--- a/ci/run-build-and-tests.sh
+++ b/ci/run-build-and-tests.sh
@@ -50,6 +50,8 @@ if test -n "$run_tests"
 then
 	group "Run tests" make test ||
 	handle_failed_tests
+	group "Run unit tests" \
+		make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests
 fi
 check_unignored_build_artifacts
 
diff --git a/ci/run-test-slice.sh b/ci/run-test-slice.sh
index a3c67956a8..ae8094382f 100755
--- a/ci/run-test-slice.sh
+++ b/ci/run-test-slice.sh
@@ -15,4 +15,9 @@ group "Run tests" make --quiet -C t T="$(cd t &&
 	tr '\n' ' ')" ||
 handle_failed_tests
 
+# We only have one unit test at the moment, so run it in the first slice
+if [ "$1" == "0" ] ; then
+	group "Run unit tests" make --quiet -C t unit-tests-prove
+fi
+
 check_unignored_build_artifacts
diff --git a/t/Makefile b/t/Makefile
index 2db8b3adb1..095334bfde 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -42,7 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
-UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/t-*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
-- 
2.42.0.rc1.204.g551eb34607-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 0/3] Add unit test framework and project plan
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
                       ` (2 preceding siblings ...)
  2023-08-17 18:37     ` [PATCH v7 3/3] ci: run unit tests in CI Josh Steadmon
@ 2023-08-17 20:38     ` Junio C Hamano
  2023-08-24 20:11     ` Josh Steadmon
  4 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-08-17 20:38 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, linusa, calvinwan, phillip.wood123, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> Changes in v7:
> - Fix corrupt diff in patch #2, sorry for the noise.

Thanks for resending.  Will queue.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-08-17 18:37     ` [PATCH v7 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-08-18  0:12       ` Junio C Hamano
  2023-09-22 20:05         ` Junio C Hamano
  2023-10-09 17:37         ` Josh Steadmon
  0 siblings, 2 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-08-18  0:12 UTC (permalink / raw)
  To: Josh Steadmon, phillip.wood123; +Cc: git, linusa, calvinwan, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> +test_expect_success 'TAP output from unit tests' '
> +	cat >expect <<-EOF &&
> +	ok 1 - passing test
> +	ok 2 - passing test and assertion return 0
> +	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
> +	#    left: 1
> +	#   right: 2
> +	not ok 3 - failing test
> +	ok 4 - failing test and assertion return -1
> +	not ok 5 - passing TEST_TODO() # TODO
> +	ok 6 - passing TEST_TODO() returns 0
> +	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
> +	not ok 7 - failing TEST_TODO()
> +	ok 8 - failing TEST_TODO() returns -1
> +	# check "0" failed at t/unit-tests/t-basic.c:30
> +	# skipping test - missing prerequisite
> +	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
> +	ok 9 - test_skip() # SKIP
> +	ok 10 - skipped test returns 0
> +	# skipping test - missing prerequisite
> +	ok 11 - test_skip() inside TEST_TODO() # SKIP
> +	ok 12 - test_skip() inside TEST_TODO() returns 0
> +	# check "0" failed at t/unit-tests/t-basic.c:48
> +	not ok 13 - TEST_TODO() after failing check
> +	ok 14 - TEST_TODO() after failing check returns -1
> +	# check "0" failed at t/unit-tests/t-basic.c:56
> +	not ok 15 - failing check after TEST_TODO()
> +	ok 16 - failing check after TEST_TODO() returns -1
> +	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
> +	#    left: "\011hello\\\\"
> +	#   right: "there\"\012"
> +	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
> +	#    left: "NULL"
> +	#   right: NULL
> +	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
> +	#    left: ${SQ}a${SQ}
> +	#   right: ${SQ}\012${SQ}
> +	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
> +	#    left: ${SQ}\\\\${SQ}
> +	#   right: ${SQ}\\${SQ}${SQ}
> +	not ok 17 - messages from failing string and char comparison
> +	# BUG: test has no checks at t/unit-tests/t-basic.c:91
> +	not ok 18 - test with no checks
> +	ok 19 - test with no checks returns -1
> +	1..19
> +	EOF

Presumably t-basic will serve as a catalog of check_* functions and
the test binary, together with this test piece, will keep growing as
we gain features in the unit tests infrastructure.  I wonder how
maintainable the above is, though.  When we acquire new test, we
would need to renumber.  What if multiple developers add new
features to the catalog at the same time?

> diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
> new file mode 100644
> index 0000000000..e292d58348
> --- /dev/null
> +++ b/t/unit-tests/.gitignore
> @@ -0,0 +1,2 @@
> +/t-basic
> +/t-strbuf

Also, can we come up with some naming convention so that we do not
have to keep adding to this file every time we add a new test
script?

> diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
> new file mode 100644
> index 0000000000..561611e242
> --- /dev/null
> +++ b/t/unit-tests/t-strbuf.c
> @@ -0,0 +1,75 @@
> +#include "test-lib.h"
> +#include "strbuf.h"
> +
> +/* wrapper that supplies tests with an initialized strbuf */
> +static void setup(void (*f)(struct strbuf*, void*), void *data)
> +{
> +	struct strbuf buf = STRBUF_INIT;
> +
> +	f(&buf, data);
> +	strbuf_release(&buf);
> +	check_uint(buf.len, ==, 0);
> +	check_uint(buf.alloc, ==, 0);
> +	check(buf.buf == strbuf_slopbuf);
> +	check_char(buf.buf[0], ==, '\0');
> +}

What I am going to utter from here on are not complaints but purely
meant as questions.  

Would the resulting output and maintainability of the tests change
(improve, or worsen) if we introduce

	static void assert_empty_strbuf(struct strbuf *buf)
	{
		check_uint(buf->len, ==, 0);
                check_uint(buf->alloc, ==, 0);
		check(buf.buf == strbuf_slopbuf);
		check_char(buf.buf[0], ==, '\0');
	}

and call it from the setup() function to ensure that
strbuf_release(&buf) it calls after running customer test f() brings
the buffer in a reasonably initialized state?  The t_static_init()
test should be able to say

	static void t_static_init(void)
	{
		struct strbuf buf = STRBUF_INIT;
		assert_empty_strbuf(&buf);
	}

if we did so, but is that a good thing or a bad thing (e.g. it may
make it harder to figure out where the real error came from, because
of the "line number" thing will not easily capture the caller of the
caller, perhaps)?  

> +static void t_static_init(void)
> +{
> +	struct strbuf buf = STRBUF_INIT;
> +
> +	check_uint(buf.len, ==, 0);
> +	check_uint(buf.alloc, ==, 0);
> +	if (check(buf.buf == strbuf_slopbuf))
> +		return; /* avoid de-referencing buf.buf */

strbuf_slopbuf[0] is designed to be readable.  Do check() assertions
return their parameter negated?

In other words, if "we expect buf.buf to point at the slopbuf, but
if that expectation does not hold, check() returns true and we
refrain from doing check_char() on the next line because we cannot
trust what buf.buf points at" is what is going on here, I find it
very confusing.  Perhaps my intuition is failing me, but somehow I
would have expected that passing check_foo() would return true while
failing ones would return false.

IOW I would expect

	if (check(buf.buf == strbuf_slopbuf))
		return;

to work very similarly to

	if (buf.buf == strbuf_slopbuf)
		return;

in expressing the control flow, simply because they are visually
similar.  But of course, if we early-return because buf.buf that
does not point at strbuf_slopbuf is a sign of trouble, then the
control flow we want is

	if (buf.buf != strbuf_slopbuf)
		return;

or

	if (!(buf.buf == strbuf_slopbuf))
		return;

The latter is easier to translate to check_foo(), because what is
inside the inner parentheses is the condition we expect, and we
would like check_foo() to complain when the condition does not hold.

For the "check_foo()" thing to work in a similar way, while having
the side effect of reporting any failed expectations, we would want
to write

	if (!check(buf.buf == strbuf_slopbuf))
		return;

And for that similarity to work, check_foo() must return false when
its expectation fails, and return true when its expectation holds.

I think that is where my "I find it very confusing" comes from.

> +	check_char(buf.buf[0], ==, '\0');
> +}

> +static void t_dynamic_init(void)
> +{
> +	struct strbuf buf;
> +
> +	strbuf_init(&buf, 1024);
> +	check_uint(buf.len, ==, 0);
> +	check_uint(buf.alloc, >=, 1024);
> +	check_char(buf.buf[0], ==, '\0');

Is it sensible to check buf.buf is not slopbuf at this point, or
does it make the test TOO intimate with the current implementation
detail?

> +	strbuf_release(&buf);
> +}
> +
> +static void t_addch(struct strbuf *buf, void *data)
> +{
> +	const char *p_ch = data;
> +	const char ch = *p_ch;
> +
> +	strbuf_addch(buf, ch);
> +	if (check_uint(buf->len, ==, 1) ||
> +	    check_uint(buf->alloc, >, 1))
> +		return; /* avoid de-referencing buf->buf */

Again, I find the return values from these check_uint() calls highly
confusing, if this is saying "if len is 1 and alloc is more than 1,
then we are in an expected state and can further validate that buf[0]
is ch and buf[1] is NULL, but otherwise we should punt".  The polarity
looks screwy.  Perhaps it is just me?

> +	check_char(buf->buf[0], ==, ch);
> +	check_char(buf->buf[1], ==, '\0');
> +}

In any case, this t_addch() REQUIRES that incoming buf is empty,
doesn't it?  I do not think it is sensible.  I would have expected
that it would be more like

	t_addch(struct strbuf *buf, void *data)
	{
		char ch = *(char *)data;
		size_t orig_alloc = buf->alloc;
		size_t orig_len = buf->len;

		if (!assert_sane_strbuf(buf))
			return;
                strbuf_addch(buf, ch);
		if (!assert_sane_strbuf(buf))
			return;
		check_uint(buf->len, ==, orig_len + 1);
		check_uint(buf->alloc, >=, orig_alloc);
                check_char(buf->buf[buf->len - 1], ==, ch);
                check_char(buf->buf[buf->len], ==, '\0');
	}

to ensure that we can add a ch to a strbuf with any existing
contents and get a one-byte longer contents than before, with the
last byte of the buffer becoming 'ch' and still NUL terminated.

And we protect ourselves with a helper that checks if the given
strbuf looks *sane*.

	static int assert_sane_strbuf(struct strbuf *buf)
	{
        	/* can use slopbuf only when the length is 0 */
		if (buf->buf == strbuf_slopbuf)
                	return (buf->len == 0);
		/* everybody else must have non-NULL buffer */
		if (buf->buf == NULL)
			return 0;
                /* 
		 * alloc must be at least 1 byte larger than len
		 * for the terminating NUL at the end.
		 */
		return ((buf->len + 1 <= buf->alloc) &&
		    	(buf->buf[buf->len] == '\0'));
	}

You can obviously use your check_foo() for the individual checks
done in this function to get a more detailed diagnosis, but because
I have confused myself enough by thinking about their polarity, I
wrote this in barebones comparison instead.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 0/3] Add unit test framework and project plan
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
                       ` (3 preceding siblings ...)
  2023-08-17 20:38     ` [PATCH v7 0/3] Add unit test framework and project plan Junio C Hamano
@ 2023-08-24 20:11     ` Josh Steadmon
  2023-09-13 18:14       ` Junio C Hamano
  4 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-08-24 20:11 UTC (permalink / raw)
  To: git; +Cc: linusa, calvinwan, phillip.wood123, gitster, rsbecker

BTW, I'm going to be AFK for a couple weeks, so it will be a while
before I'm able to address feedback on this series. Thanks in advance.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 0/3] Add unit test framework and project plan
  2023-08-24 20:11     ` Josh Steadmon
@ 2023-09-13 18:14       ` Junio C Hamano
  0 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-09-13 18:14 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, linusa, calvinwan, phillip.wood123, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> BTW, I'm going to be AFK for a couple weeks, so it will be a while
> before I'm able to address feedback on this series. Thanks in advance.

OK.  Enjoy your time off.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-08-18  0:12       ` Junio C Hamano
@ 2023-09-22 20:05         ` Junio C Hamano
  2023-09-24 13:57           ` phillip.wood123
  2023-10-09 17:37         ` Josh Steadmon
  1 sibling, 1 reply; 67+ messages in thread
From: Junio C Hamano @ 2023-09-22 20:05 UTC (permalink / raw)
  To: phillip.wood123; +Cc: Josh Steadmon, git, linusa, calvinwan, rsbecker

It seems this got stuck during Josh's absense and I didn't ping it
further, but I should have noticed that you are the author of this
patch, and pinged you in the meantime.

Any thought on the "polarity" of the return values from the
assertion?  I still find it confusing and hard to follow.

Thanks.

> Josh Steadmon <steadmon@google.com> writes:
>
>> +test_expect_success 'TAP output from unit tests' '
>> +	cat >expect <<-EOF &&
>> +	ok 1 - passing test
>> +	ok 2 - passing test and assertion return 0
>> +	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
>> +	#    left: 1
>> +	#   right: 2
>> +	not ok 3 - failing test
>> +	ok 4 - failing test and assertion return -1
>> +	not ok 5 - passing TEST_TODO() # TODO
>> +	ok 6 - passing TEST_TODO() returns 0
>> +	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
>> +	not ok 7 - failing TEST_TODO()
>> +	ok 8 - failing TEST_TODO() returns -1
>> +	# check "0" failed at t/unit-tests/t-basic.c:30
>> +	# skipping test - missing prerequisite
>> +	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
>> +	ok 9 - test_skip() # SKIP
>> +	ok 10 - skipped test returns 0
>> +	# skipping test - missing prerequisite
>> +	ok 11 - test_skip() inside TEST_TODO() # SKIP
>> +	ok 12 - test_skip() inside TEST_TODO() returns 0
>> +	# check "0" failed at t/unit-tests/t-basic.c:48
>> +	not ok 13 - TEST_TODO() after failing check
>> +	ok 14 - TEST_TODO() after failing check returns -1
>> +	# check "0" failed at t/unit-tests/t-basic.c:56
>> +	not ok 15 - failing check after TEST_TODO()
>> +	ok 16 - failing check after TEST_TODO() returns -1
>> +	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
>> +	#    left: "\011hello\\\\"
>> +	#   right: "there\"\012"
>> +	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
>> +	#    left: "NULL"
>> +	#   right: NULL
>> +	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
>> +	#    left: ${SQ}a${SQ}
>> +	#   right: ${SQ}\012${SQ}
>> +	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
>> +	#    left: ${SQ}\\\\${SQ}
>> +	#   right: ${SQ}\\${SQ}${SQ}
>> +	not ok 17 - messages from failing string and char comparison
>> +	# BUG: test has no checks at t/unit-tests/t-basic.c:91
>> +	not ok 18 - test with no checks
>> +	ok 19 - test with no checks returns -1
>> +	1..19
>> +	EOF
>
> Presumably t-basic will serve as a catalog of check_* functions and
> the test binary, together with this test piece, will keep growing as
> we gain features in the unit tests infrastructure.  I wonder how
> maintainable the above is, though.  When we acquire new test, we
> would need to renumber.  What if multiple developers add new
> features to the catalog at the same time?
>
>> diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
>> new file mode 100644
>> index 0000000000..e292d58348
>> --- /dev/null
>> +++ b/t/unit-tests/.gitignore
>> @@ -0,0 +1,2 @@
>> +/t-basic
>> +/t-strbuf
>
> Also, can we come up with some naming convention so that we do not
> have to keep adding to this file every time we add a new test
> script?
>
>> diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
>> new file mode 100644
>> index 0000000000..561611e242
>> --- /dev/null
>> +++ b/t/unit-tests/t-strbuf.c
>> @@ -0,0 +1,75 @@
>> +#include "test-lib.h"
>> +#include "strbuf.h"
>> +
>> +/* wrapper that supplies tests with an initialized strbuf */
>> +static void setup(void (*f)(struct strbuf*, void*), void *data)
>> +{
>> +	struct strbuf buf = STRBUF_INIT;
>> +
>> +	f(&buf, data);
>> +	strbuf_release(&buf);
>> +	check_uint(buf.len, ==, 0);
>> +	check_uint(buf.alloc, ==, 0);
>> +	check(buf.buf == strbuf_slopbuf);
>> +	check_char(buf.buf[0], ==, '\0');
>> +}
>
> What I am going to utter from here on are not complaints but purely
> meant as questions.  
>
> Would the resulting output and maintainability of the tests change
> (improve, or worsen) if we introduce
>
> 	static void assert_empty_strbuf(struct strbuf *buf)
> 	{
> 		check_uint(buf->len, ==, 0);
>                 check_uint(buf->alloc, ==, 0);
> 		check(buf.buf == strbuf_slopbuf);
> 		check_char(buf.buf[0], ==, '\0');
> 	}
>
> and call it from the setup() function to ensure that
> strbuf_release(&buf) it calls after running customer test f() brings
> the buffer in a reasonably initialized state?  The t_static_init()
> test should be able to say
>
> 	static void t_static_init(void)
> 	{
> 		struct strbuf buf = STRBUF_INIT;
> 		assert_empty_strbuf(&buf);
> 	}
>
> if we did so, but is that a good thing or a bad thing (e.g. it may
> make it harder to figure out where the real error came from, because
> of the "line number" thing will not easily capture the caller of the
> caller, perhaps)?  
>
>> +static void t_static_init(void)
>> +{
>> +	struct strbuf buf = STRBUF_INIT;
>> +
>> +	check_uint(buf.len, ==, 0);
>> +	check_uint(buf.alloc, ==, 0);
>> +	if (check(buf.buf == strbuf_slopbuf))
>> +		return; /* avoid de-referencing buf.buf */
>
> strbuf_slopbuf[0] is designed to be readable.  Do check() assertions
> return their parameter negated?
>
> In other words, if "we expect buf.buf to point at the slopbuf, but
> if that expectation does not hold, check() returns true and we
> refrain from doing check_char() on the next line because we cannot
> trust what buf.buf points at" is what is going on here, I find it
> very confusing.  Perhaps my intuition is failing me, but somehow I
> would have expected that passing check_foo() would return true while
> failing ones would return false.
>
> IOW I would expect
>
> 	if (check(buf.buf == strbuf_slopbuf))
> 		return;
>
> to work very similarly to
>
> 	if (buf.buf == strbuf_slopbuf)
> 		return;
>
> in expressing the control flow, simply because they are visually
> similar.  But of course, if we early-return because buf.buf that
> does not point at strbuf_slopbuf is a sign of trouble, then the
> control flow we want is
>
> 	if (buf.buf != strbuf_slopbuf)
> 		return;
>
> or
>
> 	if (!(buf.buf == strbuf_slopbuf))
> 		return;
>
> The latter is easier to translate to check_foo(), because what is
> inside the inner parentheses is the condition we expect, and we
> would like check_foo() to complain when the condition does not hold.
>
> For the "check_foo()" thing to work in a similar way, while having
> the side effect of reporting any failed expectations, we would want
> to write
>
> 	if (!check(buf.buf == strbuf_slopbuf))
> 		return;
>
> And for that similarity to work, check_foo() must return false when
> its expectation fails, and return true when its expectation holds.
>
> I think that is where my "I find it very confusing" comes from.
>
>> +	check_char(buf.buf[0], ==, '\0');
>> +}
>
>> +static void t_dynamic_init(void)
>> +{
>> +	struct strbuf buf;
>> +
>> +	strbuf_init(&buf, 1024);
>> +	check_uint(buf.len, ==, 0);
>> +	check_uint(buf.alloc, >=, 1024);
>> +	check_char(buf.buf[0], ==, '\0');
>
> Is it sensible to check buf.buf is not slopbuf at this point, or
> does it make the test TOO intimate with the current implementation
> detail?
>
>> +	strbuf_release(&buf);
>> +}
>> +
>> +static void t_addch(struct strbuf *buf, void *data)
>> +{
>> +	const char *p_ch = data;
>> +	const char ch = *p_ch;
>> +
>> +	strbuf_addch(buf, ch);
>> +	if (check_uint(buf->len, ==, 1) ||
>> +	    check_uint(buf->alloc, >, 1))
>> +		return; /* avoid de-referencing buf->buf */
>
> Again, I find the return values from these check_uint() calls highly
> confusing, if this is saying "if len is 1 and alloc is more than 1,
> then we are in an expected state and can further validate that buf[0]
> is ch and buf[1] is NULL, but otherwise we should punt".  The polarity
> looks screwy.  Perhaps it is just me?
>
>> +	check_char(buf->buf[0], ==, ch);
>> +	check_char(buf->buf[1], ==, '\0');
>> +}
>
> In any case, this t_addch() REQUIRES that incoming buf is empty,
> doesn't it?  I do not think it is sensible.  I would have expected
> that it would be more like
>
> 	t_addch(struct strbuf *buf, void *data)
> 	{
> 		char ch = *(char *)data;
> 		size_t orig_alloc = buf->alloc;
> 		size_t orig_len = buf->len;
>
> 		if (!assert_sane_strbuf(buf))
> 			return;
>                 strbuf_addch(buf, ch);
> 		if (!assert_sane_strbuf(buf))
> 			return;
> 		check_uint(buf->len, ==, orig_len + 1);
> 		check_uint(buf->alloc, >=, orig_alloc);
>                 check_char(buf->buf[buf->len - 1], ==, ch);
>                 check_char(buf->buf[buf->len], ==, '\0');
> 	}
>
> to ensure that we can add a ch to a strbuf with any existing
> contents and get a one-byte longer contents than before, with the
> last byte of the buffer becoming 'ch' and still NUL terminated.
>
> And we protect ourselves with a helper that checks if the given
> strbuf looks *sane*.
>
> 	static int assert_sane_strbuf(struct strbuf *buf)
> 	{
>         	/* can use slopbuf only when the length is 0 */
> 		if (buf->buf == strbuf_slopbuf)
>                 	return (buf->len == 0);
> 		/* everybody else must have non-NULL buffer */
> 		if (buf->buf == NULL)
> 			return 0;
>                 /* 
> 		 * alloc must be at least 1 byte larger than len
> 		 * for the terminating NUL at the end.
> 		 */
> 		return ((buf->len + 1 <= buf->alloc) &&
> 		    	(buf->buf[buf->len] == '\0'));
> 	}
>
> You can obviously use your check_foo() for the individual checks
> done in this function to get a more detailed diagnosis, but because
> I have confused myself enough by thinking about their polarity, I
> wrote this in barebones comparison instead.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-09-22 20:05         ` Junio C Hamano
@ 2023-09-24 13:57           ` phillip.wood123
  2023-09-25 18:57             ` Junio C Hamano
  2023-10-06 22:58             ` Josh Steadmon
  0 siblings, 2 replies; 67+ messages in thread
From: phillip.wood123 @ 2023-09-24 13:57 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Josh Steadmon, git, linusa, calvinwan, rsbecker

Hi Junio

On 22/09/2023 21:05, Junio C Hamano wrote:
> It seems this got stuck during Josh's absense and I didn't ping it
> further, but I should have noticed that you are the author of this
> patch, and pinged you in the meantime.

Sorry I meant to reply when I saw your first message but then didn't get 
round to it.

> Any thought on the "polarity" of the return values from the
> assertion?  I still find it confusing and hard to follow.

When I was writing this I was torn between whether to follow our usual 
convention of returning zero for success and minus one for failure or to 
return one for success and zero for failure. In the end I decided to go 
with the former but I tend to agree with you that the latter would be 
easier to understand.

>>> +test_expect_success 'TAP output from unit tests' '
>>> [...]
>>> +	ok 19 - test with no checks returns -1
>>> +	1..19
>>> +	EOF
>>
>> Presumably t-basic will serve as a catalog of check_* functions and
>> the test binary, together with this test piece, will keep growing as
>> we gain features in the unit tests infrastructure.  I wonder how
>> maintainable the above is, though.  When we acquire new test, we
>> would need to renumber.  What if multiple developers add new
>> features to the catalog at the same time?

I think we could just add new tests to the end so we'd only need to 
change the "1..19" line. That will become a source of merge conflicts if 
multiple developers add new features at the same time though. Having 
several unit test programs called from separate tests in t0080 might 
help with that.

>>> diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
>>> new file mode 100644
>>> index 0000000000..e292d58348
>>> --- /dev/null
>>> +++ b/t/unit-tests/.gitignore
>>> @@ -0,0 +1,2 @@
>>> +/t-basic
>>> +/t-strbuf
>>
>> Also, can we come up with some naming convention so that we do not
>> have to keep adding to this file every time we add a new test
>> script?

Perhaps we should put the unit test binaries in a separate directory so 
we can just add that directory to .gitignore.

Best Wishes

Phillip

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-09-24 13:57           ` phillip.wood123
@ 2023-09-25 18:57             ` Junio C Hamano
  2023-10-06 22:58             ` Josh Steadmon
  1 sibling, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-09-25 18:57 UTC (permalink / raw)
  To: phillip.wood123; +Cc: Josh Steadmon, git, linusa, calvinwan, rsbecker

phillip.wood123@gmail.com writes:

> When I was writing this I was torn between whether to follow our usual
> convention of returning zero for success and minus one for failure or
> to return one for success and zero for failure. In the end I decided
> to go with the former but I tend to agree with you that the latter
> would be easier to understand.

An understandable contention.

>>>> @@ -0,0 +1,2 @@
>>>> +/t-basic
>>>> +/t-strbuf
>>>
>>> Also, can we come up with some naming convention so that we do not
>>> have to keep adding to this file every time we add a new test
>>> script?
>
> Perhaps we should put the unit test binaries in a separate directory
> so we can just add that directory to .gitignore.

Yeah, if we can do that, that would help organizing these tests.

Thanks for working on this.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-09-24 13:57           ` phillip.wood123
  2023-09-25 18:57             ` Junio C Hamano
@ 2023-10-06 22:58             ` Josh Steadmon
  1 sibling, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-10-06 22:58 UTC (permalink / raw)
  To: phillip.wood; +Cc: Junio C Hamano, git, linusa, calvinwan, rsbecker

On 2023.09.24 14:57, phillip.wood123@gmail.com wrote:
> On 22/09/2023 21:05, Junio C Hamano wrote:
> > Any thought on the "polarity" of the return values from the
> > assertion?  I still find it confusing and hard to follow.
> 
> When I was writing this I was torn between whether to follow our usual
> convention of returning zero for success and minus one for failure or to
> return one for success and zero for failure. In the end I decided to go with
> the former but I tend to agree with you that the latter would be easier to
> understand.

Agreed. V8 will switch to 0 for failure and 1 for success for the TEST,
TEST_TODO, and check macros.


> > > > +test_expect_success 'TAP output from unit tests' '
> > > > [...]
> > > > +	ok 19 - test with no checks returns -1
> > > > +	1..19
> > > > +	EOF
> > > 
> > > Presumably t-basic will serve as a catalog of check_* functions and
> > > the test binary, together with this test piece, will keep growing as
> > > we gain features in the unit tests infrastructure.  I wonder how
> > > maintainable the above is, though.  When we acquire new test, we
> > > would need to renumber.  What if multiple developers add new
> > > features to the catalog at the same time?
> 
> I think we could just add new tests to the end so we'd only need to change
> the "1..19" line. That will become a source of merge conflicts if multiple
> developers add new features at the same time though. Having several unit
> test programs called from separate tests in t0080 might help with that.

My hope is that test-lib.c will not have to grow too extensively after
this series; that said, it's already been a pain to have to adjust the
t0080 expected text several times just during development of this
series. I'll look into splitting this into several "meta-tests", but I'm
not sure I'll get to it for V8 yet.


> > > > diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
> > > > new file mode 100644
> > > > index 0000000000..e292d58348
> > > > --- /dev/null
> > > > +++ b/t/unit-tests/.gitignore
> > > > @@ -0,0 +1,2 @@
> > > > +/t-basic
> > > > +/t-strbuf
> > > 
> > > Also, can we come up with some naming convention so that we do not
> > > have to keep adding to this file every time we add a new test
> > > script?
> 
> Perhaps we should put the unit test binaries in a separate directory so we
> can just add that directory to .gitignore.

Sounds good to me.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v7 2/3] unit tests: add TAP unit test framework
  2023-08-18  0:12       ` Junio C Hamano
  2023-09-22 20:05         ` Junio C Hamano
@ 2023-10-09 17:37         ` Josh Steadmon
  1 sibling, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-10-09 17:37 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: phillip.wood123, git, linusa, calvinwan, rsbecker

On 2023.08.17 17:12, Junio C Hamano wrote:
> 
> What I am going to utter from here on are not complaints but purely
> meant as questions.  
> 
> Would the resulting output and maintainability of the tests change
> (improve, or worsen) if we introduce
> 
> 	static void assert_empty_strbuf(struct strbuf *buf)
> 	{
> 		check_uint(buf->len, ==, 0);
>                 check_uint(buf->alloc, ==, 0);
> 		check(buf.buf == strbuf_slopbuf);
> 		check_char(buf.buf[0], ==, '\0');
> 	}
> 
> and call it from the setup() function to ensure that
> strbuf_release(&buf) it calls after running customer test f() brings
> the buffer in a reasonably initialized state?  The t_static_init()
> test should be able to say
> 
> 	static void t_static_init(void)
> 	{
> 		struct strbuf buf = STRBUF_INIT;
> 		assert_empty_strbuf(&buf);
> 	}
> 
> if we did so, but is that a good thing or a bad thing (e.g. it may
> make it harder to figure out where the real error came from, because
> of the "line number" thing will not easily capture the caller of the
> caller, perhaps)?  

I am unsure whether or not this is an improvement. While it would
certainly help readability and reduce duplication if this were
production code, in test code it can often be more valuable to be
verbose and explicit, so that individual broken test cases can be
quickly understood without having to do a lot of cross referencing.

I'll hold off on adding any more utility functions in t-strbuf for V8,
but if you or other folks feel strongly about it we can address it in
V9.


> > +	check_char(buf.buf[0], ==, '\0');
> > +}
> 
> > +static void t_dynamic_init(void)
> > +{
> > +	struct strbuf buf;
> > +
> > +	strbuf_init(&buf, 1024);
> > +	check_uint(buf.len, ==, 0);
> > +	check_uint(buf.alloc, >=, 1024);
> > +	check_char(buf.buf[0], ==, '\0');
> 
> Is it sensible to check buf.buf is not slopbuf at this point, or
> does it make the test TOO intimate with the current implementation
> detail?

Yes, I think this is too much of an internal detail. None of the users
of strbuf ever reference it directly. Presumably for library-ish code,
we should stick to testing just the user-observable parts, not the
implementation.


> > +	check_char(buf->buf[0], ==, ch);
> > +	check_char(buf->buf[1], ==, '\0');
> > +}
> 
> In any case, this t_addch() REQUIRES that incoming buf is empty,
> doesn't it?  I do not think it is sensible.  I would have expected
> that it would be more like
> 
> 	t_addch(struct strbuf *buf, void *data)
> 	{
> 		char ch = *(char *)data;
> 		size_t orig_alloc = buf->alloc;
> 		size_t orig_len = buf->len;
> 
> 		if (!assert_sane_strbuf(buf))
> 			return;
>                 strbuf_addch(buf, ch);
> 		if (!assert_sane_strbuf(buf))
> 			return;
> 		check_uint(buf->len, ==, orig_len + 1);
> 		check_uint(buf->alloc, >=, orig_alloc);
>                 check_char(buf->buf[buf->len - 1], ==, ch);
>                 check_char(buf->buf[buf->len], ==, '\0');
> 	}
> 
> to ensure that we can add a ch to a strbuf with any existing
> contents and get a one-byte longer contents than before, with the
> last byte of the buffer becoming 'ch' and still NUL terminated.
> 
> And we protect ourselves with a helper that checks if the given
> strbuf looks *sane*.

Yeah, in general I think this is a good improvement, but again I'm not
sure if it's worth adding additional helpers. I'll try to rework this a
bit in V8.


> 	static int assert_sane_strbuf(struct strbuf *buf)
> 	{
>         	/* can use slopbuf only when the length is 0 */
> 		if (buf->buf == strbuf_slopbuf)
>                 	return (buf->len == 0);
> 		/* everybody else must have non-NULL buffer */
> 		if (buf->buf == NULL)
> 			return 0;
>                 /* 
> 		 * alloc must be at least 1 byte larger than len
> 		 * for the terminating NUL at the end.
> 		 */
> 		return ((buf->len + 1 <= buf->alloc) &&
> 		    	(buf->buf[buf->len] == '\0'));
> 	}
> 
> You can obviously use your check_foo() for the individual checks
> done in this function to get a more detailed diagnosis, but because
> I have confused myself enough by thinking about their polarity, I
> wrote this in barebones comparison instead.
> 

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v8 0/3] Add unit test framework and project plan
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
                     ` (4 preceding siblings ...)
  2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
@ 2023-10-09 22:21   ` Josh Steadmon
  2023-10-09 22:21     ` [PATCH v8 1/3] unit tests: Add a project plan document Josh Steadmon
                       ` (5 more replies)
  2023-11-01 23:31   ` [PATCH v9 " Josh Steadmon
  2023-11-09 18:50   ` [PATCH v10 0/3] Add unit test framework and project plan Josh Steadmon
  7 siblings, 6 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-10-09 22:21 UTC (permalink / raw)
  To: git; +Cc: phillip.wood123, linusa, calvinwan, gitster, rsbecker

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This series begins with a project document covering our goals for adding
unit tests and a discussion of alternative frameworks considered, as
well as the features used to evaluate them. A rendered preview of this
doc can be found at [2]. It also adds Phillip Wood's TAP implemenation
(with some slightly re-worked Makefile rules) and a sample strbuf unit
test. Finally, we modify the configs for GitHub and Cirrus CI to run the
unit tests. Sample runs showing successful CI runs can be found at [3],
[4], and [5].

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc
[3] https://github.com/steadmon/git/actions/runs/5884659246/job/15959781385#step:4:1803
[4] https://github.com/steadmon/git/actions/runs/5884659246/job/15959938401#step:5:186
[5] https://cirrus-ci.com/task/6126304366428160 (unrelated tests failed,
    but note that t-strbuf ran successfully)

In addition to reviewing the patches in this series, reviewers can help
this series progress by chiming in on these remaining TODOs:
- Figure out if we should split t-basic.c into multiple meta-tests, to
  avoid merge conflicts and changes to expected text in
  t0080-unit-test-output.sh.
- Figure out if we should de-duplicate assertions in t-strbuf.c at the
  cost of making tests less self-contained and diagnostic output less
  helpful.
- Figure out if we should collect unit tests statistics similar to the
  "counts" files for shell tests
- Decide if it's OK to wait on sharding unit tests across "sliced" CI
  instances
- Provide guidelines for writing new unit tests

Changes in v8:
- Flipped return values for TEST, TEST_TODO, and check_* macros &
  functions. This makes it easier to reason about control flow for
  patterns like:
    if (check(some_condition)) { ... }
- Moved unit test binaries to t/unit-tests/bin to simplify .gitignore
  patterns.
- Removed testing of some strbuf implementation details in t-strbuf.c


Josh Steadmon (2):
  unit tests: Add a project plan document
  ci: run unit tests in CI

Phillip Wood (1):
  unit tests: add TAP unit test framework

 .cirrus.yml                            |   2 +-
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 220 +++++++++++++++++
 Makefile                               |  28 ++-
 ci/run-build-and-tests.sh              |   2 +
 ci/run-test-slice.sh                   |   5 +
 t/Makefile                             |  15 +-
 t/t0080-unit-test-output.sh            |  58 +++++
 t/unit-tests/.gitignore                |   1 +
 t/unit-tests/t-basic.c                 |  95 +++++++
 t/unit-tests/t-strbuf.c                | 120 +++++++++
 t/unit-tests/test-lib.c                | 329 +++++++++++++++++++++++++
 t/unit-tests/test-lib.h                | 143 +++++++++++
 13 files changed, 1014 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/technical/unit-tests.txt
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

Range-diff against v7:
-:  ---------- > 1:  81c5148a12 unit tests: Add a project plan document
1:  3cc98d4045 ! 2:  00d3c95a81 unit tests: add TAP unit test framework
    @@ Commit message
     
                      check_uint(buf.len, ==, 0);
                      check_uint(buf.alloc, ==, 0);
    -                 if (check(buf.buf == strbuf_slopbuf))
    -                        return; /* avoid SIGSEV */
                      check_char(buf.buf[0], ==, '\0');
              }
     
    @@ Commit message
         checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
         unit tests for strbuf.c
     
    -    The unit tests can be built with "make unit-tests" (this works but the
    -    Makefile changes need some further work). Once they have been built they
    -    can be run manually (e.g t/unit-tests/t-strbuf) or with prove.
    +    The unit tests will be built as part of the default "make all" target,
    +    to avoid bitrot. If you wish to build just the unit tests, you can run
    +    "make build-unit-tests". To run the tests, you can use "make unit-tests"
    +    or run the test binaries directly, as in "./t/unit-tests/bin/t-strbuf".
     
         Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
    @@ Makefile: TEST_BUILTINS_OBJS =
      THIRD_PARTY_SOURCES =
     +UNIT_TEST_PROGRAMS =
     +UNIT_TEST_DIR = t/unit-tests
    ++UNIT_TEST_BIN = $(UNIT_TEST_DIR)/bin
      
      # Having this variable in your environment would break pipelines because
      # you cause "cd" to echo its destination to stdout.  It can also take
    @@ Makefile: THIRD_PARTY_SOURCES += compat/regex/%
      
     +UNIT_TEST_PROGRAMS += t-basic
     +UNIT_TEST_PROGRAMS += t-strbuf
    -+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_DIR)/%$X,$(UNIT_TEST_PROGRAMS))
    ++UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_BIN)/%$X,$(UNIT_TEST_PROGRAMS))
     +UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
     +UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
     +
    @@ Makefile: $(FUZZ_PROGRAMS): all
      
      fuzz-all: $(FUZZ_PROGRAMS)
     +
    -+$(UNIT_TEST_PROGS): $(UNIT_TEST_DIR)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS
    ++$(UNIT_TEST_BIN):
    ++	@mkdir -p $(UNIT_TEST_BIN)
    ++
    ++$(UNIT_TEST_PROGS): $(UNIT_TEST_BIN)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS $(UNIT_TEST_BIN)
     +	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
     +		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
     +
    @@ t/Makefile: TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
      TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
      CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
      CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
    -+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
    ++UNIT_TESTS = $(sort $(filter-out unit-tests/bin/t-basic%,$(wildcard unit-tests/bin/t-*)))
      
      # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
      # checks all tests in all scripts via a single invocation, so tell individual
    @@ t/t0080-unit-test-output.sh (new)
     +test_expect_success 'TAP output from unit tests' '
     +	cat >expect <<-EOF &&
     +	ok 1 - passing test
    -+	ok 2 - passing test and assertion return 0
    ++	ok 2 - passing test and assertion return 1
     +	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
     +	#    left: 1
     +	#   right: 2
     +	not ok 3 - failing test
    -+	ok 4 - failing test and assertion return -1
    ++	ok 4 - failing test and assertion return 0
     +	not ok 5 - passing TEST_TODO() # TODO
    -+	ok 6 - passing TEST_TODO() returns 0
    ++	ok 6 - passing TEST_TODO() returns 1
     +	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
     +	not ok 7 - failing TEST_TODO()
    -+	ok 8 - failing TEST_TODO() returns -1
    ++	ok 8 - failing TEST_TODO() returns 0
     +	# check "0" failed at t/unit-tests/t-basic.c:30
     +	# skipping test - missing prerequisite
     +	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
     +	ok 9 - test_skip() # SKIP
    -+	ok 10 - skipped test returns 0
    ++	ok 10 - skipped test returns 1
     +	# skipping test - missing prerequisite
     +	ok 11 - test_skip() inside TEST_TODO() # SKIP
    -+	ok 12 - test_skip() inside TEST_TODO() returns 0
    ++	ok 12 - test_skip() inside TEST_TODO() returns 1
     +	# check "0" failed at t/unit-tests/t-basic.c:48
     +	not ok 13 - TEST_TODO() after failing check
    -+	ok 14 - TEST_TODO() after failing check returns -1
    ++	ok 14 - TEST_TODO() after failing check returns 0
     +	# check "0" failed at t/unit-tests/t-basic.c:56
     +	not ok 15 - failing check after TEST_TODO()
    -+	ok 16 - failing check after TEST_TODO() returns -1
    ++	ok 16 - failing check after TEST_TODO() returns 0
     +	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
     +	#    left: "\011hello\\\\"
     +	#   right: "there\"\012"
    @@ t/t0080-unit-test-output.sh (new)
     +	not ok 17 - messages from failing string and char comparison
     +	# BUG: test has no checks at t/unit-tests/t-basic.c:91
     +	not ok 18 - test with no checks
    -+	ok 19 - test with no checks returns -1
    ++	ok 19 - test with no checks returns 0
     +	1..19
     +	EOF
     +
    -+	! "$GIT_BUILD_DIR"/t/unit-tests/t-basic >actual &&
    ++	! "$GIT_BUILD_DIR"/t/unit-tests/bin/t-basic >actual &&
     +	test_cmp expect actual
     +'
     +
    @@ t/t0080-unit-test-output.sh (new)
     
      ## t/unit-tests/.gitignore (new) ##
     @@
    -+/t-basic
    -+/t-strbuf
    ++/bin
     
      ## t/unit-tests/t-basic.c (new) ##
     @@
    @@ t/unit-tests/t-basic.c (new)
     +static int do_skip(void)
     +{
     +	test_skip("missing prerequisite");
    -+	return 0;
    ++	return 1;
     +}
     +
     +static void t_skip_todo(void)
    @@ t/unit-tests/t-basic.c (new)
     +int cmd_main(int argc, const char **argv)
     +{
     +	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
    -+	TEST(t_res(0), "passing test and assertion return 0");
    ++	TEST(t_res(1), "passing test and assertion return 1");
     +	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
    -+	TEST(t_res(-1), "failing test and assertion return -1");
    ++	TEST(t_res(0), "failing test and assertion return 0");
     +	test_res = TEST(t_todo(0), "passing TEST_TODO()");
    -+	TEST(t_res(0), "passing TEST_TODO() returns 0");
    ++	TEST(t_res(1), "passing TEST_TODO() returns 1");
     +	test_res = TEST(t_todo(1), "failing TEST_TODO()");
    -+	TEST(t_res(-1), "failing TEST_TODO() returns -1");
    ++	TEST(t_res(0), "failing TEST_TODO() returns 0");
     +	test_res = TEST(t_skip(), "test_skip()");
    -+	TEST(check_int(test_res, ==, 0), "skipped test returns 0");
    ++	TEST(check_int(test_res, ==, 1), "skipped test returns 1");
     +	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
    -+	TEST(t_res(0), "test_skip() inside TEST_TODO() returns 0");
    ++	TEST(t_res(1), "test_skip() inside TEST_TODO() returns 1");
     +	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
    -+	TEST(check_int(test_res, ==, -1), "TEST_TODO() after failing check returns -1");
    ++	TEST(check_int(test_res, ==, 0), "TEST_TODO() after failing check returns 0");
     +	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
    -+	TEST(check_int(test_res, ==, -1), "failing check after TEST_TODO() returns -1");
    ++	TEST(check_int(test_res, ==, 0), "failing check after TEST_TODO() returns 0");
     +	TEST(t_messages(), "messages from failing string and char comparison");
     +	test_res = TEST(t_empty(), "test with no checks");
    -+	TEST(check_int(test_res, ==, -1), "test with no checks returns -1");
    ++	TEST(check_int(test_res, ==, 0), "test with no checks returns 0");
     +
     +	return test_done();
     +}
    @@ t/unit-tests/t-strbuf.c (new)
     +#include "test-lib.h"
     +#include "strbuf.h"
     +
    -+/* wrapper that supplies tests with an initialized strbuf */
    ++/* wrapper that supplies tests with an empty, initialized strbuf */
     +static void setup(void (*f)(struct strbuf*, void*), void *data)
     +{
     +	struct strbuf buf = STRBUF_INIT;
    @@ t/unit-tests/t-strbuf.c (new)
     +	strbuf_release(&buf);
     +	check_uint(buf.len, ==, 0);
     +	check_uint(buf.alloc, ==, 0);
    -+	check(buf.buf == strbuf_slopbuf);
    -+	check_char(buf.buf[0], ==, '\0');
    ++}
    ++
    ++/* wrapper that supplies tests with a populated, initialized strbuf */
    ++static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, void *data)
    ++{
    ++	struct strbuf buf = STRBUF_INIT;
    ++
    ++	strbuf_addstr(&buf, init_str);
    ++	check_uint(buf.len, ==, strlen(init_str));
    ++	f(&buf, data);
    ++	strbuf_release(&buf);
    ++	check_uint(buf.len, ==, 0);
    ++	check_uint(buf.alloc, ==, 0);
    ++}
    ++
    ++static int assert_sane_strbuf(struct strbuf *buf)
    ++{
    ++	/* Initialized strbufs should always have a non-NULL buffer */
    ++	if (buf->buf == NULL)
    ++		return 0;
    ++	/* Buffers should always be NUL-terminated */
    ++	if (buf->buf[buf->len] != '\0')
    ++		return 0;
    ++	/*
    ++	 * Freshly-initialized strbufs may not have a dynamically allocated
    ++	 * buffer
    ++	 */
    ++	if (buf->len == 0 && buf->alloc == 0)
    ++		return 1;
    ++	/* alloc must be at least one byte larger than len */
    ++	return buf->len + 1 <= buf->alloc;
     +}
     +
     +static void t_static_init(void)
    @@ t/unit-tests/t-strbuf.c (new)
     +
     +	check_uint(buf.len, ==, 0);
     +	check_uint(buf.alloc, ==, 0);
    -+	if (check(buf.buf == strbuf_slopbuf))
    -+		return; /* avoid de-referencing buf.buf */
     +	check_char(buf.buf[0], ==, '\0');
     +}
     +
    @@ t/unit-tests/t-strbuf.c (new)
     +	struct strbuf buf;
     +
     +	strbuf_init(&buf, 1024);
    ++	check(assert_sane_strbuf(&buf));
     +	check_uint(buf.len, ==, 0);
     +	check_uint(buf.alloc, >=, 1024);
     +	check_char(buf.buf[0], ==, '\0');
    @@ t/unit-tests/t-strbuf.c (new)
     +{
     +	const char *p_ch = data;
     +	const char ch = *p_ch;
    ++	size_t orig_alloc = buf->alloc;
    ++	size_t orig_len = buf->len;
     +
    ++	if (!check(assert_sane_strbuf(buf)))
    ++		return;
     +	strbuf_addch(buf, ch);
    -+	if (check_uint(buf->len, ==, 1) ||
    -+	    check_uint(buf->alloc, >, 1))
    ++	if (!check(assert_sane_strbuf(buf)))
    ++		return;
    ++	if (!(check_uint(buf->len, ==, orig_len + 1) &&
    ++	      check_uint(buf->alloc, >=, orig_alloc)))
     +		return; /* avoid de-referencing buf->buf */
    -+	check_char(buf->buf[0], ==, ch);
    -+	check_char(buf->buf[1], ==, '\0');
    ++	check_char(buf->buf[buf->len - 1], ==, ch);
    ++	check_char(buf->buf[buf->len], ==, '\0');
     +}
     +
     +static void t_addstr(struct strbuf *buf, void *data)
     +{
     +	const char *text = data;
     +	size_t len = strlen(text);
    ++	size_t orig_alloc = buf->alloc;
    ++	size_t orig_len = buf->len;
     +
    ++	if (!check(assert_sane_strbuf(buf)))
    ++		return;
     +	strbuf_addstr(buf, text);
    -+	if (check_uint(buf->len, ==, len) ||
    -+	    check_uint(buf->alloc, >, len) ||
    -+	    check_char(buf->buf[len], ==, '\0'))
    ++	if (!check(assert_sane_strbuf(buf)))
    ++		return;
    ++	if (!(check_uint(buf->len, ==, orig_len + len) &&
    ++	      check_uint(buf->alloc, >=, orig_alloc) &&
    ++	      check_uint(buf->alloc, >, orig_len + len) &&
    ++	      check_char(buf->buf[orig_len + len], ==, '\0')))
     +	    return;
    -+	check_str(buf->buf, text);
    ++	check_str(buf->buf + orig_len, text);
     +}
     +
     +int cmd_main(int argc, const char **argv)
     +{
    -+	if (TEST(t_static_init(), "static initialization works"))
    ++	if (!TEST(t_static_init(), "static initialization works"))
     +		test_skip_all("STRBUF_INIT is broken");
     +	TEST(t_dynamic_init(), "dynamic initialization works");
     +	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
     +	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
    ++	TEST(setup_populated(t_addch, "initial value", "a"),
    ++	     "strbuf_addch appends to initial value");
     +	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
    ++	TEST(setup_populated(t_addstr, "initial value", "hello there"),
    ++	     "strbuf_addstr appends string to initial value");
     +
     +	return test_done();
     +}
    @@ t/unit-tests/test-lib.c (new)
     +	va_end(ap);
     +	ctx.running = 0;
     +	if (ctx.skip_all)
    -+		return 0;
    ++		return 1;
     +	putc('\n', stdout);
     +	fflush(stdout);
     +	ctx.failed |= ctx.result == RESULT_FAILURE;
     +
    -+	return -(ctx.result == RESULT_FAILURE);
    ++	return ctx.result != RESULT_FAILURE;
     +}
     +
     +static void test_fail(void)
    @@ t/unit-tests/test-lib.c (new)
     +
     +	if (ctx.result == RESULT_SKIP) {
     +		test_msg("skipping check '%s' at %s", check, location);
    -+		return 0;
    ++		return 1;
     +	} else if (!ctx.todo) {
     +		if (ok) {
     +			test_pass();
    @@ t/unit-tests/test-lib.c (new)
     +		}
     +	}
     +
    -+	return -!ok;
    ++	return !!ok;
     +}
     +
     +void test__todo_begin(void)
    @@ t/unit-tests/test-lib.c (new)
     +
     +	ctx.todo = 0;
     +	if (ctx.result == RESULT_SKIP)
    -+		return 0;
    -+	if (!res) {
    ++		return 1;
    ++	if (res) {
     +		test_msg("todo check '%s' succeeded at %s", check, location);
     +		test_fail();
     +	} else {
     +		test_todo();
     +	}
     +
    -+	return -!res;
    ++	return !res;
     +}
     +
     +int check_bool_loc(const char *loc, const char *check, int ok)
    @@ t/unit-tests/test-lib.c (new)
     +{
     +	int ret = test_assert(loc, check, ok);
     +
    -+	if (ret) {
    ++	if (!ret) {
     +		test_msg("   left: %"PRIdMAX, a);
     +		test_msg("  right: %"PRIdMAX, b);
     +	}
    @@ t/unit-tests/test-lib.c (new)
     +{
     +	int ret = test_assert(loc, check, ok);
     +
    -+	if (ret) {
    ++	if (!ret) {
     +		test_msg("   left: %"PRIuMAX, a);
     +		test_msg("  right: %"PRIuMAX, b);
     +	}
    @@ t/unit-tests/test-lib.c (new)
     +{
     +	int ret = test_assert(loc, check, ok);
     +
    -+	if (ret) {
    ++	if (!ret) {
     +		fflush(stderr);
     +		print_char("   left", a);
     +		print_char("  right", b);
    @@ t/unit-tests/test-lib.c (new)
     +	int ok = (!a && !b) || (a && b && !strcmp(a, b));
     +	int ret = test_assert(loc, check, ok);
     +
    -+	if (ret) {
    ++	if (!ret) {
     +		fflush(stderr);
     +		print_str("   left", a);
     +		print_str("  right", b);
    @@ t/unit-tests/test-lib.h (new)
     +#include "git-compat-util.h"
     +
     +/*
    -+ * Run a test function, returns 0 if the test succeeds, -1 if it
    ++ * Run a test function, returns 1 if the test succeeds, 0 if it
     + * fails. If test_skip_all() has been called then the test will not be
     + * run. The description for each test should be unique. For example:
     + *
    @@ t/unit-tests/test-lib.h (new)
     +void test_msg(const char *format, ...);
     +
     +/*
    -+ * Test checks are built around test_assert(). checks return 0 on
    -+ * success, -1 on failure. If any check fails then the test will
    ++ * Test checks are built around test_assert(). checks return 1 on
    ++ * success, 0 on failure. If any check fails then the test will
     + * fail. To create a custom check define a function that wraps
     + * test_assert() and a macro to wrap that function. For example:
     + *
    @@ t/unit-tests/test-lib.h (new)
     +
     +/*
     + * Wrap a check that is known to fail. If the check succeeds then the
    -+ * test will fail. Returns 0 if the check fails, -1 if it
    ++ * test will fail. Returns 1 if the check fails, 0 if it
     + * succeeds. For example:
     + *
     + *  TEST_TODO(check(0));
2:  abf4dc41ac ! 3:  aa1dfa4892 ci: run unit tests in CI
    @@ ci/run-test-slice.sh: group "Run tests" make --quiet -C t T="$(cd t &&
     +fi
     +
      check_unignored_build_artifacts
    -
    - ## t/Makefile ##
    -@@ t/Makefile: TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
    - TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
    - CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
    - CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
    --UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/*)))
    -+UNIT_TESTS = $(sort $(filter-out %.h %.c %.o unit-tests/t-basic%,$(wildcard unit-tests/t-*)))
    - 
    - # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
    - # checks all tests in all scripts via a single invocation, so tell individual

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
@ 2023-10-09 22:21     ` Josh Steadmon
  2023-10-10  8:57       ` Oswald Buddenhagen
  2023-10-27 20:12       ` Christian Couder
  2023-10-09 22:21     ` [PATCH v8 2/3] unit tests: add TAP unit test framework Josh Steadmon
                       ` (4 subsequent siblings)
  5 siblings, 2 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-10-09 22:21 UTC (permalink / raw)
  To: git; +Cc: phillip.wood123, linusa, calvinwan, gitster, rsbecker

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
preliminary comparison of several different frameworks.

Co-authored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 220 +++++++++++++++++++++++++
 2 files changed, 221 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..b7a89cc838
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,220 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+For now, we will evaluate projects solely on their framework features. Since we
+are relying on having TAP output (see below), we can assume that any framework
+can be made to work with a harness that we can choose later.
+
+
+== Choosing a framework
+
+We believe the best option is to implement a custom TAP framework for the Git
+project. We use a version of the framework originally proposed in
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
+
+
+== Choosing a test harness
+
+During upstream discussion, it was occasionally noted that `prove` provides many
+convenient features, such as scheduling slower tests first, or re-running
+previously failed tests.
+
+While we already support the use of `prove` as a test harness for the shell
+tests, it is not strictly required. The t/Makefile allows running shell tests
+directly (though with interleaved output if parallelism is enabled). Git
+developers who wish to use `prove` as a more advanced harness can do so by
+setting DEFAULT_TEST_TARGET=prove in their config.mak.
+
+We will follow a similar approach for unit tests: by default the test
+executables will be run directly from the t/Makefile, but `prove` can be
+configured with DEFAULT_UNIT_TEST_TARGET=prove.
+
+
+== Framework selection
+
+There are a variety of features we can use to rank the candidate frameworks, and
+those features have different priorities:
+
+* Critical features: we probably won't consider a framework without these
+** Can we legally / easily use the project?
+*** <<license,License>>
+*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
+*** <<maintainable-extensible,Maintainable / extensible>>
+*** <<major-platform-support,Major platform support>>
+** Does the project support our bare-minimum needs?
+*** <<tap-support,TAP support>>
+*** <<diagnostic-output,Diagnostic output>>
+*** <<runtime-skippable-tests,Runtime-skippable tests>>
+* Nice-to-have features:
+** <<parallel-execution,Parallel execution>>
+** <<mock-support,Mock support>>
+** <<signal-error-handling,Signal & error-handling>>
+* Tie-breaker stats
+** <<project-kloc,Project KLOC>>
+** <<adoption,Adoption>>
+
+[[license]]
+=== License
+
+We must be able to legally use the framework in connection with Git. As Git is
+licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
+projects.
+
+[[vendorable-or-ubiquitous]]
+=== Vendorable or ubiquitous
+
+We want to avoid forcing Git developers to install new tools just to run unit
+tests. Any prospective frameworks and harnesses must either be vendorable
+(meaning, we can copy their source directly into Git's repository), or so
+ubiquitous that it is reasonable to expect that most developers will have the
+tools installed already.
+
+[[maintainable-extensible]]
+=== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+=== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[tap-support]]
+=== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+=== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[runtime-skippable-tests]]
+=== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[parallel-execution]]
+=== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. We assume that executable-level
+parallelism can be handled by the test harness.
+
+[[mock-support]]
+=== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+=== Signal & error handling
+
+The test framework should fail gracefully when test cases are themselves buggy
+or when they are interrupted by signals during runtime.
+
+[[project-kloc]]
+=== Project KLOC
+
+The size of the project, in thousands of lines of code as measured by
+https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
+1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+=== Adoption
+
+As a tie-breaker, we prefer a more widely-used project. We use the number of
+GitHub / GitLab stars to estimate this.
+
+
+=== Comparison
+
+[format="csv",options="header",width="33%"]
+|=====
+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
+https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
+https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
+|=====
+
+=== Additional framework candidates
+
+Several suggested frameworks have been eliminated from consideration:
+
+* Incompatible licenses:
+** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
+** https://cmocka.org/[cmocka] (Apache 2.0)
+* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
+* No TAP support:
+** https://nemequ.github.io/munit/[µnit]
+** https://github.com/google/cmockery[cmockery]
+** https://github.com/lpabon/cmockery2[cmockery2]
+** https://github.com/ThrowTheSwitch/Unity[Unity]
+** https://github.com/siu/minunit[minunit]
+** https://cunit.sourceforge.net/[CUnit]
+
+
+== Milestones
+
+* Add useful tests of library-like code
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v8 2/3] unit tests: add TAP unit test framework
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
  2023-10-09 22:21     ` [PATCH v8 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-10-09 22:21     ` Josh Steadmon
  2023-10-11 21:42       ` Junio C Hamano
                         ` (2 more replies)
  2023-10-09 22:21     ` [PATCH v8 3/3] ci: run unit tests in CI Josh Steadmon
                       ` (3 subsequent siblings)
  5 siblings, 3 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-10-09 22:21 UTC (permalink / raw)
  To: git; +Cc: phillip.wood123, linusa, calvinwan, gitster, rsbecker

From: Phillip Wood <phillip.wood@dunelm.org.uk>

This patch contains an implementation for writing unit tests with TAP
output. Each test is a function that contains one or more checks. The
test is run with the TEST() macro and if any of the checks fail then the
test will fail. A complete program that tests STRBUF_INIT would look
like

     #include "test-lib.h"
     #include "strbuf.h"

     static void t_static_init(void)
     {
             struct strbuf buf = STRBUF_INIT;

             check_uint(buf.len, ==, 0);
             check_uint(buf.alloc, ==, 0);
             check_char(buf.buf[0], ==, '\0');
     }

     int main(void)
     {
             TEST(t_static_init(), "static initialization works);

             return test_done();
     }

The output of this program would be

     ok 1 - static initialization works
     1..1

If any of the checks in a test fail then they print a diagnostic message
to aid debugging and the test will be reported as failing. For example a
failing integer check would look like

     # check "x >= 3" failed at my-test.c:102
     #    left: 2
     #   right: 3
     not ok 1 - x is greater than or equal to three

There are a number of check functions implemented so far. check() checks
a boolean condition, check_int(), check_uint() and check_char() take two
values to compare and a comparison operator. check_str() will check if
two strings are equal. Custom checks are simple to implement as shown in
the comments above test_assert() in test-lib.h.

Tests can be skipped with test_skip() which can be supplied with a
reason for skipping which it will print. Tests can print diagnostic
messages with test_msg().  Checks that are known to fail can be wrapped
in TEST_TODO().

There are a couple of example test programs included in this
patch. t-basic.c implements some self-tests and demonstrates the
diagnostic output for failing test. The output of this program is
checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
unit tests for strbuf.c

The unit tests will be built as part of the default "make all" target,
to avoid bitrot. If you wish to build just the unit tests, you can run
"make build-unit-tests". To run the tests, you can use "make unit-tests"
or run the test binaries directly, as in "./t/unit-tests/bin/t-strbuf".

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Makefile                    |  28 ++-
 t/Makefile                  |  15 +-
 t/t0080-unit-test-output.sh |  58 +++++++
 t/unit-tests/.gitignore     |   1 +
 t/unit-tests/t-basic.c      |  95 +++++++++++
 t/unit-tests/t-strbuf.c     | 120 +++++++++++++
 t/unit-tests/test-lib.c     | 329 ++++++++++++++++++++++++++++++++++++
 t/unit-tests/test-lib.h     | 143 ++++++++++++++++
 8 files changed, 785 insertions(+), 4 deletions(-)
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

diff --git a/Makefile b/Makefile
index e440728c24..18c13f06c0 100644
--- a/Makefile
+++ b/Makefile
@@ -682,6 +682,9 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests
+UNIT_TEST_BIN = $(UNIT_TEST_DIR)/bin
 
 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1331,6 +1334,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
 THIRD_PARTY_SOURCES += sha1collisiondetection/%
 THIRD_PARTY_SOURCES += sha1dc/%
 
+UNIT_TEST_PROGRAMS += t-basic
+UNIT_TEST_PROGRAMS += t-strbuf
+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_BIN)/%$X,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
+
 # xdiff and reftable libs may in turn depend on what is in libgit.a
 GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
 EXTLIBS =
@@ -2672,6 +2681,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(UNIT_TEST_OBJS)
 
 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3167,7 +3177,7 @@ endif
 
 test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
 
-all:: $(TEST_PROGRAMS) $(test_bindir_programs)
+all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
 
 bin-wrappers/%: wrap-for-bin.sh
 	$(call mkdir_p_parent_template)
@@ -3592,7 +3602,7 @@ endif
 
 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
-		$(MOFILES)
+		$(UNIT_TEST_PROGS) $(MOFILES)
 	$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
 		SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
 	test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3653,7 +3663,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3831,3 +3841,15 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
 
 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_BIN):
+	@mkdir -p $(UNIT_TEST_BIN)
+
+$(UNIT_TEST_PROGS): $(UNIT_TEST_BIN)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS $(UNIT_TEST_BIN)
+	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGS)
+unit-tests: $(UNIT_TEST_PROGS)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..75d9330437 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -17,6 +17,7 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+DEFAULT_UNIT_TEST_TARGET ?= unit-tests-raw
 TEST_LINT ?= test-lint
 
 ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(filter-out unit-tests/bin/t-basic%,$(wildcard unit-tests/bin/t-*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,17 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
+$(UNIT_TESTS):
+	@echo "*** $@ ***"; $@
+
+.PHONY: unit-tests unit-tests-raw unit-tests-prove
+unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
+
+unit-tests-raw: $(UNIT_TESTS)
+
+unit-tests-prove:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
@@ -149,4 +162,4 @@ perf:
 	$(MAKE) -C perf/ all
 
 .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
-	check-chainlint clean-chainlint test-chainlint
+	check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
new file mode 100755
index 0000000000..961b54b06c
--- /dev/null
+++ b/t/t0080-unit-test-output.sh
@@ -0,0 +1,58 @@
+#!/bin/sh
+
+test_description='Test the output of the unit test framework'
+
+. ./test-lib.sh
+
+test_expect_success 'TAP output from unit tests' '
+	cat >expect <<-EOF &&
+	ok 1 - passing test
+	ok 2 - passing test and assertion return 1
+	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
+	#    left: 1
+	#   right: 2
+	not ok 3 - failing test
+	ok 4 - failing test and assertion return 0
+	not ok 5 - passing TEST_TODO() # TODO
+	ok 6 - passing TEST_TODO() returns 1
+	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
+	not ok 7 - failing TEST_TODO()
+	ok 8 - failing TEST_TODO() returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:30
+	# skipping test - missing prerequisite
+	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
+	ok 9 - test_skip() # SKIP
+	ok 10 - skipped test returns 1
+	# skipping test - missing prerequisite
+	ok 11 - test_skip() inside TEST_TODO() # SKIP
+	ok 12 - test_skip() inside TEST_TODO() returns 1
+	# check "0" failed at t/unit-tests/t-basic.c:48
+	not ok 13 - TEST_TODO() after failing check
+	ok 14 - TEST_TODO() after failing check returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:56
+	not ok 15 - failing check after TEST_TODO()
+	ok 16 - failing check after TEST_TODO() returns 0
+	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
+	#    left: "\011hello\\\\"
+	#   right: "there\"\012"
+	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
+	#    left: "NULL"
+	#   right: NULL
+	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
+	#    left: ${SQ}a${SQ}
+	#   right: ${SQ}\012${SQ}
+	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
+	#    left: ${SQ}\\\\${SQ}
+	#   right: ${SQ}\\${SQ}${SQ}
+	not ok 17 - messages from failing string and char comparison
+	# BUG: test has no checks at t/unit-tests/t-basic.c:91
+	not ok 18 - test with no checks
+	ok 19 - test with no checks returns 0
+	1..19
+	EOF
+
+	! "$GIT_BUILD_DIR"/t/unit-tests/bin/t-basic >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
new file mode 100644
index 0000000000..5e56e040ec
--- /dev/null
+++ b/t/unit-tests/.gitignore
@@ -0,0 +1 @@
+/bin
diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
new file mode 100644
index 0000000000..fda1ae59a6
--- /dev/null
+++ b/t/unit-tests/t-basic.c
@@ -0,0 +1,95 @@
+#include "test-lib.h"
+
+/*
+ * The purpose of this "unit test" is to verify a few invariants of the unit
+ * test framework itself, as well as to provide examples of output from actually
+ * failing tests. As such, it is intended that this test fails, and thus it
+ * should not be run as part of `make unit-tests`. Instead, we verify it behaves
+ * as expected in the integration test t0080-unit-test-output.sh
+ */
+
+/* Used to store the return value of check_int(). */
+static int check_res;
+
+/* Used to store the return value of TEST(). */
+static int test_res;
+
+static void t_res(int expect)
+{
+	check_int(check_res, ==, expect);
+	check_int(test_res, ==, expect);
+}
+
+static void t_todo(int x)
+{
+	check_res = TEST_TODO(check(x));
+}
+
+static void t_skip(void)
+{
+	check(0);
+	test_skip("missing prerequisite");
+	check(1);
+}
+
+static int do_skip(void)
+{
+	test_skip("missing prerequisite");
+	return 1;
+}
+
+static void t_skip_todo(void)
+{
+	check_res = TEST_TODO(do_skip());
+}
+
+static void t_todo_after_fail(void)
+{
+	check(0);
+	TEST_TODO(check(0));
+}
+
+static void t_fail_after_todo(void)
+{
+	check(1);
+	TEST_TODO(check(0));
+	check(0);
+}
+
+static void t_messages(void)
+{
+	check_str("\thello\\", "there\"\n");
+	check_str("NULL", NULL);
+	check_char('a', ==, '\n');
+	check_char('\\', ==, '\'');
+}
+
+static void t_empty(void)
+{
+	; /* empty */
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
+	TEST(t_res(1), "passing test and assertion return 1");
+	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
+	TEST(t_res(0), "failing test and assertion return 0");
+	test_res = TEST(t_todo(0), "passing TEST_TODO()");
+	TEST(t_res(1), "passing TEST_TODO() returns 1");
+	test_res = TEST(t_todo(1), "failing TEST_TODO()");
+	TEST(t_res(0), "failing TEST_TODO() returns 0");
+	test_res = TEST(t_skip(), "test_skip()");
+	TEST(check_int(test_res, ==, 1), "skipped test returns 1");
+	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
+	TEST(t_res(1), "test_skip() inside TEST_TODO() returns 1");
+	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
+	TEST(check_int(test_res, ==, 0), "TEST_TODO() after failing check returns 0");
+	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
+	TEST(check_int(test_res, ==, 0), "failing check after TEST_TODO() returns 0");
+	TEST(t_messages(), "messages from failing string and char comparison");
+	test_res = TEST(t_empty(), "test with no checks");
+	TEST(check_int(test_res, ==, 0), "test with no checks returns 0");
+
+	return test_done();
+}
diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
new file mode 100644
index 0000000000..c2fcb0cbd6
--- /dev/null
+++ b/t/unit-tests/t-strbuf.c
@@ -0,0 +1,120 @@
+#include "test-lib.h"
+#include "strbuf.h"
+
+/* wrapper that supplies tests with an empty, initialized strbuf */
+static void setup(void (*f)(struct strbuf*, void*), void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+}
+
+/* wrapper that supplies tests with a populated, initialized strbuf */
+static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	strbuf_addstr(&buf, init_str);
+	check_uint(buf.len, ==, strlen(init_str));
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+}
+
+static int assert_sane_strbuf(struct strbuf *buf)
+{
+	/* Initialized strbufs should always have a non-NULL buffer */
+	if (buf->buf == NULL)
+		return 0;
+	/* Buffers should always be NUL-terminated */
+	if (buf->buf[buf->len] != '\0')
+		return 0;
+	/*
+	 * Freshly-initialized strbufs may not have a dynamically allocated
+	 * buffer
+	 */
+	if (buf->len == 0 && buf->alloc == 0)
+		return 1;
+	/* alloc must be at least one byte larger than len */
+	return buf->len + 1 <= buf->alloc;
+}
+
+static void t_static_init(void)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_dynamic_init(void)
+{
+	struct strbuf buf;
+
+	strbuf_init(&buf, 1024);
+	check(assert_sane_strbuf(&buf));
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, >=, 1024);
+	check_char(buf.buf[0], ==, '\0');
+	strbuf_release(&buf);
+}
+
+static void t_addch(struct strbuf *buf, void *data)
+{
+	const char *p_ch = data;
+	const char ch = *p_ch;
+	size_t orig_alloc = buf->alloc;
+	size_t orig_len = buf->len;
+
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	strbuf_addch(buf, ch);
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	if (!(check_uint(buf->len, ==, orig_len + 1) &&
+	      check_uint(buf->alloc, >=, orig_alloc)))
+		return; /* avoid de-referencing buf->buf */
+	check_char(buf->buf[buf->len - 1], ==, ch);
+	check_char(buf->buf[buf->len], ==, '\0');
+}
+
+static void t_addstr(struct strbuf *buf, void *data)
+{
+	const char *text = data;
+	size_t len = strlen(text);
+	size_t orig_alloc = buf->alloc;
+	size_t orig_len = buf->len;
+
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	strbuf_addstr(buf, text);
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	if (!(check_uint(buf->len, ==, orig_len + len) &&
+	      check_uint(buf->alloc, >=, orig_alloc) &&
+	      check_uint(buf->alloc, >, orig_len + len) &&
+	      check_char(buf->buf[orig_len + len], ==, '\0')))
+	    return;
+	check_str(buf->buf + orig_len, text);
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	if (!TEST(t_static_init(), "static initialization works"))
+		test_skip_all("STRBUF_INIT is broken");
+	TEST(t_dynamic_init(), "dynamic initialization works");
+	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
+	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
+	TEST(setup_populated(t_addch, "initial value", "a"),
+	     "strbuf_addch appends to initial value");
+	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
+	TEST(setup_populated(t_addstr, "initial value", "hello there"),
+	     "strbuf_addstr appends string to initial value");
+
+	return test_done();
+}
diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
new file mode 100644
index 0000000000..b20f543121
--- /dev/null
+++ b/t/unit-tests/test-lib.c
@@ -0,0 +1,329 @@
+#include "test-lib.h"
+
+enum result {
+	RESULT_NONE,
+	RESULT_FAILURE,
+	RESULT_SKIP,
+	RESULT_SUCCESS,
+	RESULT_TODO
+};
+
+static struct {
+	enum result result;
+	int count;
+	unsigned failed :1;
+	unsigned lazy_plan :1;
+	unsigned running :1;
+	unsigned skip_all :1;
+	unsigned todo :1;
+} ctx = {
+	.lazy_plan = 1,
+	.result = RESULT_NONE,
+};
+
+static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
+{
+	fflush(stderr);
+	if (prefix)
+		fprintf(stdout, "%s", prefix);
+	vprintf(format, ap); /* TODO: handle newlines */
+	putc('\n', stdout);
+	fflush(stdout);
+}
+
+void test_msg(const char *format, ...)
+{
+	va_list ap;
+
+	va_start(ap, format);
+	msg_with_prefix("# ", format, ap);
+	va_end(ap);
+}
+
+void test_plan(int count)
+{
+	assert(!ctx.running);
+
+	fflush(stderr);
+	printf("1..%d\n", count);
+	fflush(stdout);
+	ctx.lazy_plan = 0;
+}
+
+int test_done(void)
+{
+	assert(!ctx.running);
+
+	if (ctx.lazy_plan)
+		test_plan(ctx.count);
+
+	return ctx.failed;
+}
+
+void test_skip(const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	if (format)
+		msg_with_prefix("# skipping test - ", format, ap);
+	va_end(ap);
+}
+
+void test_skip_all(const char *format, ...)
+{
+	va_list ap;
+	const char *prefix;
+
+	if (!ctx.count && ctx.lazy_plan) {
+		/* We have not printed a test plan yet */
+		prefix = "1..0 # SKIP ";
+		ctx.lazy_plan = 0;
+	} else {
+		/* We have already printed a test plan */
+		prefix = "Bail out! # ";
+		ctx.failed = 1;
+	}
+	ctx.skip_all = 1;
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	msg_with_prefix(prefix, format, ap);
+	va_end(ap);
+}
+
+int test__run_begin(void)
+{
+	assert(!ctx.running);
+
+	ctx.count++;
+	ctx.result = RESULT_NONE;
+	ctx.running = 1;
+
+	return ctx.skip_all;
+}
+
+static void print_description(const char *format, va_list ap)
+{
+	if (format) {
+		fputs(" - ", stdout);
+		vprintf(format, ap);
+	}
+}
+
+int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	fflush(stderr);
+	va_start(ap, format);
+	if (!ctx.skip_all) {
+		switch (ctx.result) {
+		case RESULT_SUCCESS:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_FAILURE:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_TODO:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # TODO");
+			break;
+
+		case RESULT_SKIP:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # SKIP");
+			break;
+
+		case RESULT_NONE:
+			test_msg("BUG: test has no checks at %s", location);
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			ctx.result = RESULT_FAILURE;
+			break;
+		}
+	}
+	va_end(ap);
+	ctx.running = 0;
+	if (ctx.skip_all)
+		return 1;
+	putc('\n', stdout);
+	fflush(stdout);
+	ctx.failed |= ctx.result == RESULT_FAILURE;
+
+	return ctx.result != RESULT_FAILURE;
+}
+
+static void test_fail(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	ctx.result = RESULT_FAILURE;
+}
+
+static void test_pass(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result == RESULT_NONE)
+		ctx.result = RESULT_SUCCESS;
+}
+
+static void test_todo(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result != RESULT_FAILURE)
+		ctx.result = RESULT_TODO;
+}
+
+int test_assert(const char *location, const char *check, int ok)
+{
+	assert(ctx.running);
+
+	if (ctx.result == RESULT_SKIP) {
+		test_msg("skipping check '%s' at %s", check, location);
+		return 1;
+	} else if (!ctx.todo) {
+		if (ok) {
+			test_pass();
+		} else {
+			test_msg("check \"%s\" failed at %s", check, location);
+			test_fail();
+		}
+	}
+
+	return !!ok;
+}
+
+void test__todo_begin(void)
+{
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	ctx.todo = 1;
+}
+
+int test__todo_end(const char *location, const char *check, int res)
+{
+	assert(ctx.running);
+	assert(ctx.todo);
+
+	ctx.todo = 0;
+	if (ctx.result == RESULT_SKIP)
+		return 1;
+	if (res) {
+		test_msg("todo check '%s' succeeded at %s", check, location);
+		test_fail();
+	} else {
+		test_todo();
+	}
+
+	return !res;
+}
+
+int check_bool_loc(const char *loc, const char *check, int ok)
+{
+	return test_assert(loc, check, ok);
+}
+
+union test__tmp test__tmp[2];
+
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		test_msg("   left: %"PRIdMAX, a);
+		test_msg("  right: %"PRIdMAX, b);
+	}
+
+	return ret;
+}
+
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		test_msg("   left: %"PRIuMAX, a);
+		test_msg("  right: %"PRIuMAX, b);
+	}
+
+	return ret;
+}
+
+static void print_one_char(char ch, char quote)
+{
+	if ((unsigned char)ch < 0x20u || ch == 0x7f) {
+		/* TODO: improve handling of \a, \b, \f ... */
+		printf("\\%03o", (unsigned char)ch);
+	} else {
+		if (ch == '\\' || ch == quote)
+			putc('\\', stdout);
+		putc(ch, stdout);
+	}
+}
+
+static void print_char(const char *prefix, char ch)
+{
+	printf("# %s: '", prefix);
+	print_one_char(ch, '\'');
+	fputs("'\n", stdout);
+}
+
+int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		fflush(stderr);
+		print_char("   left", a);
+		print_char("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
+
+static void print_str(const char *prefix, const char *str)
+{
+	printf("# %s: ", prefix);
+	if (!str) {
+		fputs("NULL\n", stdout);
+	} else {
+		putc('"', stdout);
+		while (*str)
+			print_one_char(*str++, '"');
+		fputs("\"\n", stdout);
+	}
+}
+
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b)
+{
+	int ok = (!a && !b) || (a && b && !strcmp(a, b));
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		fflush(stderr);
+		print_str("   left", a);
+		print_str("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
new file mode 100644
index 0000000000..8df3804914
--- /dev/null
+++ b/t/unit-tests/test-lib.h
@@ -0,0 +1,143 @@
+#ifndef TEST_LIB_H
+#define TEST_LIB_H
+
+#include "git-compat-util.h"
+
+/*
+ * Run a test function, returns 1 if the test succeeds, 0 if it
+ * fails. If test_skip_all() has been called then the test will not be
+ * run. The description for each test should be unique. For example:
+ *
+ *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
+ */
+#define TEST(t, ...)					\
+	test__run_end(test__run_begin() ? 0 : (t, 1),	\
+		      TEST_LOCATION(),  __VA_ARGS__)
+
+/*
+ * Print a test plan, should be called before any tests. If the number
+ * of tests is not known in advance test_done() will automatically
+ * print a plan at the end of the test program.
+ */
+void test_plan(int count);
+
+/*
+ * test_done() must be called at the end of main(). It will print the
+ * plan if plan() was not called at the beginning of the test program
+ * and returns the exit code for the test program.
+ */
+int test_done(void);
+
+/* Skip the current test. */
+__attribute__((format (printf, 1, 2)))
+void test_skip(const char *format, ...);
+
+/* Skip all remaining tests. */
+__attribute__((format (printf, 1, 2)))
+void test_skip_all(const char *format, ...);
+
+/* Print a diagnostic message to stdout. */
+__attribute__((format (printf, 1, 2)))
+void test_msg(const char *format, ...);
+
+/*
+ * Test checks are built around test_assert(). checks return 1 on
+ * success, 0 on failure. If any check fails then the test will
+ * fail. To create a custom check define a function that wraps
+ * test_assert() and a macro to wrap that function. For example:
+ *
+ *  static int check_oid_loc(const char *loc, const char *check,
+ *			     struct object_id *a, struct object_id *b)
+ *  {
+ *	    int res = test_assert(loc, check, oideq(a, b));
+ *
+ *	    if (res) {
+ *		    test_msg("   left: %s", oid_to_hex(a);
+ *		    test_msg("  right: %s", oid_to_hex(a);
+ *
+ *	    }
+ *	    return res;
+ *  }
+ *
+ *  #define check_oid(a, b) \
+ *	    check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
+ */
+int test_assert(const char *location, const char *check, int ok);
+
+/* Helper macro to pass the location to checks */
+#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
+
+/* Check a boolean condition. */
+#define check(x)				\
+	check_bool_loc(TEST_LOCATION(), #x, x)
+int check_bool_loc(const char *loc, const char *check, int ok);
+
+/*
+ * Compare two integers. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_int(a, op, b)						\
+	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
+	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+		       test__tmp[0].i op test__tmp[1].i, a, b))
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b);
+
+/*
+ * Compare two unsigned integers. Prints a message with the two values
+ * if the comparison fails. NB this is not thread safe.
+ */
+#define check_uint(a, op, b)						\
+	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
+	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].u op test__tmp[1].u, a, b))
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b);
+
+/*
+ * Compare two chars. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_char(a, op, b)						\
+	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
+	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].c op test__tmp[1].c, a, b))
+int check_char_loc(const char *loc, const char *check, int ok,
+		   char a, char b);
+
+/* Check whether two strings are equal. */
+#define check_str(a, b)							\
+	check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b);
+
+/*
+ * Wrap a check that is known to fail. If the check succeeds then the
+ * test will fail. Returns 1 if the check fails, 0 if it
+ * succeeds. For example:
+ *
+ *  TEST_TODO(check(0));
+ */
+#define TEST_TODO(check) \
+	(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
+
+/* Private helpers */
+
+#define TEST__STR(x) #x
+#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
+
+union test__tmp {
+	intmax_t i;
+	uintmax_t u;
+	char c;
+};
+
+extern union test__tmp test__tmp[2];
+
+int test__run_begin(void);
+__attribute__((format (printf, 3, 4)))
+int test__run_end(int, const char *, const char *, ...);
+void test__todo_begin(void);
+int test__todo_end(const char *, const char *, int);
+
+#endif /* TEST_LIB_H */
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v8 3/3] ci: run unit tests in CI
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
  2023-10-09 22:21     ` [PATCH v8 1/3] unit tests: Add a project plan document Josh Steadmon
  2023-10-09 22:21     ` [PATCH v8 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-10-09 22:21     ` Josh Steadmon
  2023-10-09 23:50     ` [PATCH v8 0/3] Add unit test framework and project plan Junio C Hamano
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-10-09 22:21 UTC (permalink / raw)
  To: git; +Cc: phillip.wood123, linusa, calvinwan, gitster, rsbecker

Run unit tests in both Cirrus and GitHub CI. For sharded CI instances
(currently just Windows on GitHub), run only on the first shard. This is
OK while we have only a single unit test executable, but we may wish to
distribute tests more evenly when we add new unit tests in the future.

We may also want to add more status output in our unit test framework,
so that we can do similar post-processing as in
ci/lib.sh:handle_failed_tests().

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 .cirrus.yml               | 2 +-
 ci/run-build-and-tests.sh | 2 ++
 ci/run-test-slice.sh      | 5 +++++
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 4860bebd32..b6280692d2 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -19,4 +19,4 @@ freebsd_12_task:
   build_script:
     - su git -c gmake
   test_script:
-    - su git -c 'gmake test'
+    - su git -c 'gmake DEFAULT_UNIT_TEST_TARGET=unit-tests-prove test unit-tests'
diff --git a/ci/run-build-and-tests.sh b/ci/run-build-and-tests.sh
index 2528f25e31..7a1466b868 100755
--- a/ci/run-build-and-tests.sh
+++ b/ci/run-build-and-tests.sh
@@ -50,6 +50,8 @@ if test -n "$run_tests"
 then
 	group "Run tests" make test ||
 	handle_failed_tests
+	group "Run unit tests" \
+		make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests
 fi
 check_unignored_build_artifacts
 
diff --git a/ci/run-test-slice.sh b/ci/run-test-slice.sh
index a3c67956a8..ae8094382f 100755
--- a/ci/run-test-slice.sh
+++ b/ci/run-test-slice.sh
@@ -15,4 +15,9 @@ group "Run tests" make --quiet -C t T="$(cd t &&
 	tr '\n' ' ')" ||
 handle_failed_tests
 
+# We only have one unit test at the moment, so run it in the first slice
+if [ "$1" == "0" ] ; then
+	group "Run unit tests" make --quiet -C t unit-tests-prove
+fi
+
 check_unignored_build_artifacts
-- 
2.42.0.609.gbb76f46606-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 0/3] Add unit test framework and project plan
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
                       ` (2 preceding siblings ...)
  2023-10-09 22:21     ` [PATCH v8 3/3] ci: run unit tests in CI Josh Steadmon
@ 2023-10-09 23:50     ` Junio C Hamano
  2023-10-19 15:21       ` [PATCH 0/3] CMake unit test fixups Phillip Wood
  2023-10-16 10:07     ` [PATCH v8 0/3] Add unit test framework and project plan phillip.wood123
  2023-10-27 20:26     ` Christian Couder
  5 siblings, 1 reply; 67+ messages in thread
From: Junio C Hamano @ 2023-10-09 23:50 UTC (permalink / raw)
  To: Josh Steadmon, Johannes Schindelin
  Cc: git, phillip.wood123, linusa, calvinwan, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> Changes in v8:
> - Flipped return values for TEST, TEST_TODO, and check_* macros &
>   functions. This makes it easier to reason about control flow for
>   patterns like:
>     if (check(some_condition)) { ... }
> - Moved unit test binaries to t/unit-tests/bin to simplify .gitignore
>   patterns.
> - Removed testing of some strbuf implementation details in t-strbuf.c
>
>
> Josh Steadmon (2):
>   unit tests: Add a project plan document
>   ci: run unit tests in CI
>
> Phillip Wood (1):
>   unit tests: add TAP unit test framework

Thank you, all.

The other topic to adjust for cmake by Dscho builds on this topic,
and it needs to be rebased on this updated round.  I think I did so
correctly, but because I use neither cmake or Windows, the result is
not even compile tested.  Sanity checking the result is very much
appreciated when I push out the result of today's integration cycle.

$ git log --oneline --first-parent --decorate master..js/doc-unit-tests-with-cmake
d0773c1331 (js/doc-unit-tests-with-cmake) cmake: handle also unit tests
192de6de57 cmake: use test names instead of full paths
e1d97bc4df cmake: fix typo in variable name
e07499b8a7 artifacts-tar: when including `.dll` files, don't forget the unit-tests
11264f8f42 unit-tests: do show relative file paths
6c76c5b32d unit-tests: do not mistake `.pdb` files for being executable
295af2ef26 cmake: also build unit tests
31c2361349 (js/doc-unit-tests) ci: run unit tests in CI
3a47942530 unit tests: add TAP unit test framework
eeea7d763a unit tests: add a project plan document


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-09 22:21     ` [PATCH v8 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-10-10  8:57       ` Oswald Buddenhagen
  2023-10-11 21:14         ` Josh Steadmon
  2023-10-27 20:12       ` Christian Couder
  1 sibling, 1 reply; 67+ messages in thread
From: Oswald Buddenhagen @ 2023-10-10  8:57 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On Mon, Oct 09, 2023 at 03:21:20PM -0700, Josh Steadmon wrote:
>+=== Comparison
>+
>+[format="csv",options="header",width="33%"]
>+|=====
>+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
>
the redundancy seems unnecessary; asciidoc should automatically use each 
target's section title as the xreflabel.

>+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
>+https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
>+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
>+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
>+https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
>+|=====
>+
i find this totally unreadable in its raw form.
consider user-defined document-attributes for specific cell contents.
externalizing the urls would probably help as well (i'm not sure how to 
do that best).

regards

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-10  8:57       ` Oswald Buddenhagen
@ 2023-10-11 21:14         ` Josh Steadmon
  2023-10-11 23:05           ` Oswald Buddenhagen
  0 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-10-11 21:14 UTC (permalink / raw)
  To: Oswald Buddenhagen
  Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On 2023.10.10 10:57, Oswald Buddenhagen wrote:
> On Mon, Oct 09, 2023 at 03:21:20PM -0700, Josh Steadmon wrote:
> > +=== Comparison
> > +
> > +[format="csv",options="header",width="33%"]
> > +|=====
> > +Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
> > 
> the redundancy seems unnecessary; asciidoc should automatically use each
> target's section title as the xreflabel.

Hmm, this doesn't seem to work for me. It only renders as
"[anchor-label]".


> > +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
> > +https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
> > +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
> > +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
> > +https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
> > +|=====
> > +
> i find this totally unreadable in its raw form.
> consider user-defined document-attributes for specific cell contents.
> externalizing the urls would probably help as well (i'm not sure how to do
> that best).

Ah yeah, user-defined attributes definitely cleans this up quite a bit.
Thanks for the tip!

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2/3] unit tests: add TAP unit test framework
  2023-10-09 22:21     ` [PATCH v8 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-10-11 21:42       ` Junio C Hamano
  2023-10-16 13:43       ` [PATCH v8 2.5/3] fixup! " Phillip Wood
  2023-10-27 20:15       ` [PATCH v8 2/3] " Christian Couder
  2 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-10-11 21:42 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, phillip.wood123, linusa, calvinwan, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> +	/* Initialized strbufs should always have a non-NULL buffer */
> +	if (buf->buf == NULL)
> +		return 0;

This upsets Coccinelle (equals-null).  I'll queue this on top for
now to work around CI breakage.

Thanks.

----- >8 -----
Subject: [PATCH] SQUASH???

Coccinelle suggested style fix.
---
 t/unit-tests/t-strbuf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
index c2fcb0cbd6..8388442426 100644
--- a/t/unit-tests/t-strbuf.c
+++ b/t/unit-tests/t-strbuf.c
@@ -28,7 +28,7 @@ static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, vo
 static int assert_sane_strbuf(struct strbuf *buf)
 {
 	/* Initialized strbufs should always have a non-NULL buffer */
-	if (buf->buf == NULL)
+	if (!buf->buf)
 		return 0;
 	/* Buffers should always be NUL-terminated */
 	if (buf->buf[buf->len] != '\0')
-- 
2.42.0-345-gaab89be2eb


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-11 21:14         ` Josh Steadmon
@ 2023-10-11 23:05           ` Oswald Buddenhagen
  2023-11-01 17:31             ` Josh Steadmon
  0 siblings, 1 reply; 67+ messages in thread
From: Oswald Buddenhagen @ 2023-10-11 23:05 UTC (permalink / raw)
  To: Josh Steadmon, git, phillip.wood123, linusa, calvinwan, gitster,
	rsbecker

On Wed, Oct 11, 2023 at 02:14:03PM -0700, Josh Steadmon wrote:
>On 2023.10.10 10:57, Oswald Buddenhagen wrote:
>> On Mon, Oct 09, 2023 at 03:21:20PM -0700, Josh Steadmon wrote:
>> > +=== Comparison
>> > +
>> > +[format="csv",options="header",width="33%"]
>> > +|=====
>> > +Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
>> > 
>> the redundancy seems unnecessary; asciidoc should automatically use each
>> target's section title as the xreflabel.
>
>Hmm, this doesn't seem to work for me. It only renders as
>"[anchor-label]".
>
i thought
https://docs.asciidoctor.org/asciidoc/latest/attributes/id/#customize-automatic-xreftext 
is pretty clear about it, though. maybe the actual tooling uses an older 
version of the spec? or is buggy? or the placement of the titles is 
incorrect? or this applies to different links or targets only? or am i 
misreading something? or ...?

regards

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 0/3] Add unit test framework and project plan
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
                       ` (3 preceding siblings ...)
  2023-10-09 23:50     ` [PATCH v8 0/3] Add unit test framework and project plan Junio C Hamano
@ 2023-10-16 10:07     ` phillip.wood123
  2023-11-01 23:09       ` Josh Steadmon
  2023-10-27 20:26     ` Christian Couder
  5 siblings, 1 reply; 67+ messages in thread
From: phillip.wood123 @ 2023-10-16 10:07 UTC (permalink / raw)
  To: Josh Steadmon, git; +Cc: linusa, calvinwan, gitster, rsbecker

Hi Josh

Thanks for the update

On 09/10/2023 23:21, Josh Steadmon wrote:
> In addition to reviewing the patches in this series, reviewers can help
> this series progress by chiming in on these remaining TODOs:
> - Figure out if we should split t-basic.c into multiple meta-tests, to
>    avoid merge conflicts and changes to expected text in
>    t0080-unit-test-output.sh.

I think it depends on how many new tests we think we're going to want to 
add here. I can see us adding a few more check_* macros (comparing 
object ids and arrays of bytes spring to mind) and wanting to test them 
here, but (perhaps naïvely) I don't expect huge amounts of churn here.

> - Figure out if we should de-duplicate assertions in t-strbuf.c at the
>    cost of making tests less self-contained and diagnostic output less
>    helpful.

In principle we could pass the location information along to any helper 
function, I'm not sure how easy that is at the moment. We can get 
reasonable error messages by using the check*() macros in the helper and 
wrapping the call to the helper with check() as well. For example

static int assert_sane_strbuf(struct strbuf *buf)
{
	/* Initialized strbufs should always have a non-NULL buffer */
	if (!check(!!buf->buf))
		return 0;
	/* Buffers should always be NUL-terminated */
	if (!check_char(buf->buf[buf->len], ==, '\0'))
		return 0;
	/*
	 * Freshly-initialized strbufs may not have a dynamically allocated
	 * buffer
	 */
	if (buf->len == 0 && buf->alloc == 0)
		return 1;
	/* alloc must be at least one byte larger than len */
	return check_uint(buf->len, <, buf->alloc);
}

and in the test function call it as

	check(assert_sane_strbuf(buf));

which gives error messages like

# check "buf->len < buf->alloc" failed at t/unit-tests/t-strbuf.c:43
#    left: 5
#   right: 0
# check "assert_sane_strbuf(&buf)" failed at t/unit-tests/t-strbuf.c:60

So we can see where assert_sane_strbuf() was called and which assertion 
in assert_sane_strbuf() failed.

> - Figure out if we should collect unit tests statistics similar to the
>    "counts" files for shell tests

Unless someone has an immediate need for that I'd be tempted to leave it 
wait until someone requests that data.

> - Decide if it's OK to wait on sharding unit tests across "sliced" CI
>    instances

Hopefully the unit tests will run fast enough that we don't need to 
worry about that in the early stages.

> - Provide guidelines for writing new unit tests

This is not a comprehensive list but we should recommend that

- tests avoid leaking resources so the leak sanitizer see if the code
   being tested has a resource leak.

- tests check that pointers are not NULL before deferencing them to
   avoid the whole program being taken down with SIGSEGV.

- tests are written with easy debugging in mind - i.e. good diagnostic
   messages. Hopefully the check* macros make that easy to do.

> Changes in v8:
> - Flipped return values for TEST, TEST_TODO, and check_* macros &
>    functions. This makes it easier to reason about control flow for
>    patterns like:
>      if (check(some_condition)) { ... } > - Moved unit test binaries to t/unit-tests/bin to simplify .gitignore
>    patterns.

Thanks for the updates to the test library, the range diff looks good to me.

 > - Removed testing of some strbuf implementation details in t-strbuf.c

I agree that makes sense. I think it would be good to update 
assert_sane_strbuf() to use the check* macros as suggest above.

Best Wishes

Phillip

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v8 2.5/3] fixup! unit tests: add TAP unit test framework
  2023-10-09 22:21     ` [PATCH v8 2/3] unit tests: add TAP unit test framework Josh Steadmon
  2023-10-11 21:42       ` Junio C Hamano
@ 2023-10-16 13:43       ` Phillip Wood
  2023-10-16 16:41         ` Junio C Hamano
  2023-11-01 17:54         ` Josh Steadmon
  2023-10-27 20:15       ` [PATCH v8 2/3] " Christian Couder
  2 siblings, 2 replies; 67+ messages in thread
From: Phillip Wood @ 2023-10-16 13:43 UTC (permalink / raw)
  To: steadmon; +Cc: calvinwan, git, gitster, linusa, phillip.wood123, rsbecker

From: Phillip Wood <phillip.wood@dunelm.org.uk>

Here are a couple of cleanups for the unit test framework that I
noticed.

Update the documentation of the example custom check to reflect the
change in return value of test_assert() and mention that
checks should be careful when dereferencing pointer arguments.

Also avoid evaluating macro augments twice in check_int() and
friends. The global variable test__tmp was introduced to avoid
evaluating the arguments to these macros more than once but the macros
failed to use it when passing the values being compared to
check_int_loc().

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
---
 t/unit-tests/test-lib.h | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
index 8df3804914..a8f07ae0b7 100644
--- a/t/unit-tests/test-lib.h
+++ b/t/unit-tests/test-lib.h
@@ -42,18 +42,21 @@ void test_msg(const char *format, ...);
 
 /*
  * Test checks are built around test_assert(). checks return 1 on
- * success, 0 on failure. If any check fails then the test will
- * fail. To create a custom check define a function that wraps
- * test_assert() and a macro to wrap that function. For example:
+ * success, 0 on failure. If any check fails then the test will fail. To
+ * create a custom check define a function that wraps test_assert() and
+ * a macro to wrap that function to provide a source location and
+ * stringified arguments. Custom checks that take pointer arguments
+ * should be careful to check that they are non-NULL before
+ * dereferencing them. For example:
  *
  *  static int check_oid_loc(const char *loc, const char *check,
  *			     struct object_id *a, struct object_id *b)
  *  {
- *	    int res = test_assert(loc, check, oideq(a, b));
+ *	    int res = test_assert(loc, check, a && b && oideq(a, b));
  *
- *	    if (res) {
- *		    test_msg("   left: %s", oid_to_hex(a);
- *		    test_msg("  right: %s", oid_to_hex(a);
+ *	    if (!res) {
+ *		    test_msg("   left: %s", a ? oid_to_hex(a) : "NULL";
+ *		    test_msg("  right: %s", b ? oid_to_hex(a) : "NULL";
  *
  *	    }
  *	    return res;
@@ -79,7 +82,8 @@ int check_bool_loc(const char *loc, const char *check, int ok);
 #define check_int(a, op, b)						\
 	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
 	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
-		       test__tmp[0].i op test__tmp[1].i, a, b))
+		       test__tmp[0].i op test__tmp[1].i,		\
+		       test__tmp[0].i, test__tmp[1].i))
 int check_int_loc(const char *loc, const char *check, int ok,
 		  intmax_t a, intmax_t b);
 
@@ -90,7 +94,8 @@ int check_int_loc(const char *loc, const char *check, int ok,
 #define check_uint(a, op, b)						\
 	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
 	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
-			test__tmp[0].u op test__tmp[1].u, a, b))
+			test__tmp[0].u op test__tmp[1].u,		\
+			test__tmp[0].u, test__tmp[1].u))
 int check_uint_loc(const char *loc, const char *check, int ok,
 		   uintmax_t a, uintmax_t b);
 
@@ -101,7 +106,8 @@ int check_uint_loc(const char *loc, const char *check, int ok,
 #define check_char(a, op, b)						\
 	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
 	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
-			test__tmp[0].c op test__tmp[1].c, a, b))
+			test__tmp[0].c op test__tmp[1].c,		\
+			test__tmp[0].c, test__tmp[1].c))
 int check_char_loc(const char *loc, const char *check, int ok,
 		   char a, char b);
 
-- 
2.42.0.506.g0dd4464cfd3


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2.5/3] fixup! unit tests: add TAP unit test framework
  2023-10-16 13:43       ` [PATCH v8 2.5/3] fixup! " Phillip Wood
@ 2023-10-16 16:41         ` Junio C Hamano
  2023-11-01 17:54           ` Josh Steadmon
  2023-11-01 17:54         ` Josh Steadmon
  1 sibling, 1 reply; 67+ messages in thread
From: Junio C Hamano @ 2023-10-16 16:41 UTC (permalink / raw)
  To: steadmon; +Cc: Phillip Wood, calvinwan, git, linusa, rsbecker

Phillip Wood <phillip.wood123@gmail.com> writes:

> From: Phillip Wood <phillip.wood@dunelm.org.uk>
>
> Here are a couple of cleanups for the unit test framework that I
> noticed.

Thanks.  I trust that this will be squashed into the next update,
but in the meantime, I'll include it in the copy of the series I
have (without squashing).  Here is another one I noticed.

----- >8 --------- >8 --------- >8 -----
Subject: [PATCH] fixup! ci: run unit tests in CI

A CI job failed due to contrib/coccinelle/equals-null.cocci
and suggested this change, which seems sensible.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
 t/unit-tests/t-strbuf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
index c2fcb0cbd6..8388442426 100644
--- a/t/unit-tests/t-strbuf.c
+++ b/t/unit-tests/t-strbuf.c
@@ -28,7 +28,7 @@ static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, vo
 static int assert_sane_strbuf(struct strbuf *buf)
 {
 	/* Initialized strbufs should always have a non-NULL buffer */
-	if (buf->buf == NULL)
+	if (!buf->buf)
 		return 0;
 	/* Buffers should always be NUL-terminated */
 	if (buf->buf[buf->len] != '\0')
-- 
2.42.0-398-ga9ecda2788


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH 0/3] CMake unit test fixups
  2023-10-09 23:50     ` [PATCH v8 0/3] Add unit test framework and project plan Junio C Hamano
@ 2023-10-19 15:21       ` Phillip Wood
  2023-10-19 15:21         ` [PATCH 1/3] fixup! cmake: also build unit tests Phillip Wood
                           ` (3 more replies)
  0 siblings, 4 replies; 67+ messages in thread
From: Phillip Wood @ 2023-10-19 15:21 UTC (permalink / raw)
  To: gitster
  Cc: calvinwan, git, johannes.schindelin, linusa, phillip.wood123,
	rsbecker, steadmon

From: Phillip Wood <phillip.wood@dunelm.org.uk>

Hi Junio

> The other topic to adjust for cmake by Dscho builds on this topic,
> and it needs to be rebased on this updated round.  I think I did so
> correctly, but because I use neither cmake or Windows, the result is
> not even compile tested.  Sanity checking the result is very much
> appreciated when I push out the result of today's integration cycle.

I need these fixups to get our CI to successfully build an run the
unit tests using CMake & MSVC. They are all adjusting paths now that
the unit test programs are built in t/unit-tests/bin

You can see the unit tests passing at
https://github.com/phillipwood/git/actions/runs/6575606322/job/17863460719
Note that I have split up the patches since that run but the changes
are the same.

Best Wishes

Phillip


Phillip Wood (3):
  fixup! cmake: also build unit tests
  fixup! artifacts-tar: when including `.dll` files, don't forget the
    unit-tests
  fixup! cmake: handle also unit tests

 Makefile                            | 2 +-
 contrib/buildsystems/CMakeLists.txt | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

-- 
2.42.0.506.g0dd4464cfd3


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH 1/3] fixup! cmake: also build unit tests
  2023-10-19 15:21       ` [PATCH 0/3] CMake unit test fixups Phillip Wood
@ 2023-10-19 15:21         ` Phillip Wood
  2023-10-19 15:21         ` [PATCH 2/3] fixup! artifacts-tar: when including `.dll` files, don't forget the unit-tests Phillip Wood
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 67+ messages in thread
From: Phillip Wood @ 2023-10-19 15:21 UTC (permalink / raw)
  To: gitster
  Cc: calvinwan, git, johannes.schindelin, linusa, phillip.wood123,
	rsbecker, steadmon

From: Phillip Wood <phillip.wood@dunelm.org.uk>

---
 contrib/buildsystems/CMakeLists.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index d21835ca65..20f38e94c9 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -973,12 +973,12 @@ foreach(unit_test ${unit_test_PROGRAMS})
 	add_executable("${unit_test}" "${CMAKE_SOURCE_DIR}/t/unit-tests/${unit_test}.c")
 	target_link_libraries("${unit_test}" unit-test-lib common-main)
 	set_target_properties("${unit_test}"
-		PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/t/unit-tests)
+		PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/t/unit-tests/bin)
 	if(MSVC)
 		set_target_properties("${unit_test}"
-			PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG ${CMAKE_BINARY_DIR}/t/unit-tests)
+			PROPERTIES RUNTIME_OUTPUT_DIRECTORY_DEBUG ${CMAKE_BINARY_DIR}/t/unit-tests/bin)
 		set_target_properties("${unit_test}"
-			PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_BINARY_DIR}/t/unit-tests)
+			PROPERTIES RUNTIME_OUTPUT_DIRECTORY_RELEASE ${CMAKE_BINARY_DIR}/t/unit-tests/bin)
 	endif()
 	list(APPEND PROGRAMS_BUILT "${unit_test}")
 
-- 
2.42.0.506.g0dd4464cfd3


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH 2/3] fixup! artifacts-tar: when including `.dll` files, don't forget the unit-tests
  2023-10-19 15:21       ` [PATCH 0/3] CMake unit test fixups Phillip Wood
  2023-10-19 15:21         ` [PATCH 1/3] fixup! cmake: also build unit tests Phillip Wood
@ 2023-10-19 15:21         ` Phillip Wood
  2023-10-19 15:21         ` [PATCH 3/3] fixup! cmake: handle also unit tests Phillip Wood
  2023-10-19 19:19         ` [PATCH 0/3] CMake unit test fixups Junio C Hamano
  3 siblings, 0 replies; 67+ messages in thread
From: Phillip Wood @ 2023-10-19 15:21 UTC (permalink / raw)
  To: gitster
  Cc: calvinwan, git, johannes.schindelin, linusa, phillip.wood123,
	rsbecker, steadmon

From: Phillip Wood <phillip.wood@dunelm.org.uk>

---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 075d8e4899..e15c34e506 100644
--- a/Makefile
+++ b/Makefile
@@ -3597,7 +3597,7 @@ rpm::
 .PHONY: rpm
 
 ifneq ($(INCLUDE_DLLS_IN_ARTIFACTS),)
-OTHER_PROGRAMS += $(shell echo *.dll t/helper/*.dll t/unit-tests/*.dll)
+OTHER_PROGRAMS += $(shell echo *.dll t/helper/*.dll t/unit-tests/bin/*.dll)
 endif
 
 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
-- 
2.42.0.506.g0dd4464cfd3


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH 3/3] fixup! cmake: handle also unit tests
  2023-10-19 15:21       ` [PATCH 0/3] CMake unit test fixups Phillip Wood
  2023-10-19 15:21         ` [PATCH 1/3] fixup! cmake: also build unit tests Phillip Wood
  2023-10-19 15:21         ` [PATCH 2/3] fixup! artifacts-tar: when including `.dll` files, don't forget the unit-tests Phillip Wood
@ 2023-10-19 15:21         ` Phillip Wood
  2023-10-19 19:19         ` [PATCH 0/3] CMake unit test fixups Junio C Hamano
  3 siblings, 0 replies; 67+ messages in thread
From: Phillip Wood @ 2023-10-19 15:21 UTC (permalink / raw)
  To: gitster
  Cc: calvinwan, git, johannes.schindelin, linusa, phillip.wood123,
	rsbecker, steadmon

From: Phillip Wood <phillip.wood@dunelm.org.uk>

---
 contrib/buildsystems/CMakeLists.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 20f38e94c9..671c7ead75 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -990,7 +990,7 @@ foreach(unit_test ${unit_test_PROGRAMS})
 	if(NOT ${unit_test} STREQUAL "t-basic")
 		add_test(NAME "t.unit-tests.${unit_test}"
 			COMMAND "./${unit_test}"
-			WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/t/unit-tests)
+			WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}/t/unit-tests/bin)
 	endif()
 endforeach()
 
-- 
2.42.0.506.g0dd4464cfd3


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH 0/3] CMake unit test fixups
  2023-10-19 15:21       ` [PATCH 0/3] CMake unit test fixups Phillip Wood
                           ` (2 preceding siblings ...)
  2023-10-19 15:21         ` [PATCH 3/3] fixup! cmake: handle also unit tests Phillip Wood
@ 2023-10-19 19:19         ` Junio C Hamano
  3 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-10-19 19:19 UTC (permalink / raw)
  To: Phillip Wood
  Cc: calvinwan, git, johannes.schindelin, linusa, rsbecker, steadmon

Phillip Wood <phillip.wood123@gmail.com> writes:

> I need these fixups to get our CI to successfully build an run the
> unit tests using CMake & MSVC. They are all adjusting paths now that
> the unit test programs are built in t/unit-tests/bin

Thanks!  Very much appreciated.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-09 22:21     ` [PATCH v8 1/3] unit tests: Add a project plan document Josh Steadmon
  2023-10-10  8:57       ` Oswald Buddenhagen
@ 2023-10-27 20:12       ` Christian Couder
  2023-11-01 17:47         ` Josh Steadmon
  1 sibling, 1 reply; 67+ messages in thread
From: Christian Couder @ 2023-10-27 20:12 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On Tue, Oct 10, 2023 at 12:22 AM Josh Steadmon <steadmon@google.com> wrote:
>
> In our current testing environment, we spend a significant amount of
> effort crafting end-to-end tests for error conditions that could easily
> be captured by unit tests (or we simply forgo some hard-to-setup and
> rare error conditions). Describe what we hope to accomplish by
> implementing unit tests, and explain some open questions and milestones.
> Discuss desired features for test frameworks/harnesses, and provide a
> preliminary comparison of several different frameworks.

Nit: Not sure why the test framework comparison is "preliminary" as we
have actually selected a unit test framework and are adding it in the
next patch of the series. I understand that this was perhaps written
before the choice was made, but maybe we might want to update that
now.

> diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> new file mode 100644
> index 0000000000..b7a89cc838
> --- /dev/null
> +++ b/Documentation/technical/unit-tests.txt
> @@ -0,0 +1,220 @@
> += Unit Testing
> +
> +In our current testing environment, we spend a significant amount of effort
> +crafting end-to-end tests for error conditions that could easily be captured by
> +unit tests (or we simply forgo some hard-to-setup and rare error conditions).
> +Unit tests additionally provide stability to the codebase and can simplify
> +debugging through isolation. Writing unit tests in pure C, rather than with our
> +current shell/test-tool helper setup, simplifies test setup, simplifies passing
> +data around (no shell-isms required), and reduces testing runtime by not
> +spawning a separate process for every test invocation.
> +
> +We believe that a large body of unit tests, living alongside the existing test
> +suite, will improve code quality for the Git project.

I agree with that.

> +== Choosing a framework
> +
> +We believe the best option is to implement a custom TAP framework for the Git
> +project. We use a version of the framework originally proposed in
> +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].

Nit: Logically I would think that our opinion should come after the
comparison and be backed by it.

> +== Choosing a test harness
> +
> +During upstream discussion, it was occasionally noted that `prove` provides many
> +convenient features, such as scheduling slower tests first, or re-running
> +previously failed tests.
> +
> +While we already support the use of `prove` as a test harness for the shell
> +tests, it is not strictly required. The t/Makefile allows running shell tests
> +directly (though with interleaved output if parallelism is enabled). Git
> +developers who wish to use `prove` as a more advanced harness can do so by
> +setting DEFAULT_TEST_TARGET=prove in their config.mak.
> +
> +We will follow a similar approach for unit tests: by default the test
> +executables will be run directly from the t/Makefile, but `prove` can be
> +configured with DEFAULT_UNIT_TEST_TARGET=prove.

Nice that it can be used.

The rest of the file looks good.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2/3] unit tests: add TAP unit test framework
  2023-10-09 22:21     ` [PATCH v8 2/3] unit tests: add TAP unit test framework Josh Steadmon
  2023-10-11 21:42       ` Junio C Hamano
  2023-10-16 13:43       ` [PATCH v8 2.5/3] fixup! " Phillip Wood
@ 2023-10-27 20:15       ` Christian Couder
  2023-11-01 22:54         ` Josh Steadmon
  2 siblings, 1 reply; 67+ messages in thread
From: Christian Couder @ 2023-10-27 20:15 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On Tue, Oct 10, 2023 at 12:22 AM Josh Steadmon <steadmon@google.com> wrote:
>
> From: Phillip Wood <phillip.wood@dunelm.org.uk>
>
> This patch contains an implementation for writing unit tests with TAP
> output. Each test is a function that contains one or more checks. The
> test is run with the TEST() macro and if any of the checks fail then the
> test will fail. A complete program that tests STRBUF_INIT would look
> like
>
>      #include "test-lib.h"
>      #include "strbuf.h"
>
>      static void t_static_init(void)
>      {
>              struct strbuf buf = STRBUF_INIT;
>
>              check_uint(buf.len, ==, 0);
>              check_uint(buf.alloc, ==, 0);
>              check_char(buf.buf[0], ==, '\0');
>      }
>
>      int main(void)
>      {
>              TEST(t_static_init(), "static initialization works);
>
>              return test_done();
>      }
>
> The output of this program would be
>
>      ok 1 - static initialization works
>      1..1
>
> If any of the checks in a test fail then they print a diagnostic message
> to aid debugging and the test will be reported as failing. For example a
> failing integer check would look like
>
>      # check "x >= 3" failed at my-test.c:102

I wonder if it would be a bit better to say that the test was an
integer test for example with "check_int(x >= 3) failed ..."

>      #    left: 2
>      #   right: 3

I like "expected" and "actual" better than "left" and "right", not
sure how it's possible to have that in a way consistent with the shell
tests though.

>      not ok 1 - x is greater than or equal to three
>
> There are a number of check functions implemented so far. check() checks
> a boolean condition, check_int(), check_uint() and check_char() take two
> values to compare and a comparison operator. check_str() will check if
> two strings are equal. Custom checks are simple to implement as shown in
> the comments above test_assert() in test-lib.h.

Yeah, nice.

> Tests can be skipped with test_skip() which can be supplied with a
> reason for skipping which it will print. Tests can print diagnostic
> messages with test_msg().  Checks that are known to fail can be wrapped
> in TEST_TODO().

Maybe TEST_TOFIX() would be a bit more clear, but "TODO" is something
that is more likely to be searched for than "TOFIX", so Ok.

> There are a couple of example test programs included in this
> patch. t-basic.c implements some self-tests and demonstrates the
> diagnostic output for failing test. The output of this program is
> checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
> unit tests for strbuf.c
>
> The unit tests will be built as part of the default "make all" target,
> to avoid bitrot. If you wish to build just the unit tests, you can run
> "make build-unit-tests". To run the tests, you can use "make unit-tests"
> or run the test binaries directly, as in "./t/unit-tests/bin/t-strbuf".

Nice!

> +unit-tests-prove:
> +       @echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)

Nice, but DEFAULT_TEST_TARGET=prove isn't used. So not sure how
important or relevant the 'prove' related sections are in the
Documentation/technical/unit-tests.txt file introduced by the previous
patch.


> +int test_assert(const char *location, const char *check, int ok)
> +{
> +       assert(ctx.running);
> +
> +       if (ctx.result == RESULT_SKIP) {
> +               test_msg("skipping check '%s' at %s", check, location);
> +               return 1;
> +       } else if (!ctx.todo) {

I think it would be a bit clearer without the "else" above and with
the "if (!ctx.todo) {" starting on a new line.

> +               if (ok) {
> +                       test_pass();
> +               } else {
> +                       test_msg("check \"%s\" failed at %s", check, location);
> +                       test_fail();
> +               }
> +       }
> +
> +       return !!ok;
> +}

Otherwise it looks good to me.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 0/3] Add unit test framework and project plan
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
                       ` (4 preceding siblings ...)
  2023-10-16 10:07     ` [PATCH v8 0/3] Add unit test framework and project plan phillip.wood123
@ 2023-10-27 20:26     ` Christian Couder
  5 siblings, 0 replies; 67+ messages in thread
From: Christian Couder @ 2023-10-27 20:26 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On Tue, Oct 10, 2023 at 12:22 AM Josh Steadmon <steadmon@google.com> wrote:
>
> In our current testing environment, we spend a significant amount of
> effort crafting end-to-end tests for error conditions that could easily
> be captured by unit tests (or we simply forgo some hard-to-setup and
> rare error conditions). Unit tests additionally provide stability to the
> codebase and can simplify debugging through isolation. Turning parts of
> Git into libraries[1] gives us the ability to run unit tests on the
> libraries and to write unit tests in C. Writing unit tests in pure C,
> rather than with our current shell/test-tool helper setup, simplifies
> test setup, simplifies passing data around (no shell-isms required), and
> reduces testing runtime by not spawning a separate process for every
> test invocation.
>
> This series begins with a project document covering our goals for adding
> unit tests and a discussion of alternative frameworks considered, as
> well as the features used to evaluate them. A rendered preview of this
> doc can be found at [2]. It also adds Phillip Wood's TAP implemenation
> (with some slightly re-worked Makefile rules) and a sample strbuf unit
> test. Finally, we modify the configs for GitHub and Cirrus CI to run the
> unit tests. Sample runs showing successful CI runs can be found at [3],
> [4], and [5].

I took a look at this version and I like it very much. I left a few
comments. My only real wish would be to use something like "actual"
and "expected" instead of "left" and "right" in case of test failure
and perhaps in the argument names of functions and macros. Not sure it
is easy to do in the same way as in the shell framework though.

Thanks!

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-11 23:05           ` Oswald Buddenhagen
@ 2023-11-01 17:31             ` Josh Steadmon
  0 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 17:31 UTC (permalink / raw)
  To: Oswald Buddenhagen
  Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On 2023.10.12 01:05, Oswald Buddenhagen wrote:
> On Wed, Oct 11, 2023 at 02:14:03PM -0700, Josh Steadmon wrote:
> > On 2023.10.10 10:57, Oswald Buddenhagen wrote:
> > > On Mon, Oct 09, 2023 at 03:21:20PM -0700, Josh Steadmon wrote:
> > > > +=== Comparison
> > > > +
> > > > +[format="csv",options="header",width="33%"]
> > > > +|=====
> > > > +Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
> > > > the redundancy seems unnecessary; asciidoc should automatically
> > > use each
> > > target's section title as the xreflabel.
> > 
> > Hmm, this doesn't seem to work for me. It only renders as
> > "[anchor-label]".
> > 
> i thought
> https://docs.asciidoctor.org/asciidoc/latest/attributes/id/#customize-automatic-xreftext
> is pretty clear about it, though. maybe the actual tooling uses an older
> version of the spec? or is buggy? or the placement of the titles is
> incorrect? or this applies to different links or targets only? or am i
> misreading something? or ...?
> 
> regards

I think the issue may be that asciidoc is the default formatter in
Documentation/Makefile, not asciidoctor.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-10-27 20:12       ` Christian Couder
@ 2023-11-01 17:47         ` Josh Steadmon
  2023-11-01 23:49           ` Junio C Hamano
  0 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 17:47 UTC (permalink / raw)
  To: Christian Couder
  Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On 2023.10.27 22:12, Christian Couder wrote:
> On Tue, Oct 10, 2023 at 12:22 AM Josh Steadmon <steadmon@google.com> wrote:
> >
> > In our current testing environment, we spend a significant amount of
> > effort crafting end-to-end tests for error conditions that could easily
> > be captured by unit tests (or we simply forgo some hard-to-setup and
> > rare error conditions). Describe what we hope to accomplish by
> > implementing unit tests, and explain some open questions and milestones.
> > Discuss desired features for test frameworks/harnesses, and provide a
> > preliminary comparison of several different frameworks.
> 
> Nit: Not sure why the test framework comparison is "preliminary" as we
> have actually selected a unit test framework and are adding it in the
> next patch of the series. I understand that this was perhaps written
> before the choice was made, but maybe we might want to update that
> now.

Fixed in v9, thanks.


> > diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> > new file mode 100644
> > index 0000000000..b7a89cc838
> > --- /dev/null
> > +++ b/Documentation/technical/unit-tests.txt
> > @@ -0,0 +1,220 @@
> > += Unit Testing
> > +
> > +In our current testing environment, we spend a significant amount of effort
> > +crafting end-to-end tests for error conditions that could easily be captured by
> > +unit tests (or we simply forgo some hard-to-setup and rare error conditions).
> > +Unit tests additionally provide stability to the codebase and can simplify
> > +debugging through isolation. Writing unit tests in pure C, rather than with our
> > +current shell/test-tool helper setup, simplifies test setup, simplifies passing
> > +data around (no shell-isms required), and reduces testing runtime by not
> > +spawning a separate process for every test invocation.
> > +
> > +We believe that a large body of unit tests, living alongside the existing test
> > +suite, will improve code quality for the Git project.
> 
> I agree with that.
> 
> > +== Choosing a framework
> > +
> > +We believe the best option is to implement a custom TAP framework for the Git
> > +project. We use a version of the framework originally proposed in
> > +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
> 
> Nit: Logically I would think that our opinion should come after the
> comparison and be backed by it.

I intended this to be a quick summary for those who don't want to read
the whole doc. I clarified that and added a link to the selection
rationale.


> > +== Choosing a test harness
> > +
> > +During upstream discussion, it was occasionally noted that `prove` provides many
> > +convenient features, such as scheduling slower tests first, or re-running
> > +previously failed tests.
> > +
> > +While we already support the use of `prove` as a test harness for the shell
> > +tests, it is not strictly required. The t/Makefile allows running shell tests
> > +directly (though with interleaved output if parallelism is enabled). Git
> > +developers who wish to use `prove` as a more advanced harness can do so by
> > +setting DEFAULT_TEST_TARGET=prove in their config.mak.
> > +
> > +We will follow a similar approach for unit tests: by default the test
> > +executables will be run directly from the t/Makefile, but `prove` can be
> > +configured with DEFAULT_UNIT_TEST_TARGET=prove.
> 
> Nice that it can be used.
> 
> The rest of the file looks good.

Thanks for the review!

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2.5/3] fixup! unit tests: add TAP unit test framework
  2023-10-16 13:43       ` [PATCH v8 2.5/3] fixup! " Phillip Wood
  2023-10-16 16:41         ` Junio C Hamano
@ 2023-11-01 17:54         ` Josh Steadmon
  2023-11-01 23:49           ` Junio C Hamano
  1 sibling, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 17:54 UTC (permalink / raw)
  To: Phillip Wood; +Cc: calvinwan, git, gitster, linusa, phillip.wood123, rsbecker

On 2023.10.16 14:43, Phillip Wood wrote:
> From: Phillip Wood <phillip.wood@dunelm.org.uk>
> 
> Here are a couple of cleanups for the unit test framework that I
> noticed.
> 
> Update the documentation of the example custom check to reflect the
> change in return value of test_assert() and mention that
> checks should be careful when dereferencing pointer arguments.
> 
> Also avoid evaluating macro augments twice in check_int() and
> friends. The global variable test__tmp was introduced to avoid
> evaluating the arguments to these macros more than once but the macros
> failed to use it when passing the values being compared to
> check_int_loc().
> 
> Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
> ---
>  t/unit-tests/test-lib.h | 26 ++++++++++++++++----------
>  1 file changed, 16 insertions(+), 10 deletions(-)

Applied in v9, thanks!

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2.5/3] fixup! unit tests: add TAP unit test framework
  2023-10-16 16:41         ` Junio C Hamano
@ 2023-11-01 17:54           ` Josh Steadmon
  2023-11-01 23:48             ` Junio C Hamano
  0 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 17:54 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Phillip Wood, calvinwan, git, linusa, rsbecker

On 2023.10.16 09:41, Junio C Hamano wrote:
> Phillip Wood <phillip.wood123@gmail.com> writes:
> 
> > From: Phillip Wood <phillip.wood@dunelm.org.uk>
> >
> > Here are a couple of cleanups for the unit test framework that I
> > noticed.
> 
> Thanks.  I trust that this will be squashed into the next update,
> but in the meantime, I'll include it in the copy of the series I
> have (without squashing).  Here is another one I noticed.
> 
> ----- >8 --------- >8 --------- >8 -----
> Subject: [PATCH] fixup! ci: run unit tests in CI
> 
> A CI job failed due to contrib/coccinelle/equals-null.cocci
> and suggested this change, which seems sensible.
> 
> Signed-off-by: Junio C Hamano <gitster@pobox.com>
> ---
>  t/unit-tests/t-strbuf.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Applied in v9, thanks!

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2/3] unit tests: add TAP unit test framework
  2023-10-27 20:15       ` [PATCH v8 2/3] " Christian Couder
@ 2023-11-01 22:54         ` Josh Steadmon
  0 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 22:54 UTC (permalink / raw)
  To: Christian Couder
  Cc: git, phillip.wood123, linusa, calvinwan, gitster, rsbecker

On 2023.10.27 22:15, Christian Couder wrote:
> On Tue, Oct 10, 2023 at 12:22 AM Josh Steadmon <steadmon@google.com> wrote:
> >
> > From: Phillip Wood <phillip.wood@dunelm.org.uk>
> >
> > This patch contains an implementation for writing unit tests with TAP
> > output. Each test is a function that contains one or more checks. The
> > test is run with the TEST() macro and if any of the checks fail then the
> > test will fail. A complete program that tests STRBUF_INIT would look
> > like
> >
> >      #include "test-lib.h"
> >      #include "strbuf.h"
> >
> >      static void t_static_init(void)
> >      {
> >              struct strbuf buf = STRBUF_INIT;
> >
> >              check_uint(buf.len, ==, 0);
> >              check_uint(buf.alloc, ==, 0);
> >              check_char(buf.buf[0], ==, '\0');
> >      }
> >
> >      int main(void)
> >      {
> >              TEST(t_static_init(), "static initialization works);
> >
> >              return test_done();
> >      }
> >
> > The output of this program would be
> >
> >      ok 1 - static initialization works
> >      1..1
> >
> > If any of the checks in a test fail then they print a diagnostic message
> > to aid debugging and the test will be reported as failing. For example a
> > failing integer check would look like
> >
> >      # check "x >= 3" failed at my-test.c:102
> 
> I wonder if it would be a bit better to say that the test was an
> integer test for example with "check_int(x >= 3) failed ..."
> 
> >      #    left: 2
> >      #   right: 3
> 
> I like "expected" and "actual" better than "left" and "right", not
> sure how it's possible to have that in a way consistent with the shell
> tests though.

I also prefer expected/actual, but I don't think it's possible where we
accept arbitrary operators, and I don't want to plumb a flag through to
specify whether to display left/right vs expected/actual.


> >      not ok 1 - x is greater than or equal to three
> >
> > There are a number of check functions implemented so far. check() checks
> > a boolean condition, check_int(), check_uint() and check_char() take two
> > values to compare and a comparison operator. check_str() will check if
> > two strings are equal. Custom checks are simple to implement as shown in
> > the comments above test_assert() in test-lib.h.
> 
> Yeah, nice.
> 
> > Tests can be skipped with test_skip() which can be supplied with a
> > reason for skipping which it will print. Tests can print diagnostic
> > messages with test_msg().  Checks that are known to fail can be wrapped
> > in TEST_TODO().
> 
> Maybe TEST_TOFIX() would be a bit more clear, but "TODO" is something
> that is more likely to be searched for than "TOFIX", so Ok.
> 
> > There are a couple of example test programs included in this
> > patch. t-basic.c implements some self-tests and demonstrates the
> > diagnostic output for failing test. The output of this program is
> > checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
> > unit tests for strbuf.c
> >
> > The unit tests will be built as part of the default "make all" target,
> > to avoid bitrot. If you wish to build just the unit tests, you can run
> > "make build-unit-tests". To run the tests, you can use "make unit-tests"
> > or run the test binaries directly, as in "./t/unit-tests/bin/t-strbuf".
> 
> Nice!
> 
> > +unit-tests-prove:
> > +       @echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
> 
> Nice, but DEFAULT_TEST_TARGET=prove isn't used. So not sure how
> important or relevant the 'prove' related sections are in the
> Documentation/technical/unit-tests.txt file introduced by the previous
> patch.

The "unit-tests" target runs DEFAULT_UNIT_TEST_TARGET, which can be
overridden to "unit-tests-prove".


> > +int test_assert(const char *location, const char *check, int ok)
> > +{
> > +       assert(ctx.running);
> > +
> > +       if (ctx.result == RESULT_SKIP) {
> > +               test_msg("skipping check '%s' at %s", check, location);
> > +               return 1;
> > +       } else if (!ctx.todo) {
> 
> I think it would be a bit clearer without the "else" above and with
> the "if (!ctx.todo) {" starting on a new line.

Fixed in v9.


> > +               if (ok) {
> > +                       test_pass();
> > +               } else {
> > +                       test_msg("check \"%s\" failed at %s", check, location);
> > +                       test_fail();
> > +               }
> > +       }
> > +
> > +       return !!ok;
> > +}
> 
> Otherwise it looks good to me.

Thanks for the review!

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 0/3] Add unit test framework and project plan
  2023-10-16 10:07     ` [PATCH v8 0/3] Add unit test framework and project plan phillip.wood123
@ 2023-11-01 23:09       ` Josh Steadmon
  0 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 23:09 UTC (permalink / raw)
  To: phillip.wood; +Cc: git, linusa, calvinwan, gitster, rsbecker

On 2023.10.16 11:07, phillip.wood123@gmail.com wrote:
> Hi Josh
> 
> Thanks for the update
> 
> On 09/10/2023 23:21, Josh Steadmon wrote:
> > In addition to reviewing the patches in this series, reviewers can help
> > this series progress by chiming in on these remaining TODOs:
> > - Figure out if we should split t-basic.c into multiple meta-tests, to
> >    avoid merge conflicts and changes to expected text in
> >    t0080-unit-test-output.sh.
> 
> I think it depends on how many new tests we think we're going to want to add
> here. I can see us adding a few more check_* macros (comparing object ids
> and arrays of bytes spring to mind) and wanting to test them here, but
> (perhaps naïvely) I don't expect huge amounts of churn here.

This is my feeling as well.


> > - Figure out if we should de-duplicate assertions in t-strbuf.c at the
> >    cost of making tests less self-contained and diagnostic output less
> >    helpful.
> 
> In principle we could pass the location information along to any helper
> function, I'm not sure how easy that is at the moment. We can get reasonable
> error messages by using the check*() macros in the helper and wrapping the
> call to the helper with check() as well. For example
> 
> static int assert_sane_strbuf(struct strbuf *buf)
> {
> 	/* Initialized strbufs should always have a non-NULL buffer */
> 	if (!check(!!buf->buf))
> 		return 0;
> 	/* Buffers should always be NUL-terminated */
> 	if (!check_char(buf->buf[buf->len], ==, '\0'))
> 		return 0;
> 	/*
> 	 * Freshly-initialized strbufs may not have a dynamically allocated
> 	 * buffer
> 	 */
> 	if (buf->len == 0 && buf->alloc == 0)
> 		return 1;
> 	/* alloc must be at least one byte larger than len */
> 	return check_uint(buf->len, <, buf->alloc);
> }
> 
> and in the test function call it as
> 
> 	check(assert_sane_strbuf(buf));
> 
> which gives error messages like
> 
> # check "buf->len < buf->alloc" failed at t/unit-tests/t-strbuf.c:43
> #    left: 5
> #   right: 0
> # check "assert_sane_strbuf(&buf)" failed at t/unit-tests/t-strbuf.c:60
> 
> So we can see where assert_sane_strbuf() was called and which assertion in
> assert_sane_strbuf() failed.

I like this approach. We'll need to document unit-test best practices,
but I think now that I'll want to do that in a separate series after
this one lands.


> > - Figure out if we should collect unit tests statistics similar to the
> >    "counts" files for shell tests
> 
> Unless someone has an immediate need for that I'd be tempted to leave it
> wait until someone requests that data.
> 
> > - Decide if it's OK to wait on sharding unit tests across "sliced" CI
> >    instances
> 
> Hopefully the unit tests will run fast enough that we don't need to worry
> about that in the early stages.
> 
> > - Provide guidelines for writing new unit tests
> 
> This is not a comprehensive list but we should recommend that
> 
> - tests avoid leaking resources so the leak sanitizer see if the code
>   being tested has a resource leak.
> 
> - tests check that pointers are not NULL before deferencing them to
>   avoid the whole program being taken down with SIGSEGV.
> 
> - tests are written with easy debugging in mind - i.e. good diagnostic
>   messages. Hopefully the check* macros make that easy to do.

Thanks for the suggestions! I will make sure these make it into the best
practices doc.


> > Changes in v8:
> > - Flipped return values for TEST, TEST_TODO, and check_* macros &
> >    functions. This makes it easier to reason about control flow for
> >    patterns like:
> >      if (check(some_condition)) { ... } > - Moved unit test binaries to t/unit-tests/bin to simplify .gitignore
> >    patterns.
> 
> Thanks for the updates to the test library, the range diff looks good to me.
> 
> > - Removed testing of some strbuf implementation details in t-strbuf.c
> 
> I agree that makes sense. I think it would be good to update
> assert_sane_strbuf() to use the check* macros as suggest above.

Fixed in v9.

> Best Wishes
> 
> Phillip

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v9 0/3] Add unit test framework and project plan
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
                     ` (5 preceding siblings ...)
  2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
@ 2023-11-01 23:31   ` Josh Steadmon
  2023-11-01 23:31     ` [PATCH v9 1/3] unit tests: Add a project plan document Josh Steadmon
                       ` (2 more replies)
  2023-11-09 18:50   ` [PATCH v10 0/3] Add unit test framework and project plan Josh Steadmon
  7 siblings, 3 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 23:31 UTC (permalink / raw)
  To: git
  Cc: gitster, phillip.wood123, rsbecker, oswald.buddenhagen,
	christian.couder

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This series begins with a project document covering our goals for adding
unit tests and a discussion of alternative frameworks considered, as
well as the features used to evaluate them. A rendered preview of this
doc can be found at [2]. It also adds Phillip Wood's TAP implemenation
(with some slightly re-worked Makefile rules) and a sample strbuf unit
test. Finally, we modify the configs for GitHub and Cirrus CI to run the
unit tests. Sample runs showing successful CI runs can be found at [3],
[4], and [5].

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc
[3] https://github.com/steadmon/git/actions/runs/5884659246/job/15959781385#step:4:1803
[4] https://github.com/steadmon/git/actions/runs/5884659246/job/15959938401#step:5:186
[5] https://cirrus-ci.com/task/6126304366428160 (unrelated tests failed,
    but note that t-strbuf ran successfully)

Changes in v9:
- Included some asciidoc cleanups suggested by Oswald Buddenhagen.
- Applied a style fixup that Coccinelle complained about.
- Applied some NULL-safety fixups.
- Used check_*() more widely in t-strbuf helper functions

Changes in v8:
- Flipped return values for TEST, TEST_TODO, and check_* macros &
  functions. This makes it easier to reason about control flow for
  patterns like:
    if (check(some_condition)) { ... }
- Moved unit test binaries to t/unit-tests/bin to simplify .gitignore
  patterns.
- Removed testing of some strbuf implementation details in t-strbuf.c

Changes in v7:
- Fix corrupt diff in patch #2, sorry for the noise.

Changes in v6:
- Officially recommend using Phillip Wood's TAP framework
- Add an example strbuf unit test using the TAP framework as well as
  Makefile integration
- Run unit tests in CI

Changes in v5:
- Add comparison point "License".
- Discuss feature priorities
- Drop frameworks:
  - Incompatible licenses: libtap, cmocka
  - Missing source: MyTAP
  - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
- Drop comparison point "Coverage reports": this can generally be
  handled by tools such as `gcov` regardless of the framework used.
- Drop comparison point "Inline tests": there didn't seem to be
  strong interest from reviewers for this feature.
- Drop comparison point "Scheduling / re-running": this was not
  supported by any of the main contenders, and is generally better
  handled by the harness rather than framework.
- Drop comparison point "Lazy test planning": this was supported by
  all frameworks that provide TAP output.

Changes in v4:
- Add link anchors for the framework comparison dimensions
- Explain "Partial" results for each dimension
- Use consistent dimension names in the section headers and comparison
  tables
- Add "Project KLOC", "Adoption", and "Inline tests" dimensions
- Fill in a few of the missing entries in the comparison table

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com


Josh Steadmon (2):
  unit tests: Add a project plan document
  ci: run unit tests in CI

Phillip Wood (1):
  unit tests: add TAP unit test framework

 .cirrus.yml                            |   2 +-
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 240 ++++++++++++++++++
 Makefile                               |  28 ++-
 ci/run-build-and-tests.sh              |   2 +
 ci/run-test-slice.sh                   |   5 +
 t/Makefile                             |  15 +-
 t/t0080-unit-test-output.sh            |  58 +++++
 t/unit-tests/.gitignore                |   1 +
 t/unit-tests/t-basic.c                 |  95 +++++++
 t/unit-tests/t-strbuf.c                | 120 +++++++++
 t/unit-tests/test-lib.c                | 329 +++++++++++++++++++++++++
 t/unit-tests/test-lib.h                | 149 +++++++++++
 13 files changed, 1040 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/technical/unit-tests.txt
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

Range-diff against v8:
1:  81c5148a12 ! 1:  f706ba9b68 unit tests: Add a project plan document
    @@ Commit message
         rare error conditions). Describe what we hope to accomplish by
         implementing unit tests, and explain some open questions and milestones.
         Discuss desired features for test frameworks/harnesses, and provide a
    -    preliminary comparison of several different frameworks.
    +    comparison of several different frameworks. Finally, document our
    +    rationale for implementing a custom framework.
     
         Co-authored-by: Calvin Wan <calvinwan@google.com>
    @@ Documentation/technical/unit-tests.txt (new)
     +can be made to work with a harness that we can choose later.
     +
     +
    -+== Choosing a framework
    ++== Summary
     +
    -+We believe the best option is to implement a custom TAP framework for the Git
    -+project. We use a version of the framework originally proposed in
    ++We believe the best way forward is to implement a custom TAP framework for the
    ++Git project. We use a version of the framework originally proposed in
     +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
     +
    ++See the <<framework-selection,Framework Selection>> section below for the
    ++rationale behind this decision.
    ++
     +
     +== Choosing a test harness
     +
    @@ Documentation/technical/unit-tests.txt (new)
     +configured with DEFAULT_UNIT_TEST_TARGET=prove.
     +
     +
    ++[[framework-selection]]
     +== Framework selection
     +
     +There are a variety of features we can use to rank the candidate frameworks, and
    @@ Documentation/technical/unit-tests.txt (new)
     +
     +=== Comparison
     +
    -+[format="csv",options="header",width="33%"]
    ++:true: [lime-background]#True#
    ++:false: [red-background]#False#
    ++:partial: [yellow-background]#Partial#
    ++
    ++:gpl: [lime-background]#GPL v2#
    ++:isc: [lime-background]#ISC#
    ++:mit: [lime-background]#MIT#
    ++:expat: [lime-background]#Expat#
    ++:lgpl: [lime-background]#LGPL v2.1#
    ++
    ++:custom-impl: https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.]
    ++:greatest: https://github.com/silentbicycle/greatest[Greatest]
    ++:criterion: https://github.com/Snaipe/Criterion[Criterion]
    ++:c-tap: https://github.com/rra/c-tap-harness/[C TAP]
    ++:check: https://libcheck.github.io/check/[Check]
    ++
    ++[format="csv",options="header",width="33%",subs="specialcharacters,attributes,quotes,macros"]
     +|=====
     +Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
    -+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#GPL v2#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,1,0
    -+https://github.com/silentbicycle/greatest[Greatest],[lime-background]#ISC#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,3,1400
    -+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#MIT#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,19,1800
    -+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#Expat#,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[lime-background]#True#,[red-background]#False#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,4,33
    -+https://libcheck.github.io/check/[Check],[lime-background]#LGPL v2.1#,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[red-background]#False#,[red-background]#False#,[red-background]#False#,[lime-background]#True#,17,973
    ++{custom-impl},{gpl},{true},{true},{true},{true},{true},{true},{false},{false},{false},1,0
    ++{greatest},{isc},{true},{partial},{true},{partial},{true},{true},{false},{false},{false},3,1400
    ++{criterion},{mit},{false},{partial},{true},{true},{true},{true},{true},{false},{true},19,1800
    ++{c-tap},{expat},{true},{partial},{partial},{true},{false},{true},{false},{false},{false},4,33
    ++{check},{lgpl},{false},{partial},{true},{true},{true},{false},{false},{false},{true},17,973
     +|=====
     +
     +=== Additional framework candidates
2:  00d3c95a81 ! 2:  8b831f4937 unit tests: add TAP unit test framework
    @@ t/unit-tests/t-strbuf.c (new)
     +static int assert_sane_strbuf(struct strbuf *buf)
     +{
     +	/* Initialized strbufs should always have a non-NULL buffer */
    -+	if (buf->buf == NULL)
    ++	if (!check(!!buf->buf))
     +		return 0;
     +	/* Buffers should always be NUL-terminated */
    -+	if (buf->buf[buf->len] != '\0')
    ++	if (!check_char(buf->buf[buf->len], ==, '\0'))
     +		return 0;
     +	/*
     +	 * Freshly-initialized strbufs may not have a dynamically allocated
    @@ t/unit-tests/t-strbuf.c (new)
     +	if (buf->len == 0 && buf->alloc == 0)
     +		return 1;
     +	/* alloc must be at least one byte larger than len */
    -+	return buf->len + 1 <= buf->alloc;
    ++	return check_uint(buf->len, <, buf->alloc);
     +}
     +
     +static void t_static_init(void)
    @@ t/unit-tests/test-lib.h (new)
     +
     +/*
     + * Test checks are built around test_assert(). checks return 1 on
    -+ * success, 0 on failure. If any check fails then the test will
    -+ * fail. To create a custom check define a function that wraps
    -+ * test_assert() and a macro to wrap that function. For example:
    ++ * success, 0 on failure. If any check fails then the test will fail. To
    ++ * create a custom check define a function that wraps test_assert() and
    ++ * a macro to wrap that function to provide a source location and
    ++ * stringified arguments. Custom checks that take pointer arguments
    ++ * should be careful to check that they are non-NULL before
    ++ * dereferencing them. For example:
     + *
     + *  static int check_oid_loc(const char *loc, const char *check,
     + *			     struct object_id *a, struct object_id *b)
     + *  {
    -+ *	    int res = test_assert(loc, check, oideq(a, b));
    ++ *	    int res = test_assert(loc, check, a && b && oideq(a, b));
     + *
    -+ *	    if (res) {
    -+ *		    test_msg("   left: %s", oid_to_hex(a);
    -+ *		    test_msg("  right: %s", oid_to_hex(a);
    ++ *	    if (!res) {
    ++ *		    test_msg("   left: %s", a ? oid_to_hex(a) : "NULL";
    ++ *		    test_msg("  right: %s", b ? oid_to_hex(a) : "NULL";
     + *
     + *	    }
     + *	    return res;
    @@ t/unit-tests/test-lib.h (new)
     +#define check_int(a, op, b)						\
     +	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
     +	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
    -+		       test__tmp[0].i op test__tmp[1].i, a, b))
    ++		       test__tmp[0].i op test__tmp[1].i,		\
    ++		       test__tmp[0].i, test__tmp[1].i))
     +int check_int_loc(const char *loc, const char *check, int ok,
     +		  intmax_t a, intmax_t b);
     +
    @@ t/unit-tests/test-lib.h (new)
     +#define check_uint(a, op, b)						\
     +	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
     +	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
    -+			test__tmp[0].u op test__tmp[1].u, a, b))
    ++			test__tmp[0].u op test__tmp[1].u,		\
    ++			test__tmp[0].u, test__tmp[1].u))
     +int check_uint_loc(const char *loc, const char *check, int ok,
     +		   uintmax_t a, uintmax_t b);
     +
    @@ t/unit-tests/test-lib.h (new)
     +#define check_char(a, op, b)						\
     +	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
     +	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
    -+			test__tmp[0].c op test__tmp[1].c, a, b))
    ++			test__tmp[0].c op test__tmp[1].c,		\
    ++			test__tmp[0].c, test__tmp[1].c))
     +int check_char_loc(const char *loc, const char *check, int ok,
     +		   char a, char b);
     +
3:  aa1dfa4892 = 3:  08d27bb5f9 ci: run unit tests in CI

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v9 1/3] unit tests: Add a project plan document
  2023-11-01 23:31   ` [PATCH v9 " Josh Steadmon
@ 2023-11-01 23:31     ` Josh Steadmon
  2023-11-01 23:31     ` [PATCH v9 2/3] unit tests: add TAP unit test framework Josh Steadmon
  2023-11-01 23:31     ` [PATCH v9 3/3] ci: run unit tests in CI Josh Steadmon
  2 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 23:31 UTC (permalink / raw)
  To: git
  Cc: gitster, phillip.wood123, rsbecker, oswald.buddenhagen,
	christian.couder

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
comparison of several different frameworks. Finally, document our
rationale for implementing a custom framework.

Co-authored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 240 +++++++++++++++++++++++++
 2 files changed, 241 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..206037ffb1
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,240 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+For now, we will evaluate projects solely on their framework features. Since we
+are relying on having TAP output (see below), we can assume that any framework
+can be made to work with a harness that we can choose later.
+
+
+== Summary
+
+We believe the best way forward is to implement a custom TAP framework for the
+Git project. We use a version of the framework originally proposed in
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
+
+See the <<framework-selection,Framework Selection>> section below for the
+rationale behind this decision.
+
+
+== Choosing a test harness
+
+During upstream discussion, it was occasionally noted that `prove` provides many
+convenient features, such as scheduling slower tests first, or re-running
+previously failed tests.
+
+While we already support the use of `prove` as a test harness for the shell
+tests, it is not strictly required. The t/Makefile allows running shell tests
+directly (though with interleaved output if parallelism is enabled). Git
+developers who wish to use `prove` as a more advanced harness can do so by
+setting DEFAULT_TEST_TARGET=prove in their config.mak.
+
+We will follow a similar approach for unit tests: by default the test
+executables will be run directly from the t/Makefile, but `prove` can be
+configured with DEFAULT_UNIT_TEST_TARGET=prove.
+
+
+[[framework-selection]]
+== Framework selection
+
+There are a variety of features we can use to rank the candidate frameworks, and
+those features have different priorities:
+
+* Critical features: we probably won't consider a framework without these
+** Can we legally / easily use the project?
+*** <<license,License>>
+*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
+*** <<maintainable-extensible,Maintainable / extensible>>
+*** <<major-platform-support,Major platform support>>
+** Does the project support our bare-minimum needs?
+*** <<tap-support,TAP support>>
+*** <<diagnostic-output,Diagnostic output>>
+*** <<runtime-skippable-tests,Runtime-skippable tests>>
+* Nice-to-have features:
+** <<parallel-execution,Parallel execution>>
+** <<mock-support,Mock support>>
+** <<signal-error-handling,Signal & error-handling>>
+* Tie-breaker stats
+** <<project-kloc,Project KLOC>>
+** <<adoption,Adoption>>
+
+[[license]]
+=== License
+
+We must be able to legally use the framework in connection with Git. As Git is
+licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
+projects.
+
+[[vendorable-or-ubiquitous]]
+=== Vendorable or ubiquitous
+
+We want to avoid forcing Git developers to install new tools just to run unit
+tests. Any prospective frameworks and harnesses must either be vendorable
+(meaning, we can copy their source directly into Git's repository), or so
+ubiquitous that it is reasonable to expect that most developers will have the
+tools installed already.
+
+[[maintainable-extensible]]
+=== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+=== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[tap-support]]
+=== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+=== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[runtime-skippable-tests]]
+=== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[parallel-execution]]
+=== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. We assume that executable-level
+parallelism can be handled by the test harness.
+
+[[mock-support]]
+=== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+=== Signal & error handling
+
+The test framework should fail gracefully when test cases are themselves buggy
+or when they are interrupted by signals during runtime.
+
+[[project-kloc]]
+=== Project KLOC
+
+The size of the project, in thousands of lines of code as measured by
+https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
+1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+=== Adoption
+
+As a tie-breaker, we prefer a more widely-used project. We use the number of
+GitHub / GitLab stars to estimate this.
+
+
+=== Comparison
+
+:true: [lime-background]#True#
+:false: [red-background]#False#
+:partial: [yellow-background]#Partial#
+
+:gpl: [lime-background]#GPL v2#
+:isc: [lime-background]#ISC#
+:mit: [lime-background]#MIT#
+:expat: [lime-background]#Expat#
+:lgpl: [lime-background]#LGPL v2.1#
+
+:custom-impl: https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.]
+:greatest: https://github.com/silentbicycle/greatest[Greatest]
+:criterion: https://github.com/Snaipe/Criterion[Criterion]
+:c-tap: https://github.com/rra/c-tap-harness/[C TAP]
+:check: https://libcheck.github.io/check/[Check]
+
+[format="csv",options="header",width="33%",subs="specialcharacters,attributes,quotes,macros"]
+|=====
+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
+{custom-impl},{gpl},{true},{true},{true},{true},{true},{true},{false},{false},{false},1,0
+{greatest},{isc},{true},{partial},{true},{partial},{true},{true},{false},{false},{false},3,1400
+{criterion},{mit},{false},{partial},{true},{true},{true},{true},{true},{false},{true},19,1800
+{c-tap},{expat},{true},{partial},{partial},{true},{false},{true},{false},{false},{false},4,33
+{check},{lgpl},{false},{partial},{true},{true},{true},{false},{false},{false},{true},17,973
+|=====
+
+=== Additional framework candidates
+
+Several suggested frameworks have been eliminated from consideration:
+
+* Incompatible licenses:
+** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
+** https://cmocka.org/[cmocka] (Apache 2.0)
+* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
+* No TAP support:
+** https://nemequ.github.io/munit/[µnit]
+** https://github.com/google/cmockery[cmockery]
+** https://github.com/lpabon/cmockery2[cmockery2]
+** https://github.com/ThrowTheSwitch/Unity[Unity]
+** https://github.com/siu/minunit[minunit]
+** https://cunit.sourceforge.net/[CUnit]
+
+
+== Milestones
+
+* Add useful tests of library-like code
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v9 2/3] unit tests: add TAP unit test framework
  2023-11-01 23:31   ` [PATCH v9 " Josh Steadmon
  2023-11-01 23:31     ` [PATCH v9 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-11-01 23:31     ` Josh Steadmon
  2023-11-03 21:54       ` Christian Couder
  2023-11-01 23:31     ` [PATCH v9 3/3] ci: run unit tests in CI Josh Steadmon
  2 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 23:31 UTC (permalink / raw)
  To: git
  Cc: gitster, phillip.wood123, rsbecker, oswald.buddenhagen,
	christian.couder

From: Phillip Wood <phillip.wood@dunelm.org.uk>

This patch contains an implementation for writing unit tests with TAP
output. Each test is a function that contains one or more checks. The
test is run with the TEST() macro and if any of the checks fail then the
test will fail. A complete program that tests STRBUF_INIT would look
like

     #include "test-lib.h"
     #include "strbuf.h"

     static void t_static_init(void)
     {
             struct strbuf buf = STRBUF_INIT;

             check_uint(buf.len, ==, 0);
             check_uint(buf.alloc, ==, 0);
             check_char(buf.buf[0], ==, '\0');
     }

     int main(void)
     {
             TEST(t_static_init(), "static initialization works);

             return test_done();
     }

The output of this program would be

     ok 1 - static initialization works
     1..1

If any of the checks in a test fail then they print a diagnostic message
to aid debugging and the test will be reported as failing. For example a
failing integer check would look like

     # check "x >= 3" failed at my-test.c:102
     #    left: 2
     #   right: 3
     not ok 1 - x is greater than or equal to three

There are a number of check functions implemented so far. check() checks
a boolean condition, check_int(), check_uint() and check_char() take two
values to compare and a comparison operator. check_str() will check if
two strings are equal. Custom checks are simple to implement as shown in
the comments above test_assert() in test-lib.h.

Tests can be skipped with test_skip() which can be supplied with a
reason for skipping which it will print. Tests can print diagnostic
messages with test_msg().  Checks that are known to fail can be wrapped
in TEST_TODO().

There are a couple of example test programs included in this
patch. t-basic.c implements some self-tests and demonstrates the
diagnostic output for failing test. The output of this program is
checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
unit tests for strbuf.c

The unit tests will be built as part of the default "make all" target,
to avoid bitrot. If you wish to build just the unit tests, you can run
"make build-unit-tests". To run the tests, you can use "make unit-tests"
or run the test binaries directly, as in "./t/unit-tests/bin/t-strbuf".

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Makefile                    |  28 ++-
 t/Makefile                  |  15 +-
 t/t0080-unit-test-output.sh |  58 +++++++
 t/unit-tests/.gitignore     |   1 +
 t/unit-tests/t-basic.c      |  95 +++++++++++
 t/unit-tests/t-strbuf.c     | 120 +++++++++++++
 t/unit-tests/test-lib.c     | 329 ++++++++++++++++++++++++++++++++++++
 t/unit-tests/test-lib.h     | 149 ++++++++++++++++
 8 files changed, 791 insertions(+), 4 deletions(-)
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

diff --git a/Makefile b/Makefile
index e440728c24..18c13f06c0 100644
--- a/Makefile
+++ b/Makefile
@@ -682,6 +682,9 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests
+UNIT_TEST_BIN = $(UNIT_TEST_DIR)/bin
 
 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1331,6 +1334,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
 THIRD_PARTY_SOURCES += sha1collisiondetection/%
 THIRD_PARTY_SOURCES += sha1dc/%
 
+UNIT_TEST_PROGRAMS += t-basic
+UNIT_TEST_PROGRAMS += t-strbuf
+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_BIN)/%$X,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
+
 # xdiff and reftable libs may in turn depend on what is in libgit.a
 GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
 EXTLIBS =
@@ -2672,6 +2681,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(UNIT_TEST_OBJS)
 
 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3167,7 +3177,7 @@ endif
 
 test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
 
-all:: $(TEST_PROGRAMS) $(test_bindir_programs)
+all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
 
 bin-wrappers/%: wrap-for-bin.sh
 	$(call mkdir_p_parent_template)
@@ -3592,7 +3602,7 @@ endif
 
 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
-		$(MOFILES)
+		$(UNIT_TEST_PROGS) $(MOFILES)
 	$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
 		SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
 	test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3653,7 +3663,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3831,3 +3841,15 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
 
 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_BIN):
+	@mkdir -p $(UNIT_TEST_BIN)
+
+$(UNIT_TEST_PROGS): $(UNIT_TEST_BIN)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS $(UNIT_TEST_BIN)
+	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGS)
+unit-tests: $(UNIT_TEST_PROGS)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..75d9330437 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -17,6 +17,7 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+DEFAULT_UNIT_TEST_TARGET ?= unit-tests-raw
 TEST_LINT ?= test-lint
 
 ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(filter-out unit-tests/bin/t-basic%,$(wildcard unit-tests/bin/t-*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,17 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
+$(UNIT_TESTS):
+	@echo "*** $@ ***"; $@
+
+.PHONY: unit-tests unit-tests-raw unit-tests-prove
+unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
+
+unit-tests-raw: $(UNIT_TESTS)
+
+unit-tests-prove:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
@@ -149,4 +162,4 @@ perf:
 	$(MAKE) -C perf/ all
 
 .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
-	check-chainlint clean-chainlint test-chainlint
+	check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
new file mode 100755
index 0000000000..961b54b06c
--- /dev/null
+++ b/t/t0080-unit-test-output.sh
@@ -0,0 +1,58 @@
+#!/bin/sh
+
+test_description='Test the output of the unit test framework'
+
+. ./test-lib.sh
+
+test_expect_success 'TAP output from unit tests' '
+	cat >expect <<-EOF &&
+	ok 1 - passing test
+	ok 2 - passing test and assertion return 1
+	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
+	#    left: 1
+	#   right: 2
+	not ok 3 - failing test
+	ok 4 - failing test and assertion return 0
+	not ok 5 - passing TEST_TODO() # TODO
+	ok 6 - passing TEST_TODO() returns 1
+	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
+	not ok 7 - failing TEST_TODO()
+	ok 8 - failing TEST_TODO() returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:30
+	# skipping test - missing prerequisite
+	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
+	ok 9 - test_skip() # SKIP
+	ok 10 - skipped test returns 1
+	# skipping test - missing prerequisite
+	ok 11 - test_skip() inside TEST_TODO() # SKIP
+	ok 12 - test_skip() inside TEST_TODO() returns 1
+	# check "0" failed at t/unit-tests/t-basic.c:48
+	not ok 13 - TEST_TODO() after failing check
+	ok 14 - TEST_TODO() after failing check returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:56
+	not ok 15 - failing check after TEST_TODO()
+	ok 16 - failing check after TEST_TODO() returns 0
+	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
+	#    left: "\011hello\\\\"
+	#   right: "there\"\012"
+	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
+	#    left: "NULL"
+	#   right: NULL
+	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
+	#    left: ${SQ}a${SQ}
+	#   right: ${SQ}\012${SQ}
+	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
+	#    left: ${SQ}\\\\${SQ}
+	#   right: ${SQ}\\${SQ}${SQ}
+	not ok 17 - messages from failing string and char comparison
+	# BUG: test has no checks at t/unit-tests/t-basic.c:91
+	not ok 18 - test with no checks
+	ok 19 - test with no checks returns 0
+	1..19
+	EOF
+
+	! "$GIT_BUILD_DIR"/t/unit-tests/bin/t-basic >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
new file mode 100644
index 0000000000..5e56e040ec
--- /dev/null
+++ b/t/unit-tests/.gitignore
@@ -0,0 +1 @@
+/bin
diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
new file mode 100644
index 0000000000..fda1ae59a6
--- /dev/null
+++ b/t/unit-tests/t-basic.c
@@ -0,0 +1,95 @@
+#include "test-lib.h"
+
+/*
+ * The purpose of this "unit test" is to verify a few invariants of the unit
+ * test framework itself, as well as to provide examples of output from actually
+ * failing tests. As such, it is intended that this test fails, and thus it
+ * should not be run as part of `make unit-tests`. Instead, we verify it behaves
+ * as expected in the integration test t0080-unit-test-output.sh
+ */
+
+/* Used to store the return value of check_int(). */
+static int check_res;
+
+/* Used to store the return value of TEST(). */
+static int test_res;
+
+static void t_res(int expect)
+{
+	check_int(check_res, ==, expect);
+	check_int(test_res, ==, expect);
+}
+
+static void t_todo(int x)
+{
+	check_res = TEST_TODO(check(x));
+}
+
+static void t_skip(void)
+{
+	check(0);
+	test_skip("missing prerequisite");
+	check(1);
+}
+
+static int do_skip(void)
+{
+	test_skip("missing prerequisite");
+	return 1;
+}
+
+static void t_skip_todo(void)
+{
+	check_res = TEST_TODO(do_skip());
+}
+
+static void t_todo_after_fail(void)
+{
+	check(0);
+	TEST_TODO(check(0));
+}
+
+static void t_fail_after_todo(void)
+{
+	check(1);
+	TEST_TODO(check(0));
+	check(0);
+}
+
+static void t_messages(void)
+{
+	check_str("\thello\\", "there\"\n");
+	check_str("NULL", NULL);
+	check_char('a', ==, '\n');
+	check_char('\\', ==, '\'');
+}
+
+static void t_empty(void)
+{
+	; /* empty */
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
+	TEST(t_res(1), "passing test and assertion return 1");
+	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
+	TEST(t_res(0), "failing test and assertion return 0");
+	test_res = TEST(t_todo(0), "passing TEST_TODO()");
+	TEST(t_res(1), "passing TEST_TODO() returns 1");
+	test_res = TEST(t_todo(1), "failing TEST_TODO()");
+	TEST(t_res(0), "failing TEST_TODO() returns 0");
+	test_res = TEST(t_skip(), "test_skip()");
+	TEST(check_int(test_res, ==, 1), "skipped test returns 1");
+	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
+	TEST(t_res(1), "test_skip() inside TEST_TODO() returns 1");
+	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
+	TEST(check_int(test_res, ==, 0), "TEST_TODO() after failing check returns 0");
+	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
+	TEST(check_int(test_res, ==, 0), "failing check after TEST_TODO() returns 0");
+	TEST(t_messages(), "messages from failing string and char comparison");
+	test_res = TEST(t_empty(), "test with no checks");
+	TEST(check_int(test_res, ==, 0), "test with no checks returns 0");
+
+	return test_done();
+}
diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
new file mode 100644
index 0000000000..de434a4441
--- /dev/null
+++ b/t/unit-tests/t-strbuf.c
@@ -0,0 +1,120 @@
+#include "test-lib.h"
+#include "strbuf.h"
+
+/* wrapper that supplies tests with an empty, initialized strbuf */
+static void setup(void (*f)(struct strbuf*, void*), void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+}
+
+/* wrapper that supplies tests with a populated, initialized strbuf */
+static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	strbuf_addstr(&buf, init_str);
+	check_uint(buf.len, ==, strlen(init_str));
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+}
+
+static int assert_sane_strbuf(struct strbuf *buf)
+{
+	/* Initialized strbufs should always have a non-NULL buffer */
+	if (!check(!!buf->buf))
+		return 0;
+	/* Buffers should always be NUL-terminated */
+	if (!check_char(buf->buf[buf->len], ==, '\0'))
+		return 0;
+	/*
+	 * Freshly-initialized strbufs may not have a dynamically allocated
+	 * buffer
+	 */
+	if (buf->len == 0 && buf->alloc == 0)
+		return 1;
+	/* alloc must be at least one byte larger than len */
+	return check_uint(buf->len, <, buf->alloc);
+}
+
+static void t_static_init(void)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_dynamic_init(void)
+{
+	struct strbuf buf;
+
+	strbuf_init(&buf, 1024);
+	check(assert_sane_strbuf(&buf));
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, >=, 1024);
+	check_char(buf.buf[0], ==, '\0');
+	strbuf_release(&buf);
+}
+
+static void t_addch(struct strbuf *buf, void *data)
+{
+	const char *p_ch = data;
+	const char ch = *p_ch;
+	size_t orig_alloc = buf->alloc;
+	size_t orig_len = buf->len;
+
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	strbuf_addch(buf, ch);
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	if (!(check_uint(buf->len, ==, orig_len + 1) &&
+	      check_uint(buf->alloc, >=, orig_alloc)))
+		return; /* avoid de-referencing buf->buf */
+	check_char(buf->buf[buf->len - 1], ==, ch);
+	check_char(buf->buf[buf->len], ==, '\0');
+}
+
+static void t_addstr(struct strbuf *buf, void *data)
+{
+	const char *text = data;
+	size_t len = strlen(text);
+	size_t orig_alloc = buf->alloc;
+	size_t orig_len = buf->len;
+
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	strbuf_addstr(buf, text);
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	if (!(check_uint(buf->len, ==, orig_len + len) &&
+	      check_uint(buf->alloc, >=, orig_alloc) &&
+	      check_uint(buf->alloc, >, orig_len + len) &&
+	      check_char(buf->buf[orig_len + len], ==, '\0')))
+	    return;
+	check_str(buf->buf + orig_len, text);
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	if (!TEST(t_static_init(), "static initialization works"))
+		test_skip_all("STRBUF_INIT is broken");
+	TEST(t_dynamic_init(), "dynamic initialization works");
+	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
+	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
+	TEST(setup_populated(t_addch, "initial value", "a"),
+	     "strbuf_addch appends to initial value");
+	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
+	TEST(setup_populated(t_addstr, "initial value", "hello there"),
+	     "strbuf_addstr appends string to initial value");
+
+	return test_done();
+}
diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
new file mode 100644
index 0000000000..b20f543121
--- /dev/null
+++ b/t/unit-tests/test-lib.c
@@ -0,0 +1,329 @@
+#include "test-lib.h"
+
+enum result {
+	RESULT_NONE,
+	RESULT_FAILURE,
+	RESULT_SKIP,
+	RESULT_SUCCESS,
+	RESULT_TODO
+};
+
+static struct {
+	enum result result;
+	int count;
+	unsigned failed :1;
+	unsigned lazy_plan :1;
+	unsigned running :1;
+	unsigned skip_all :1;
+	unsigned todo :1;
+} ctx = {
+	.lazy_plan = 1,
+	.result = RESULT_NONE,
+};
+
+static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
+{
+	fflush(stderr);
+	if (prefix)
+		fprintf(stdout, "%s", prefix);
+	vprintf(format, ap); /* TODO: handle newlines */
+	putc('\n', stdout);
+	fflush(stdout);
+}
+
+void test_msg(const char *format, ...)
+{
+	va_list ap;
+
+	va_start(ap, format);
+	msg_with_prefix("# ", format, ap);
+	va_end(ap);
+}
+
+void test_plan(int count)
+{
+	assert(!ctx.running);
+
+	fflush(stderr);
+	printf("1..%d\n", count);
+	fflush(stdout);
+	ctx.lazy_plan = 0;
+}
+
+int test_done(void)
+{
+	assert(!ctx.running);
+
+	if (ctx.lazy_plan)
+		test_plan(ctx.count);
+
+	return ctx.failed;
+}
+
+void test_skip(const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	if (format)
+		msg_with_prefix("# skipping test - ", format, ap);
+	va_end(ap);
+}
+
+void test_skip_all(const char *format, ...)
+{
+	va_list ap;
+	const char *prefix;
+
+	if (!ctx.count && ctx.lazy_plan) {
+		/* We have not printed a test plan yet */
+		prefix = "1..0 # SKIP ";
+		ctx.lazy_plan = 0;
+	} else {
+		/* We have already printed a test plan */
+		prefix = "Bail out! # ";
+		ctx.failed = 1;
+	}
+	ctx.skip_all = 1;
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	msg_with_prefix(prefix, format, ap);
+	va_end(ap);
+}
+
+int test__run_begin(void)
+{
+	assert(!ctx.running);
+
+	ctx.count++;
+	ctx.result = RESULT_NONE;
+	ctx.running = 1;
+
+	return ctx.skip_all;
+}
+
+static void print_description(const char *format, va_list ap)
+{
+	if (format) {
+		fputs(" - ", stdout);
+		vprintf(format, ap);
+	}
+}
+
+int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	fflush(stderr);
+	va_start(ap, format);
+	if (!ctx.skip_all) {
+		switch (ctx.result) {
+		case RESULT_SUCCESS:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_FAILURE:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_TODO:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # TODO");
+			break;
+
+		case RESULT_SKIP:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # SKIP");
+			break;
+
+		case RESULT_NONE:
+			test_msg("BUG: test has no checks at %s", location);
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			ctx.result = RESULT_FAILURE;
+			break;
+		}
+	}
+	va_end(ap);
+	ctx.running = 0;
+	if (ctx.skip_all)
+		return 1;
+	putc('\n', stdout);
+	fflush(stdout);
+	ctx.failed |= ctx.result == RESULT_FAILURE;
+
+	return ctx.result != RESULT_FAILURE;
+}
+
+static void test_fail(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	ctx.result = RESULT_FAILURE;
+}
+
+static void test_pass(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result == RESULT_NONE)
+		ctx.result = RESULT_SUCCESS;
+}
+
+static void test_todo(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result != RESULT_FAILURE)
+		ctx.result = RESULT_TODO;
+}
+
+int test_assert(const char *location, const char *check, int ok)
+{
+	assert(ctx.running);
+
+	if (ctx.result == RESULT_SKIP) {
+		test_msg("skipping check '%s' at %s", check, location);
+		return 1;
+	} else if (!ctx.todo) {
+		if (ok) {
+			test_pass();
+		} else {
+			test_msg("check \"%s\" failed at %s", check, location);
+			test_fail();
+		}
+	}
+
+	return !!ok;
+}
+
+void test__todo_begin(void)
+{
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	ctx.todo = 1;
+}
+
+int test__todo_end(const char *location, const char *check, int res)
+{
+	assert(ctx.running);
+	assert(ctx.todo);
+
+	ctx.todo = 0;
+	if (ctx.result == RESULT_SKIP)
+		return 1;
+	if (res) {
+		test_msg("todo check '%s' succeeded at %s", check, location);
+		test_fail();
+	} else {
+		test_todo();
+	}
+
+	return !res;
+}
+
+int check_bool_loc(const char *loc, const char *check, int ok)
+{
+	return test_assert(loc, check, ok);
+}
+
+union test__tmp test__tmp[2];
+
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		test_msg("   left: %"PRIdMAX, a);
+		test_msg("  right: %"PRIdMAX, b);
+	}
+
+	return ret;
+}
+
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		test_msg("   left: %"PRIuMAX, a);
+		test_msg("  right: %"PRIuMAX, b);
+	}
+
+	return ret;
+}
+
+static void print_one_char(char ch, char quote)
+{
+	if ((unsigned char)ch < 0x20u || ch == 0x7f) {
+		/* TODO: improve handling of \a, \b, \f ... */
+		printf("\\%03o", (unsigned char)ch);
+	} else {
+		if (ch == '\\' || ch == quote)
+			putc('\\', stdout);
+		putc(ch, stdout);
+	}
+}
+
+static void print_char(const char *prefix, char ch)
+{
+	printf("# %s: '", prefix);
+	print_one_char(ch, '\'');
+	fputs("'\n", stdout);
+}
+
+int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		fflush(stderr);
+		print_char("   left", a);
+		print_char("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
+
+static void print_str(const char *prefix, const char *str)
+{
+	printf("# %s: ", prefix);
+	if (!str) {
+		fputs("NULL\n", stdout);
+	} else {
+		putc('"', stdout);
+		while (*str)
+			print_one_char(*str++, '"');
+		fputs("\"\n", stdout);
+	}
+}
+
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b)
+{
+	int ok = (!a && !b) || (a && b && !strcmp(a, b));
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		fflush(stderr);
+		print_str("   left", a);
+		print_str("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
new file mode 100644
index 0000000000..a8f07ae0b7
--- /dev/null
+++ b/t/unit-tests/test-lib.h
@@ -0,0 +1,149 @@
+#ifndef TEST_LIB_H
+#define TEST_LIB_H
+
+#include "git-compat-util.h"
+
+/*
+ * Run a test function, returns 1 if the test succeeds, 0 if it
+ * fails. If test_skip_all() has been called then the test will not be
+ * run. The description for each test should be unique. For example:
+ *
+ *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
+ */
+#define TEST(t, ...)					\
+	test__run_end(test__run_begin() ? 0 : (t, 1),	\
+		      TEST_LOCATION(),  __VA_ARGS__)
+
+/*
+ * Print a test plan, should be called before any tests. If the number
+ * of tests is not known in advance test_done() will automatically
+ * print a plan at the end of the test program.
+ */
+void test_plan(int count);
+
+/*
+ * test_done() must be called at the end of main(). It will print the
+ * plan if plan() was not called at the beginning of the test program
+ * and returns the exit code for the test program.
+ */
+int test_done(void);
+
+/* Skip the current test. */
+__attribute__((format (printf, 1, 2)))
+void test_skip(const char *format, ...);
+
+/* Skip all remaining tests. */
+__attribute__((format (printf, 1, 2)))
+void test_skip_all(const char *format, ...);
+
+/* Print a diagnostic message to stdout. */
+__attribute__((format (printf, 1, 2)))
+void test_msg(const char *format, ...);
+
+/*
+ * Test checks are built around test_assert(). checks return 1 on
+ * success, 0 on failure. If any check fails then the test will fail. To
+ * create a custom check define a function that wraps test_assert() and
+ * a macro to wrap that function to provide a source location and
+ * stringified arguments. Custom checks that take pointer arguments
+ * should be careful to check that they are non-NULL before
+ * dereferencing them. For example:
+ *
+ *  static int check_oid_loc(const char *loc, const char *check,
+ *			     struct object_id *a, struct object_id *b)
+ *  {
+ *	    int res = test_assert(loc, check, a && b && oideq(a, b));
+ *
+ *	    if (!res) {
+ *		    test_msg("   left: %s", a ? oid_to_hex(a) : "NULL";
+ *		    test_msg("  right: %s", b ? oid_to_hex(a) : "NULL";
+ *
+ *	    }
+ *	    return res;
+ *  }
+ *
+ *  #define check_oid(a, b) \
+ *	    check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
+ */
+int test_assert(const char *location, const char *check, int ok);
+
+/* Helper macro to pass the location to checks */
+#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
+
+/* Check a boolean condition. */
+#define check(x)				\
+	check_bool_loc(TEST_LOCATION(), #x, x)
+int check_bool_loc(const char *loc, const char *check, int ok);
+
+/*
+ * Compare two integers. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_int(a, op, b)						\
+	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
+	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+		       test__tmp[0].i op test__tmp[1].i,		\
+		       test__tmp[0].i, test__tmp[1].i))
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b);
+
+/*
+ * Compare two unsigned integers. Prints a message with the two values
+ * if the comparison fails. NB this is not thread safe.
+ */
+#define check_uint(a, op, b)						\
+	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
+	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].u op test__tmp[1].u,		\
+			test__tmp[0].u, test__tmp[1].u))
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b);
+
+/*
+ * Compare two chars. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_char(a, op, b)						\
+	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
+	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].c op test__tmp[1].c,		\
+			test__tmp[0].c, test__tmp[1].c))
+int check_char_loc(const char *loc, const char *check, int ok,
+		   char a, char b);
+
+/* Check whether two strings are equal. */
+#define check_str(a, b)							\
+	check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b);
+
+/*
+ * Wrap a check that is known to fail. If the check succeeds then the
+ * test will fail. Returns 1 if the check fails, 0 if it
+ * succeeds. For example:
+ *
+ *  TEST_TODO(check(0));
+ */
+#define TEST_TODO(check) \
+	(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
+
+/* Private helpers */
+
+#define TEST__STR(x) #x
+#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
+
+union test__tmp {
+	intmax_t i;
+	uintmax_t u;
+	char c;
+};
+
+extern union test__tmp test__tmp[2];
+
+int test__run_begin(void);
+__attribute__((format (printf, 3, 4)))
+int test__run_end(int, const char *, const char *, ...);
+void test__todo_begin(void);
+int test__todo_end(const char *, const char *, int);
+
+#endif /* TEST_LIB_H */
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v9 3/3] ci: run unit tests in CI
  2023-11-01 23:31   ` [PATCH v9 " Josh Steadmon
  2023-11-01 23:31     ` [PATCH v9 1/3] unit tests: Add a project plan document Josh Steadmon
  2023-11-01 23:31     ` [PATCH v9 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-11-01 23:31     ` Josh Steadmon
  2 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-01 23:31 UTC (permalink / raw)
  To: git
  Cc: gitster, phillip.wood123, rsbecker, oswald.buddenhagen,
	christian.couder

Run unit tests in both Cirrus and GitHub CI. For sharded CI instances
(currently just Windows on GitHub), run only on the first shard. This is
OK while we have only a single unit test executable, but we may wish to
distribute tests more evenly when we add new unit tests in the future.

We may also want to add more status output in our unit test framework,
so that we can do similar post-processing as in
ci/lib.sh:handle_failed_tests().

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 .cirrus.yml               | 2 +-
 ci/run-build-and-tests.sh | 2 ++
 ci/run-test-slice.sh      | 5 +++++
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 4860bebd32..b6280692d2 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -19,4 +19,4 @@ freebsd_12_task:
   build_script:
     - su git -c gmake
   test_script:
-    - su git -c 'gmake test'
+    - su git -c 'gmake DEFAULT_UNIT_TEST_TARGET=unit-tests-prove test unit-tests'
diff --git a/ci/run-build-and-tests.sh b/ci/run-build-and-tests.sh
index 2528f25e31..7a1466b868 100755
--- a/ci/run-build-and-tests.sh
+++ b/ci/run-build-and-tests.sh
@@ -50,6 +50,8 @@ if test -n "$run_tests"
 then
 	group "Run tests" make test ||
 	handle_failed_tests
+	group "Run unit tests" \
+		make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests
 fi
 check_unignored_build_artifacts
 
diff --git a/ci/run-test-slice.sh b/ci/run-test-slice.sh
index a3c67956a8..ae8094382f 100755
--- a/ci/run-test-slice.sh
+++ b/ci/run-test-slice.sh
@@ -15,4 +15,9 @@ group "Run tests" make --quiet -C t T="$(cd t &&
 	tr '\n' ' ')" ||
 handle_failed_tests
 
+# We only have one unit test at the moment, so run it in the first slice
+if [ "$1" == "0" ] ; then
+	group "Run unit tests" make --quiet -C t unit-tests-prove
+fi
+
 check_unignored_build_artifacts
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2.5/3] fixup! unit tests: add TAP unit test framework
  2023-11-01 17:54           ` Josh Steadmon
@ 2023-11-01 23:48             ` Junio C Hamano
  0 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-11-01 23:48 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: Phillip Wood, calvinwan, git, linusa, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> On 2023.10.16 09:41, Junio C Hamano wrote:
>> Phillip Wood <phillip.wood123@gmail.com> writes:
>> 
>> > From: Phillip Wood <phillip.wood@dunelm.org.uk>
>> >
>> > Here are a couple of cleanups for the unit test framework that I
>> > noticed.
>> 
>> Thanks.  I trust that this will be squashed into the next update,
>> but in the meantime, I'll include it in the copy of the series I
>> have (without squashing).  Here is another one I noticed.
>> 
>> ----- >8 --------- >8 --------- >8 -----
>> Subject: [PATCH] fixup! ci: run unit tests in CI
>> 
>> A CI job failed due to contrib/coccinelle/equals-null.cocci
>> and suggested this change, which seems sensible.
>> 
>> Signed-off-by: Junio C Hamano <gitster@pobox.com>
>> ---
>>  t/unit-tests/t-strbuf.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> Applied in v9, thanks!

Thanks for working well together.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 1/3] unit tests: Add a project plan document
  2023-11-01 17:47         ` Josh Steadmon
@ 2023-11-01 23:49           ` Junio C Hamano
  0 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-11-01 23:49 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: Christian Couder, git, phillip.wood123, linusa, calvinwan,
	rsbecker

Josh Steadmon <steadmon@google.com> writes:

> On 2023.10.27 22:12, Christian Couder wrote:
>> On Tue, Oct 10, 2023 at 12:22 AM Josh Steadmon <steadmon@google.com> wrote:
>> >
>> > In our current testing environment, we spend a significant amount of
>> > effort crafting end-to-end tests for error conditions that could easily
>> > be captured by unit tests (or we simply forgo some hard-to-setup and
>> > rare error conditions). Describe what we hope to accomplish by
>> > implementing unit tests, and explain some open questions and milestones.
>> > Discuss desired features for test frameworks/harnesses, and provide a
>> > preliminary comparison of several different frameworks.
>> 
>> Nit: Not sure why the test framework comparison is "preliminary" as we
>> have actually selected a unit test framework and are adding it in the
>> next patch of the series. I understand that this was perhaps written
>> before the choice was made, but maybe we might want to update that
>> now.
>
> Fixed in v9, thanks.

Thanks for working well together.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v8 2.5/3] fixup! unit tests: add TAP unit test framework
  2023-11-01 17:54         ` Josh Steadmon
@ 2023-11-01 23:49           ` Junio C Hamano
  0 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-11-01 23:49 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: Phillip Wood, calvinwan, git, linusa, phillip.wood123, rsbecker

Josh Steadmon <steadmon@google.com> writes:

> On 2023.10.16 14:43, Phillip Wood wrote:
>> From: Phillip Wood <phillip.wood@dunelm.org.uk>
>> 
>> Here are a couple of cleanups for the unit test framework that I
>> noticed.
>> 
>> Update the documentation of the example custom check to reflect the
>> change in return value of test_assert() and mention that
>> checks should be careful when dereferencing pointer arguments.
>> 
>> Also avoid evaluating macro augments twice in check_int() and
>> friends. The global variable test__tmp was introduced to avoid
>> evaluating the arguments to these macros more than once but the macros
>> failed to use it when passing the values being compared to
>> check_int_loc().
>> 
>> Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
>> ---
>>  t/unit-tests/test-lib.h | 26 ++++++++++++++++----------
>>  1 file changed, 16 insertions(+), 10 deletions(-)
>
> Applied in v9, thanks!

Thanks for working well together.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v9 2/3] unit tests: add TAP unit test framework
  2023-11-01 23:31     ` [PATCH v9 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-11-03 21:54       ` Christian Couder
  2023-11-09 17:51         ` Josh Steadmon
  0 siblings, 1 reply; 67+ messages in thread
From: Christian Couder @ 2023-11-03 21:54 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: git, Junio C Hamano, Phillip Wood, Randall S. Becker,
	Oswald Buddenhagen

On Thu, Nov 2, 2023 at 12:31 AM Josh Steadmon <steadmon@google.com> wrote:
>
> From: Phillip Wood <phillip.wood@dunelm.org.uk>

> +int test_assert(const char *location, const char *check, int ok)
> +{
> +       assert(ctx.running);
> +
> +       if (ctx.result == RESULT_SKIP) {
> +               test_msg("skipping check '%s' at %s", check, location);
> +               return 1;
> +       } else if (!ctx.todo) {

I suggested removing the "else" and moving the "if (!ctx.todo) {" to
its own line in the previous round and thought you agreed with that,
but maybe it fell through the cracks somehow.

Anyway I think this is a minor nit, and the series looks good to me.

> +               if (ok) {
> +                       test_pass();
> +               } else {
> +                       test_msg("check \"%s\" failed at %s", check, location);
> +                       test_fail();
> +               }
> +       }
> +
> +       return !!ok;
> +}

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v9 2/3] unit tests: add TAP unit test framework
  2023-11-03 21:54       ` Christian Couder
@ 2023-11-09 17:51         ` Josh Steadmon
  0 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-09 17:51 UTC (permalink / raw)
  To: Christian Couder
  Cc: git, Junio C Hamano, Phillip Wood, Randall S. Becker,
	Oswald Buddenhagen

On 2023.11.03 22:54, Christian Couder wrote:
> On Thu, Nov 2, 2023 at 12:31 AM Josh Steadmon <steadmon@google.com> wrote:
> >
> > From: Phillip Wood <phillip.wood@dunelm.org.uk>
> 
> > +int test_assert(const char *location, const char *check, int ok)
> > +{
> > +       assert(ctx.running);
> > +
> > +       if (ctx.result == RESULT_SKIP) {
> > +               test_msg("skipping check '%s' at %s", check, location);
> > +               return 1;
> > +       } else if (!ctx.todo) {
> 
> I suggested removing the "else" and moving the "if (!ctx.todo) {" to
> its own line in the previous round and thought you agreed with that,
> but maybe it fell through the cracks somehow.
> 
> Anyway I think this is a minor nit, and the series looks good to me.

Ahh, sorry about that, I must have accidentally dropped a fixup patch at
some point. I'll correct that and send v10 soon.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v10 0/3] Add unit test framework and project plan
  2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
                     ` (6 preceding siblings ...)
  2023-11-01 23:31   ` [PATCH v9 " Josh Steadmon
@ 2023-11-09 18:50   ` Josh Steadmon
  2023-11-09 18:50     ` [PATCH v10 1/3] unit tests: Add a project plan document Josh Steadmon
                       ` (2 more replies)
  7 siblings, 3 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-09 18:50 UTC (permalink / raw)
  To: git; +Cc: gitster, phillip.wood123, oswald.buddenhagen, christian.couder

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This series begins with a project document covering our goals for adding
unit tests and a discussion of alternative frameworks considered, as
well as the features used to evaluate them. A rendered preview of this
doc can be found at [2]. It also adds Phillip Wood's TAP implemenation
(with some slightly re-worked Makefile rules) and a sample strbuf unit
test. Finally, we modify the configs for GitHub and Cirrus CI to run the
unit tests. Sample runs showing successful CI runs can be found at [3],
[4], and [5].

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc
[3] https://github.com/steadmon/git/actions/runs/5884659246/job/15959781385#step:4:1803
[4] https://github.com/steadmon/git/actions/runs/5884659246/job/15959938401#step:5:186
[5] https://cirrus-ci.com/task/6126304366428160 (unrelated tests failed,
    but note that t-strbuf ran successfully)

Changes in v10:
- Included a promised style cleanup in test-lib.c that was accidentally
  dropped in v9.

Changes in v9:
- Included some asciidoc cleanups suggested by Oswald Buddenhagen.
- Applied a style fixup that Coccinelle complained about.
- Applied some NULL-safety fixups.
- Used check_*() more widely in t-strbuf helper functions

Changes in v8:
- Flipped return values for TEST, TEST_TODO, and check_* macros &
  functions. This makes it easier to reason about control flow for
  patterns like:
    if (check(some_condition)) { ... }
- Moved unit test binaries to t/unit-tests/bin to simplify .gitignore
  patterns.
- Removed testing of some strbuf implementation details in t-strbuf.c

Changes in v7:
- Fix corrupt diff in patch #2, sorry for the noise.

Changes in v6:
- Officially recommend using Phillip Wood's TAP framework
- Add an example strbuf unit test using the TAP framework as well as
  Makefile integration
- Run unit tests in CI

Changes in v5:
- Add comparison point "License".
- Discuss feature priorities
- Drop frameworks:
  - Incompatible licenses: libtap, cmocka
  - Missing source: MyTAP
  - No TAP support: µnit, cmockery, cmockery2, Unity, minunit, CUnit
- Drop comparison point "Coverage reports": this can generally be
  handled by tools such as `gcov` regardless of the framework used.
- Drop comparison point "Inline tests": there didn't seem to be
  strong interest from reviewers for this feature.
- Drop comparison point "Scheduling / re-running": this was not
  supported by any of the main contenders, and is generally better
  handled by the harness rather than framework.
- Drop comparison point "Lazy test planning": this was supported by
  all frameworks that provide TAP output.

Changes in v4:
- Add link anchors for the framework comparison dimensions
- Explain "Partial" results for each dimension
- Use consistent dimension names in the section headers and comparison
  tables
- Add "Project KLOC", "Adoption", and "Inline tests" dimensions
- Fill in a few of the missing entries in the comparison table

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com


Josh Steadmon (2):
  unit tests: Add a project plan document
  ci: run unit tests in CI

Phillip Wood (1):
  unit tests: add TAP unit test framework

 .cirrus.yml                            |   2 +-
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 240 ++++++++++++++++++
 Makefile                               |  28 ++-
 ci/run-build-and-tests.sh              |   2 +
 ci/run-test-slice.sh                   |   5 +
 t/Makefile                             |  15 +-
 t/t0080-unit-test-output.sh            |  58 +++++
 t/unit-tests/.gitignore                |   1 +
 t/unit-tests/t-basic.c                 |  95 +++++++
 t/unit-tests/t-strbuf.c                | 120 +++++++++
 t/unit-tests/test-lib.c                | 330 +++++++++++++++++++++++++
 t/unit-tests/test-lib.h                | 149 +++++++++++
 13 files changed, 1041 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/technical/unit-tests.txt
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

Range-diff against v9:
-:  ---------- > 1:  f706ba9b68 unit tests: Add a project plan document
1:  8b831f4937 ! 2:  7a5e21bcff unit tests: add TAP unit test framework
    @@ t/unit-tests/test-lib.c (new)
     +	if (ctx.result == RESULT_SKIP) {
     +		test_msg("skipping check '%s' at %s", check, location);
     +		return 1;
    -+	} else if (!ctx.todo) {
    ++	}
    ++	if (!ctx.todo) {
     +		if (ok) {
     +			test_pass();
     +		} else {
2:  08d27bb5f9 = 3:  0129ec062c ci: run unit tests in CI

base-commit: a9e066fa63149291a55f383cfa113d8bdbdaa6b3
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply	[flat|nested] 67+ messages in thread

* [PATCH v10 1/3] unit tests: Add a project plan document
  2023-11-09 18:50   ` [PATCH v10 0/3] Add unit test framework and project plan Josh Steadmon
@ 2023-11-09 18:50     ` Josh Steadmon
  2023-11-09 23:15       ` Junio C Hamano
  2023-11-09 18:50     ` [PATCH v10 2/3] unit tests: add TAP unit test framework Josh Steadmon
  2023-11-09 18:50     ` [PATCH v10 3/3] ci: run unit tests in CI Josh Steadmon
  2 siblings, 1 reply; 67+ messages in thread
From: Josh Steadmon @ 2023-11-09 18:50 UTC (permalink / raw)
  To: git; +Cc: gitster, phillip.wood123, oswald.buddenhagen, christian.couder

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
comparison of several different frameworks. Finally, document our
rationale for implementing a custom framework.

Co-authored-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 240 +++++++++++++++++++++++++
 2 files changed, 241 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..206037ffb1
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,240 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+For now, we will evaluate projects solely on their framework features. Since we
+are relying on having TAP output (see below), we can assume that any framework
+can be made to work with a harness that we can choose later.
+
+
+== Summary
+
+We believe the best way forward is to implement a custom TAP framework for the
+Git project. We use a version of the framework originally proposed in
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
+
+See the <<framework-selection,Framework Selection>> section below for the
+rationale behind this decision.
+
+
+== Choosing a test harness
+
+During upstream discussion, it was occasionally noted that `prove` provides many
+convenient features, such as scheduling slower tests first, or re-running
+previously failed tests.
+
+While we already support the use of `prove` as a test harness for the shell
+tests, it is not strictly required. The t/Makefile allows running shell tests
+directly (though with interleaved output if parallelism is enabled). Git
+developers who wish to use `prove` as a more advanced harness can do so by
+setting DEFAULT_TEST_TARGET=prove in their config.mak.
+
+We will follow a similar approach for unit tests: by default the test
+executables will be run directly from the t/Makefile, but `prove` can be
+configured with DEFAULT_UNIT_TEST_TARGET=prove.
+
+
+[[framework-selection]]
+== Framework selection
+
+There are a variety of features we can use to rank the candidate frameworks, and
+those features have different priorities:
+
+* Critical features: we probably won't consider a framework without these
+** Can we legally / easily use the project?
+*** <<license,License>>
+*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
+*** <<maintainable-extensible,Maintainable / extensible>>
+*** <<major-platform-support,Major platform support>>
+** Does the project support our bare-minimum needs?
+*** <<tap-support,TAP support>>
+*** <<diagnostic-output,Diagnostic output>>
+*** <<runtime-skippable-tests,Runtime-skippable tests>>
+* Nice-to-have features:
+** <<parallel-execution,Parallel execution>>
+** <<mock-support,Mock support>>
+** <<signal-error-handling,Signal & error-handling>>
+* Tie-breaker stats
+** <<project-kloc,Project KLOC>>
+** <<adoption,Adoption>>
+
+[[license]]
+=== License
+
+We must be able to legally use the framework in connection with Git. As Git is
+licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
+projects.
+
+[[vendorable-or-ubiquitous]]
+=== Vendorable or ubiquitous
+
+We want to avoid forcing Git developers to install new tools just to run unit
+tests. Any prospective frameworks and harnesses must either be vendorable
+(meaning, we can copy their source directly into Git's repository), or so
+ubiquitous that it is reasonable to expect that most developers will have the
+tools installed already.
+
+[[maintainable-extensible]]
+=== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+In the comparison table below, "True" means that the framework seems to have
+active developers, that it is simple enough that Git developers can make changes
+to it, and that the project seems open to accepting external contributions (or
+that it is vendorable). "Partial" means that at least one of the above
+conditions holds.
+
+[[major-platform-support]]
+=== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+In the comparison table below, "True" means that it works on all three major
+platforms with no issues. "Partial" means that there may be annoyances on one or
+more platforms, but it is still usable in principle.
+
+[[tap-support]]
+=== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+In the comparison table below, "True" means this is natively supported.
+"Partial" means TAP output must be generated by post-processing the native
+output.
+
+Frameworks that do not have at least Partial support will not be evaluated
+further.
+
+[[diagnostic-output]]
+=== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+[[runtime-skippable-tests]]
+=== Runtime-skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+[[parallel-execution]]
+=== Parallel execution
+
+Ideally, we will build up a significant collection of unit test cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+In the comparison table below, "True" means that individual test cases within a
+single test executable can be run in parallel. We assume that executable-level
+parallelism can be handled by the test harness.
+
+[[mock-support]]
+=== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+[[signal-error-handling]]
+=== Signal & error handling
+
+The test framework should fail gracefully when test cases are themselves buggy
+or when they are interrupted by signals during runtime.
+
+[[project-kloc]]
+=== Project KLOC
+
+The size of the project, in thousands of lines of code as measured by
+https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
+1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
+
+[[adoption]]
+=== Adoption
+
+As a tie-breaker, we prefer a more widely-used project. We use the number of
+GitHub / GitLab stars to estimate this.
+
+
+=== Comparison
+
+:true: [lime-background]#True#
+:false: [red-background]#False#
+:partial: [yellow-background]#Partial#
+
+:gpl: [lime-background]#GPL v2#
+:isc: [lime-background]#ISC#
+:mit: [lime-background]#MIT#
+:expat: [lime-background]#Expat#
+:lgpl: [lime-background]#LGPL v2.1#
+
+:custom-impl: https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.]
+:greatest: https://github.com/silentbicycle/greatest[Greatest]
+:criterion: https://github.com/Snaipe/Criterion[Criterion]
+:c-tap: https://github.com/rra/c-tap-harness/[C TAP]
+:check: https://libcheck.github.io/check/[Check]
+
+[format="csv",options="header",width="33%",subs="specialcharacters,attributes,quotes,macros"]
+|=====
+Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime--skippable-tests,Runtime- skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
+{custom-impl},{gpl},{true},{true},{true},{true},{true},{true},{false},{false},{false},1,0
+{greatest},{isc},{true},{partial},{true},{partial},{true},{true},{false},{false},{false},3,1400
+{criterion},{mit},{false},{partial},{true},{true},{true},{true},{true},{false},{true},19,1800
+{c-tap},{expat},{true},{partial},{partial},{true},{false},{true},{false},{false},{false},4,33
+{check},{lgpl},{false},{partial},{true},{true},{true},{false},{false},{false},{true},17,973
+|=====
+
+=== Additional framework candidates
+
+Several suggested frameworks have been eliminated from consideration:
+
+* Incompatible licenses:
+** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
+** https://cmocka.org/[cmocka] (Apache 2.0)
+* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
+* No TAP support:
+** https://nemequ.github.io/munit/[µnit]
+** https://github.com/google/cmockery[cmockery]
+** https://github.com/lpabon/cmockery2[cmockery2]
+** https://github.com/ThrowTheSwitch/Unity[Unity]
+** https://github.com/siu/minunit[minunit]
+** https://cunit.sourceforge.net/[CUnit]
+
+
+== Milestones
+
+* Add useful tests of library-like code
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v10 2/3] unit tests: add TAP unit test framework
  2023-11-09 18:50   ` [PATCH v10 0/3] Add unit test framework and project plan Josh Steadmon
  2023-11-09 18:50     ` [PATCH v10 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-11-09 18:50     ` Josh Steadmon
  2023-11-09 18:50     ` [PATCH v10 3/3] ci: run unit tests in CI Josh Steadmon
  2 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-09 18:50 UTC (permalink / raw)
  To: git; +Cc: gitster, phillip.wood123, oswald.buddenhagen, christian.couder

From: Phillip Wood <phillip.wood@dunelm.org.uk>

This patch contains an implementation for writing unit tests with TAP
output. Each test is a function that contains one or more checks. The
test is run with the TEST() macro and if any of the checks fail then the
test will fail. A complete program that tests STRBUF_INIT would look
like

     #include "test-lib.h"
     #include "strbuf.h"

     static void t_static_init(void)
     {
             struct strbuf buf = STRBUF_INIT;

             check_uint(buf.len, ==, 0);
             check_uint(buf.alloc, ==, 0);
             check_char(buf.buf[0], ==, '\0');
     }

     int main(void)
     {
             TEST(t_static_init(), "static initialization works);

             return test_done();
     }

The output of this program would be

     ok 1 - static initialization works
     1..1

If any of the checks in a test fail then they print a diagnostic message
to aid debugging and the test will be reported as failing. For example a
failing integer check would look like

     # check "x >= 3" failed at my-test.c:102
     #    left: 2
     #   right: 3
     not ok 1 - x is greater than or equal to three

There are a number of check functions implemented so far. check() checks
a boolean condition, check_int(), check_uint() and check_char() take two
values to compare and a comparison operator. check_str() will check if
two strings are equal. Custom checks are simple to implement as shown in
the comments above test_assert() in test-lib.h.

Tests can be skipped with test_skip() which can be supplied with a
reason for skipping which it will print. Tests can print diagnostic
messages with test_msg().  Checks that are known to fail can be wrapped
in TEST_TODO().

There are a couple of example test programs included in this
patch. t-basic.c implements some self-tests and demonstrates the
diagnostic output for failing test. The output of this program is
checked by t0080-unit-test-output.sh. t-strbuf.c shows some example
unit tests for strbuf.c

The unit tests will be built as part of the default "make all" target,
to avoid bitrot. If you wish to build just the unit tests, you can run
"make build-unit-tests". To run the tests, you can use "make unit-tests"
or run the test binaries directly, as in "./t/unit-tests/bin/t-strbuf".

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 Makefile                    |  28 ++-
 t/Makefile                  |  15 +-
 t/t0080-unit-test-output.sh |  58 +++++++
 t/unit-tests/.gitignore     |   1 +
 t/unit-tests/t-basic.c      |  95 +++++++++++
 t/unit-tests/t-strbuf.c     | 120 +++++++++++++
 t/unit-tests/test-lib.c     | 330 ++++++++++++++++++++++++++++++++++++
 t/unit-tests/test-lib.h     | 149 ++++++++++++++++
 8 files changed, 792 insertions(+), 4 deletions(-)
 create mode 100755 t/t0080-unit-test-output.sh
 create mode 100644 t/unit-tests/.gitignore
 create mode 100644 t/unit-tests/t-basic.c
 create mode 100644 t/unit-tests/t-strbuf.c
 create mode 100644 t/unit-tests/test-lib.c
 create mode 100644 t/unit-tests/test-lib.h

diff --git a/Makefile b/Makefile
index e440728c24..18c13f06c0 100644
--- a/Makefile
+++ b/Makefile
@@ -682,6 +682,9 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests
+UNIT_TEST_BIN = $(UNIT_TEST_DIR)/bin
 
 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1331,6 +1334,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
 THIRD_PARTY_SOURCES += sha1collisiondetection/%
 THIRD_PARTY_SOURCES += sha1dc/%
 
+UNIT_TEST_PROGRAMS += t-basic
+UNIT_TEST_PROGRAMS += t-strbuf
+UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_BIN)/%$X,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
+UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
+
 # xdiff and reftable libs may in turn depend on what is in libgit.a
 GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
 EXTLIBS =
@@ -2672,6 +2681,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(UNIT_TEST_OBJS)
 
 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3167,7 +3177,7 @@ endif
 
 test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
 
-all:: $(TEST_PROGRAMS) $(test_bindir_programs)
+all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
 
 bin-wrappers/%: wrap-for-bin.sh
 	$(call mkdir_p_parent_template)
@@ -3592,7 +3602,7 @@ endif
 
 artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
-		$(MOFILES)
+		$(UNIT_TEST_PROGS) $(MOFILES)
 	$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
 		SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
 	test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3653,7 +3663,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3831,3 +3841,15 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
 
 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_BIN):
+	@mkdir -p $(UNIT_TEST_BIN)
+
+$(UNIT_TEST_PROGS): $(UNIT_TEST_BIN)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS $(UNIT_TEST_BIN)
+	$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
+		$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGS)
+unit-tests: $(UNIT_TEST_PROGS)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..75d9330437 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -17,6 +17,7 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+DEFAULT_UNIT_TEST_TARGET ?= unit-tests-raw
 TEST_LINT ?= test-lint
 
 ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(filter-out unit-tests/bin/t-basic%,$(wildcard unit-tests/bin/t-*)))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,17 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
+$(UNIT_TESTS):
+	@echo "*** $@ ***"; $@
+
+.PHONY: unit-tests unit-tests-raw unit-tests-prove
+unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
+
+unit-tests-raw: $(UNIT_TESTS)
+
+unit-tests-prove:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
@@ -149,4 +162,4 @@ perf:
 	$(MAKE) -C perf/ all
 
 .PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
-	check-chainlint clean-chainlint test-chainlint
+	check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)
diff --git a/t/t0080-unit-test-output.sh b/t/t0080-unit-test-output.sh
new file mode 100755
index 0000000000..961b54b06c
--- /dev/null
+++ b/t/t0080-unit-test-output.sh
@@ -0,0 +1,58 @@
+#!/bin/sh
+
+test_description='Test the output of the unit test framework'
+
+. ./test-lib.sh
+
+test_expect_success 'TAP output from unit tests' '
+	cat >expect <<-EOF &&
+	ok 1 - passing test
+	ok 2 - passing test and assertion return 1
+	# check "1 == 2" failed at t/unit-tests/t-basic.c:76
+	#    left: 1
+	#   right: 2
+	not ok 3 - failing test
+	ok 4 - failing test and assertion return 0
+	not ok 5 - passing TEST_TODO() # TODO
+	ok 6 - passing TEST_TODO() returns 1
+	# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
+	not ok 7 - failing TEST_TODO()
+	ok 8 - failing TEST_TODO() returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:30
+	# skipping test - missing prerequisite
+	# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
+	ok 9 - test_skip() # SKIP
+	ok 10 - skipped test returns 1
+	# skipping test - missing prerequisite
+	ok 11 - test_skip() inside TEST_TODO() # SKIP
+	ok 12 - test_skip() inside TEST_TODO() returns 1
+	# check "0" failed at t/unit-tests/t-basic.c:48
+	not ok 13 - TEST_TODO() after failing check
+	ok 14 - TEST_TODO() after failing check returns 0
+	# check "0" failed at t/unit-tests/t-basic.c:56
+	not ok 15 - failing check after TEST_TODO()
+	ok 16 - failing check after TEST_TODO() returns 0
+	# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
+	#    left: "\011hello\\\\"
+	#   right: "there\"\012"
+	# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
+	#    left: "NULL"
+	#   right: NULL
+	# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
+	#    left: ${SQ}a${SQ}
+	#   right: ${SQ}\012${SQ}
+	# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
+	#    left: ${SQ}\\\\${SQ}
+	#   right: ${SQ}\\${SQ}${SQ}
+	not ok 17 - messages from failing string and char comparison
+	# BUG: test has no checks at t/unit-tests/t-basic.c:91
+	not ok 18 - test with no checks
+	ok 19 - test with no checks returns 0
+	1..19
+	EOF
+
+	! "$GIT_BUILD_DIR"/t/unit-tests/bin/t-basic >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/unit-tests/.gitignore b/t/unit-tests/.gitignore
new file mode 100644
index 0000000000..5e56e040ec
--- /dev/null
+++ b/t/unit-tests/.gitignore
@@ -0,0 +1 @@
+/bin
diff --git a/t/unit-tests/t-basic.c b/t/unit-tests/t-basic.c
new file mode 100644
index 0000000000..fda1ae59a6
--- /dev/null
+++ b/t/unit-tests/t-basic.c
@@ -0,0 +1,95 @@
+#include "test-lib.h"
+
+/*
+ * The purpose of this "unit test" is to verify a few invariants of the unit
+ * test framework itself, as well as to provide examples of output from actually
+ * failing tests. As such, it is intended that this test fails, and thus it
+ * should not be run as part of `make unit-tests`. Instead, we verify it behaves
+ * as expected in the integration test t0080-unit-test-output.sh
+ */
+
+/* Used to store the return value of check_int(). */
+static int check_res;
+
+/* Used to store the return value of TEST(). */
+static int test_res;
+
+static void t_res(int expect)
+{
+	check_int(check_res, ==, expect);
+	check_int(test_res, ==, expect);
+}
+
+static void t_todo(int x)
+{
+	check_res = TEST_TODO(check(x));
+}
+
+static void t_skip(void)
+{
+	check(0);
+	test_skip("missing prerequisite");
+	check(1);
+}
+
+static int do_skip(void)
+{
+	test_skip("missing prerequisite");
+	return 1;
+}
+
+static void t_skip_todo(void)
+{
+	check_res = TEST_TODO(do_skip());
+}
+
+static void t_todo_after_fail(void)
+{
+	check(0);
+	TEST_TODO(check(0));
+}
+
+static void t_fail_after_todo(void)
+{
+	check(1);
+	TEST_TODO(check(0));
+	check(0);
+}
+
+static void t_messages(void)
+{
+	check_str("\thello\\", "there\"\n");
+	check_str("NULL", NULL);
+	check_char('a', ==, '\n');
+	check_char('\\', ==, '\'');
+}
+
+static void t_empty(void)
+{
+	; /* empty */
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
+	TEST(t_res(1), "passing test and assertion return 1");
+	test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
+	TEST(t_res(0), "failing test and assertion return 0");
+	test_res = TEST(t_todo(0), "passing TEST_TODO()");
+	TEST(t_res(1), "passing TEST_TODO() returns 1");
+	test_res = TEST(t_todo(1), "failing TEST_TODO()");
+	TEST(t_res(0), "failing TEST_TODO() returns 0");
+	test_res = TEST(t_skip(), "test_skip()");
+	TEST(check_int(test_res, ==, 1), "skipped test returns 1");
+	test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
+	TEST(t_res(1), "test_skip() inside TEST_TODO() returns 1");
+	test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
+	TEST(check_int(test_res, ==, 0), "TEST_TODO() after failing check returns 0");
+	test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
+	TEST(check_int(test_res, ==, 0), "failing check after TEST_TODO() returns 0");
+	TEST(t_messages(), "messages from failing string and char comparison");
+	test_res = TEST(t_empty(), "test with no checks");
+	TEST(check_int(test_res, ==, 0), "test with no checks returns 0");
+
+	return test_done();
+}
diff --git a/t/unit-tests/t-strbuf.c b/t/unit-tests/t-strbuf.c
new file mode 100644
index 0000000000..de434a4441
--- /dev/null
+++ b/t/unit-tests/t-strbuf.c
@@ -0,0 +1,120 @@
+#include "test-lib.h"
+#include "strbuf.h"
+
+/* wrapper that supplies tests with an empty, initialized strbuf */
+static void setup(void (*f)(struct strbuf*, void*), void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+}
+
+/* wrapper that supplies tests with a populated, initialized strbuf */
+static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, void *data)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	strbuf_addstr(&buf, init_str);
+	check_uint(buf.len, ==, strlen(init_str));
+	f(&buf, data);
+	strbuf_release(&buf);
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+}
+
+static int assert_sane_strbuf(struct strbuf *buf)
+{
+	/* Initialized strbufs should always have a non-NULL buffer */
+	if (!check(!!buf->buf))
+		return 0;
+	/* Buffers should always be NUL-terminated */
+	if (!check_char(buf->buf[buf->len], ==, '\0'))
+		return 0;
+	/*
+	 * Freshly-initialized strbufs may not have a dynamically allocated
+	 * buffer
+	 */
+	if (buf->len == 0 && buf->alloc == 0)
+		return 1;
+	/* alloc must be at least one byte larger than len */
+	return check_uint(buf->len, <, buf->alloc);
+}
+
+static void t_static_init(void)
+{
+	struct strbuf buf = STRBUF_INIT;
+
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, ==, 0);
+	check_char(buf.buf[0], ==, '\0');
+}
+
+static void t_dynamic_init(void)
+{
+	struct strbuf buf;
+
+	strbuf_init(&buf, 1024);
+	check(assert_sane_strbuf(&buf));
+	check_uint(buf.len, ==, 0);
+	check_uint(buf.alloc, >=, 1024);
+	check_char(buf.buf[0], ==, '\0');
+	strbuf_release(&buf);
+}
+
+static void t_addch(struct strbuf *buf, void *data)
+{
+	const char *p_ch = data;
+	const char ch = *p_ch;
+	size_t orig_alloc = buf->alloc;
+	size_t orig_len = buf->len;
+
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	strbuf_addch(buf, ch);
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	if (!(check_uint(buf->len, ==, orig_len + 1) &&
+	      check_uint(buf->alloc, >=, orig_alloc)))
+		return; /* avoid de-referencing buf->buf */
+	check_char(buf->buf[buf->len - 1], ==, ch);
+	check_char(buf->buf[buf->len], ==, '\0');
+}
+
+static void t_addstr(struct strbuf *buf, void *data)
+{
+	const char *text = data;
+	size_t len = strlen(text);
+	size_t orig_alloc = buf->alloc;
+	size_t orig_len = buf->len;
+
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	strbuf_addstr(buf, text);
+	if (!check(assert_sane_strbuf(buf)))
+		return;
+	if (!(check_uint(buf->len, ==, orig_len + len) &&
+	      check_uint(buf->alloc, >=, orig_alloc) &&
+	      check_uint(buf->alloc, >, orig_len + len) &&
+	      check_char(buf->buf[orig_len + len], ==, '\0')))
+	    return;
+	check_str(buf->buf + orig_len, text);
+}
+
+int cmd_main(int argc, const char **argv)
+{
+	if (!TEST(t_static_init(), "static initialization works"))
+		test_skip_all("STRBUF_INIT is broken");
+	TEST(t_dynamic_init(), "dynamic initialization works");
+	TEST(setup(t_addch, "a"), "strbuf_addch adds char");
+	TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
+	TEST(setup_populated(t_addch, "initial value", "a"),
+	     "strbuf_addch appends to initial value");
+	TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
+	TEST(setup_populated(t_addstr, "initial value", "hello there"),
+	     "strbuf_addstr appends string to initial value");
+
+	return test_done();
+}
diff --git a/t/unit-tests/test-lib.c b/t/unit-tests/test-lib.c
new file mode 100644
index 0000000000..a2cc21c706
--- /dev/null
+++ b/t/unit-tests/test-lib.c
@@ -0,0 +1,330 @@
+#include "test-lib.h"
+
+enum result {
+	RESULT_NONE,
+	RESULT_FAILURE,
+	RESULT_SKIP,
+	RESULT_SUCCESS,
+	RESULT_TODO
+};
+
+static struct {
+	enum result result;
+	int count;
+	unsigned failed :1;
+	unsigned lazy_plan :1;
+	unsigned running :1;
+	unsigned skip_all :1;
+	unsigned todo :1;
+} ctx = {
+	.lazy_plan = 1,
+	.result = RESULT_NONE,
+};
+
+static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
+{
+	fflush(stderr);
+	if (prefix)
+		fprintf(stdout, "%s", prefix);
+	vprintf(format, ap); /* TODO: handle newlines */
+	putc('\n', stdout);
+	fflush(stdout);
+}
+
+void test_msg(const char *format, ...)
+{
+	va_list ap;
+
+	va_start(ap, format);
+	msg_with_prefix("# ", format, ap);
+	va_end(ap);
+}
+
+void test_plan(int count)
+{
+	assert(!ctx.running);
+
+	fflush(stderr);
+	printf("1..%d\n", count);
+	fflush(stdout);
+	ctx.lazy_plan = 0;
+}
+
+int test_done(void)
+{
+	assert(!ctx.running);
+
+	if (ctx.lazy_plan)
+		test_plan(ctx.count);
+
+	return ctx.failed;
+}
+
+void test_skip(const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	if (format)
+		msg_with_prefix("# skipping test - ", format, ap);
+	va_end(ap);
+}
+
+void test_skip_all(const char *format, ...)
+{
+	va_list ap;
+	const char *prefix;
+
+	if (!ctx.count && ctx.lazy_plan) {
+		/* We have not printed a test plan yet */
+		prefix = "1..0 # SKIP ";
+		ctx.lazy_plan = 0;
+	} else {
+		/* We have already printed a test plan */
+		prefix = "Bail out! # ";
+		ctx.failed = 1;
+	}
+	ctx.skip_all = 1;
+	ctx.result = RESULT_SKIP;
+	va_start(ap, format);
+	msg_with_prefix(prefix, format, ap);
+	va_end(ap);
+}
+
+int test__run_begin(void)
+{
+	assert(!ctx.running);
+
+	ctx.count++;
+	ctx.result = RESULT_NONE;
+	ctx.running = 1;
+
+	return ctx.skip_all;
+}
+
+static void print_description(const char *format, va_list ap)
+{
+	if (format) {
+		fputs(" - ", stdout);
+		vprintf(format, ap);
+	}
+}
+
+int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
+{
+	va_list ap;
+
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	fflush(stderr);
+	va_start(ap, format);
+	if (!ctx.skip_all) {
+		switch (ctx.result) {
+		case RESULT_SUCCESS:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_FAILURE:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			break;
+
+		case RESULT_TODO:
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # TODO");
+			break;
+
+		case RESULT_SKIP:
+			printf("ok %d", ctx.count);
+			print_description(format, ap);
+			printf(" # SKIP");
+			break;
+
+		case RESULT_NONE:
+			test_msg("BUG: test has no checks at %s", location);
+			printf("not ok %d", ctx.count);
+			print_description(format, ap);
+			ctx.result = RESULT_FAILURE;
+			break;
+		}
+	}
+	va_end(ap);
+	ctx.running = 0;
+	if (ctx.skip_all)
+		return 1;
+	putc('\n', stdout);
+	fflush(stdout);
+	ctx.failed |= ctx.result == RESULT_FAILURE;
+
+	return ctx.result != RESULT_FAILURE;
+}
+
+static void test_fail(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	ctx.result = RESULT_FAILURE;
+}
+
+static void test_pass(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result == RESULT_NONE)
+		ctx.result = RESULT_SUCCESS;
+}
+
+static void test_todo(void)
+{
+	assert(ctx.result != RESULT_SKIP);
+
+	if (ctx.result != RESULT_FAILURE)
+		ctx.result = RESULT_TODO;
+}
+
+int test_assert(const char *location, const char *check, int ok)
+{
+	assert(ctx.running);
+
+	if (ctx.result == RESULT_SKIP) {
+		test_msg("skipping check '%s' at %s", check, location);
+		return 1;
+	}
+	if (!ctx.todo) {
+		if (ok) {
+			test_pass();
+		} else {
+			test_msg("check \"%s\" failed at %s", check, location);
+			test_fail();
+		}
+	}
+
+	return !!ok;
+}
+
+void test__todo_begin(void)
+{
+	assert(ctx.running);
+	assert(!ctx.todo);
+
+	ctx.todo = 1;
+}
+
+int test__todo_end(const char *location, const char *check, int res)
+{
+	assert(ctx.running);
+	assert(ctx.todo);
+
+	ctx.todo = 0;
+	if (ctx.result == RESULT_SKIP)
+		return 1;
+	if (res) {
+		test_msg("todo check '%s' succeeded at %s", check, location);
+		test_fail();
+	} else {
+		test_todo();
+	}
+
+	return !res;
+}
+
+int check_bool_loc(const char *loc, const char *check, int ok)
+{
+	return test_assert(loc, check, ok);
+}
+
+union test__tmp test__tmp[2];
+
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		test_msg("   left: %"PRIdMAX, a);
+		test_msg("  right: %"PRIdMAX, b);
+	}
+
+	return ret;
+}
+
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		test_msg("   left: %"PRIuMAX, a);
+		test_msg("  right: %"PRIuMAX, b);
+	}
+
+	return ret;
+}
+
+static void print_one_char(char ch, char quote)
+{
+	if ((unsigned char)ch < 0x20u || ch == 0x7f) {
+		/* TODO: improve handling of \a, \b, \f ... */
+		printf("\\%03o", (unsigned char)ch);
+	} else {
+		if (ch == '\\' || ch == quote)
+			putc('\\', stdout);
+		putc(ch, stdout);
+	}
+}
+
+static void print_char(const char *prefix, char ch)
+{
+	printf("# %s: '", prefix);
+	print_one_char(ch, '\'');
+	fputs("'\n", stdout);
+}
+
+int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
+{
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		fflush(stderr);
+		print_char("   left", a);
+		print_char("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
+
+static void print_str(const char *prefix, const char *str)
+{
+	printf("# %s: ", prefix);
+	if (!str) {
+		fputs("NULL\n", stdout);
+	} else {
+		putc('"', stdout);
+		while (*str)
+			print_one_char(*str++, '"');
+		fputs("\"\n", stdout);
+	}
+}
+
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b)
+{
+	int ok = (!a && !b) || (a && b && !strcmp(a, b));
+	int ret = test_assert(loc, check, ok);
+
+	if (!ret) {
+		fflush(stderr);
+		print_str("   left", a);
+		print_str("  right", b);
+		fflush(stdout);
+	}
+
+	return ret;
+}
diff --git a/t/unit-tests/test-lib.h b/t/unit-tests/test-lib.h
new file mode 100644
index 0000000000..a8f07ae0b7
--- /dev/null
+++ b/t/unit-tests/test-lib.h
@@ -0,0 +1,149 @@
+#ifndef TEST_LIB_H
+#define TEST_LIB_H
+
+#include "git-compat-util.h"
+
+/*
+ * Run a test function, returns 1 if the test succeeds, 0 if it
+ * fails. If test_skip_all() has been called then the test will not be
+ * run. The description for each test should be unique. For example:
+ *
+ *  TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
+ */
+#define TEST(t, ...)					\
+	test__run_end(test__run_begin() ? 0 : (t, 1),	\
+		      TEST_LOCATION(),  __VA_ARGS__)
+
+/*
+ * Print a test plan, should be called before any tests. If the number
+ * of tests is not known in advance test_done() will automatically
+ * print a plan at the end of the test program.
+ */
+void test_plan(int count);
+
+/*
+ * test_done() must be called at the end of main(). It will print the
+ * plan if plan() was not called at the beginning of the test program
+ * and returns the exit code for the test program.
+ */
+int test_done(void);
+
+/* Skip the current test. */
+__attribute__((format (printf, 1, 2)))
+void test_skip(const char *format, ...);
+
+/* Skip all remaining tests. */
+__attribute__((format (printf, 1, 2)))
+void test_skip_all(const char *format, ...);
+
+/* Print a diagnostic message to stdout. */
+__attribute__((format (printf, 1, 2)))
+void test_msg(const char *format, ...);
+
+/*
+ * Test checks are built around test_assert(). checks return 1 on
+ * success, 0 on failure. If any check fails then the test will fail. To
+ * create a custom check define a function that wraps test_assert() and
+ * a macro to wrap that function to provide a source location and
+ * stringified arguments. Custom checks that take pointer arguments
+ * should be careful to check that they are non-NULL before
+ * dereferencing them. For example:
+ *
+ *  static int check_oid_loc(const char *loc, const char *check,
+ *			     struct object_id *a, struct object_id *b)
+ *  {
+ *	    int res = test_assert(loc, check, a && b && oideq(a, b));
+ *
+ *	    if (!res) {
+ *		    test_msg("   left: %s", a ? oid_to_hex(a) : "NULL";
+ *		    test_msg("  right: %s", b ? oid_to_hex(a) : "NULL";
+ *
+ *	    }
+ *	    return res;
+ *  }
+ *
+ *  #define check_oid(a, b) \
+ *	    check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
+ */
+int test_assert(const char *location, const char *check, int ok);
+
+/* Helper macro to pass the location to checks */
+#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
+
+/* Check a boolean condition. */
+#define check(x)				\
+	check_bool_loc(TEST_LOCATION(), #x, x)
+int check_bool_loc(const char *loc, const char *check, int ok);
+
+/*
+ * Compare two integers. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_int(a, op, b)						\
+	(test__tmp[0].i = (a), test__tmp[1].i = (b),			\
+	 check_int_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+		       test__tmp[0].i op test__tmp[1].i,		\
+		       test__tmp[0].i, test__tmp[1].i))
+int check_int_loc(const char *loc, const char *check, int ok,
+		  intmax_t a, intmax_t b);
+
+/*
+ * Compare two unsigned integers. Prints a message with the two values
+ * if the comparison fails. NB this is not thread safe.
+ */
+#define check_uint(a, op, b)						\
+	(test__tmp[0].u = (a), test__tmp[1].u = (b),			\
+	 check_uint_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].u op test__tmp[1].u,		\
+			test__tmp[0].u, test__tmp[1].u))
+int check_uint_loc(const char *loc, const char *check, int ok,
+		   uintmax_t a, uintmax_t b);
+
+/*
+ * Compare two chars. Prints a message with the two values if the
+ * comparison fails. NB this is not thread safe.
+ */
+#define check_char(a, op, b)						\
+	(test__tmp[0].c = (a), test__tmp[1].c = (b),			\
+	 check_char_loc(TEST_LOCATION(), #a" "#op" "#b,			\
+			test__tmp[0].c op test__tmp[1].c,		\
+			test__tmp[0].c, test__tmp[1].c))
+int check_char_loc(const char *loc, const char *check, int ok,
+		   char a, char b);
+
+/* Check whether two strings are equal. */
+#define check_str(a, b)							\
+	check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
+int check_str_loc(const char *loc, const char *check,
+		  const char *a, const char *b);
+
+/*
+ * Wrap a check that is known to fail. If the check succeeds then the
+ * test will fail. Returns 1 if the check fails, 0 if it
+ * succeeds. For example:
+ *
+ *  TEST_TODO(check(0));
+ */
+#define TEST_TODO(check) \
+	(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
+
+/* Private helpers */
+
+#define TEST__STR(x) #x
+#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
+
+union test__tmp {
+	intmax_t i;
+	uintmax_t u;
+	char c;
+};
+
+extern union test__tmp test__tmp[2];
+
+int test__run_begin(void);
+__attribute__((format (printf, 3, 4)))
+int test__run_end(int, const char *, const char *, ...);
+void test__todo_begin(void);
+int test__todo_end(const char *, const char *, int);
+
+#endif /* TEST_LIB_H */
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v10 3/3] ci: run unit tests in CI
  2023-11-09 18:50   ` [PATCH v10 0/3] Add unit test framework and project plan Josh Steadmon
  2023-11-09 18:50     ` [PATCH v10 1/3] unit tests: Add a project plan document Josh Steadmon
  2023-11-09 18:50     ` [PATCH v10 2/3] unit tests: add TAP unit test framework Josh Steadmon
@ 2023-11-09 18:50     ` Josh Steadmon
  2 siblings, 0 replies; 67+ messages in thread
From: Josh Steadmon @ 2023-11-09 18:50 UTC (permalink / raw)
  To: git; +Cc: gitster, phillip.wood123, oswald.buddenhagen, christian.couder

Run unit tests in both Cirrus and GitHub CI. For sharded CI instances
(currently just Windows on GitHub), run only on the first shard. This is
OK while we have only a single unit test executable, but we may wish to
distribute tests more evenly when we add new unit tests in the future.

We may also want to add more status output in our unit test framework,
so that we can do similar post-processing as in
ci/lib.sh:handle_failed_tests().

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
 .cirrus.yml               | 2 +-
 ci/run-build-and-tests.sh | 2 ++
 ci/run-test-slice.sh      | 5 +++++
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index 4860bebd32..b6280692d2 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -19,4 +19,4 @@ freebsd_12_task:
   build_script:
     - su git -c gmake
   test_script:
-    - su git -c 'gmake test'
+    - su git -c 'gmake DEFAULT_UNIT_TEST_TARGET=unit-tests-prove test unit-tests'
diff --git a/ci/run-build-and-tests.sh b/ci/run-build-and-tests.sh
index 2528f25e31..7a1466b868 100755
--- a/ci/run-build-and-tests.sh
+++ b/ci/run-build-and-tests.sh
@@ -50,6 +50,8 @@ if test -n "$run_tests"
 then
 	group "Run tests" make test ||
 	handle_failed_tests
+	group "Run unit tests" \
+		make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests
 fi
 check_unignored_build_artifacts
 
diff --git a/ci/run-test-slice.sh b/ci/run-test-slice.sh
index a3c67956a8..ae8094382f 100755
--- a/ci/run-test-slice.sh
+++ b/ci/run-test-slice.sh
@@ -15,4 +15,9 @@ group "Run tests" make --quiet -C t T="$(cd t &&
 	tr '\n' ' ')" ||
 handle_failed_tests
 
+# We only have one unit test at the moment, so run it in the first slice
+if [ "$1" == "0" ] ; then
+	group "Run unit tests" make --quiet -C t unit-tests-prove
+fi
+
 check_unignored_build_artifacts
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v10 1/3] unit tests: Add a project plan document
  2023-11-09 18:50     ` [PATCH v10 1/3] unit tests: Add a project plan document Josh Steadmon
@ 2023-11-09 23:15       ` Junio C Hamano
  0 siblings, 0 replies; 67+ messages in thread
From: Junio C Hamano @ 2023-11-09 23:15 UTC (permalink / raw)
  To: Josh Steadmon; +Cc: git, phillip.wood123, oswald.buddenhagen, christian.couder

Josh Steadmon <steadmon@google.com> writes:

> In our current testing environment, we spend a significant amount of
> effort crafting end-to-end tests for error conditions that could easily
> be captured by unit tests (or we simply forgo some hard-to-setup and
> rare error conditions). Describe what we hope to accomplish by
> implementing unit tests, and explain some open questions and milestones.
> Discuss desired features for test frameworks/harnesses, and provide a
> comparison of several different frameworks. Finally, document our
> rationale for implementing a custom framework.
>
> Co-authored-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Josh Steadmon <steadmon@google.com>
> ---
>  Documentation/Makefile                 |   1 +
>  Documentation/technical/unit-tests.txt | 240 +++++++++++++++++++++++++
>  2 files changed, 241 insertions(+)
>  create mode 100644 Documentation/technical/unit-tests.txt

Looks good.  I'll downcase "Add" on the title to match what I have
in my tree, but otherwise it looks OK to me.  Let's see if we can
mark this round ready for 'next' and check if we hear complaints.

I have to make sure I do not forget about the other topic that
builds on top of this one.

Thanks.


^ permalink raw reply	[flat|nested] 67+ messages in thread

end of thread, other threads:[~2023-11-09 23:15 UTC | newest]

Thread overview: 67+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20230517-unit-tests-v2-v2-0-8c1b50f75811@google.com>
2023-06-30 22:51 ` [PATCH v4] unit tests: Add a project plan document Josh Steadmon
2023-07-01  0:42   ` Junio C Hamano
2023-07-01  1:03   ` Junio C Hamano
2023-08-07 23:07   ` [PATCH v5] " Josh Steadmon
2023-08-14 13:29     ` Phillip Wood
2023-08-15 22:55       ` Josh Steadmon
2023-08-17  9:05         ` Phillip Wood
2023-08-16 23:50   ` [PATCH v6 0/3] Add unit test framework and project plan MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Josh Steadmon
2023-08-16 23:50     ` [PATCH v6 1/3] unit tests: Add a project plan document Josh Steadmon
2023-08-16 23:50     ` [PATCH v6 2/3] unit tests: add TAP unit test framework Josh Steadmon
2023-08-17  0:12       ` Junio C Hamano
2023-08-17  0:41         ` Junio C Hamano
2023-08-17 18:34           ` Josh Steadmon
2023-08-16 23:50     ` [PATCH v6 3/3] ci: run unit tests in CI Josh Steadmon
2023-08-17 18:37   ` [PATCH v7 0/3] Add unit test framework and project plan Josh Steadmon
2023-08-17 18:37     ` [PATCH v7 1/3] unit tests: Add a project plan document Josh Steadmon
2023-08-17 18:37     ` [PATCH v7 2/3] unit tests: add TAP unit test framework Josh Steadmon
2023-08-18  0:12       ` Junio C Hamano
2023-09-22 20:05         ` Junio C Hamano
2023-09-24 13:57           ` phillip.wood123
2023-09-25 18:57             ` Junio C Hamano
2023-10-06 22:58             ` Josh Steadmon
2023-10-09 17:37         ` Josh Steadmon
2023-08-17 18:37     ` [PATCH v7 3/3] ci: run unit tests in CI Josh Steadmon
2023-08-17 20:38     ` [PATCH v7 0/3] Add unit test framework and project plan Junio C Hamano
2023-08-24 20:11     ` Josh Steadmon
2023-09-13 18:14       ` Junio C Hamano
2023-10-09 22:21   ` [PATCH v8 " Josh Steadmon
2023-10-09 22:21     ` [PATCH v8 1/3] unit tests: Add a project plan document Josh Steadmon
2023-10-10  8:57       ` Oswald Buddenhagen
2023-10-11 21:14         ` Josh Steadmon
2023-10-11 23:05           ` Oswald Buddenhagen
2023-11-01 17:31             ` Josh Steadmon
2023-10-27 20:12       ` Christian Couder
2023-11-01 17:47         ` Josh Steadmon
2023-11-01 23:49           ` Junio C Hamano
2023-10-09 22:21     ` [PATCH v8 2/3] unit tests: add TAP unit test framework Josh Steadmon
2023-10-11 21:42       ` Junio C Hamano
2023-10-16 13:43       ` [PATCH v8 2.5/3] fixup! " Phillip Wood
2023-10-16 16:41         ` Junio C Hamano
2023-11-01 17:54           ` Josh Steadmon
2023-11-01 23:48             ` Junio C Hamano
2023-11-01 17:54         ` Josh Steadmon
2023-11-01 23:49           ` Junio C Hamano
2023-10-27 20:15       ` [PATCH v8 2/3] " Christian Couder
2023-11-01 22:54         ` Josh Steadmon
2023-10-09 22:21     ` [PATCH v8 3/3] ci: run unit tests in CI Josh Steadmon
2023-10-09 23:50     ` [PATCH v8 0/3] Add unit test framework and project plan Junio C Hamano
2023-10-19 15:21       ` [PATCH 0/3] CMake unit test fixups Phillip Wood
2023-10-19 15:21         ` [PATCH 1/3] fixup! cmake: also build unit tests Phillip Wood
2023-10-19 15:21         ` [PATCH 2/3] fixup! artifacts-tar: when including `.dll` files, don't forget the unit-tests Phillip Wood
2023-10-19 15:21         ` [PATCH 3/3] fixup! cmake: handle also unit tests Phillip Wood
2023-10-19 19:19         ` [PATCH 0/3] CMake unit test fixups Junio C Hamano
2023-10-16 10:07     ` [PATCH v8 0/3] Add unit test framework and project plan phillip.wood123
2023-11-01 23:09       ` Josh Steadmon
2023-10-27 20:26     ` Christian Couder
2023-11-01 23:31   ` [PATCH v9 " Josh Steadmon
2023-11-01 23:31     ` [PATCH v9 1/3] unit tests: Add a project plan document Josh Steadmon
2023-11-01 23:31     ` [PATCH v9 2/3] unit tests: add TAP unit test framework Josh Steadmon
2023-11-03 21:54       ` Christian Couder
2023-11-09 17:51         ` Josh Steadmon
2023-11-01 23:31     ` [PATCH v9 3/3] ci: run unit tests in CI Josh Steadmon
2023-11-09 18:50   ` [PATCH v10 0/3] Add unit test framework and project plan Josh Steadmon
2023-11-09 18:50     ` [PATCH v10 1/3] unit tests: Add a project plan document Josh Steadmon
2023-11-09 23:15       ` Junio C Hamano
2023-11-09 18:50     ` [PATCH v10 2/3] unit tests: add TAP unit test framework Josh Steadmon
2023-11-09 18:50     ` [PATCH v10 3/3] ci: run unit tests in CI Josh Steadmon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).